While there is no uniformly accepted definition of predatory lending, a number of practices are widely acknowledged to be predatory. These include, among other things, charging excessive fees and interest rates, lending without regard to borrowers’ ability to repay, refinancing borrowers’ loans repeatedly over a short period of time without any economic gain for the borrower (referred to as “loan flipping”), and committing outright fraud or deception—for example, falsifying documents or intentionally misinforming borrowers about the terms of a loan. These types of practices offer lenders that originate predatory loans potentially high returns even if borrowers default, because many of these loans require excessive up-front fees.

No comprehensive data are available on the incidence of these practices, but banking regulators, consumer advocates, and industry participants generally agree that predatory loans are most likely to occur in the market for “subprime” loans. The subprime market serves borrowers who have limited incomes or poor or no credit histories, in contrast with the prime market, which encompasses traditional lenders and borrowers with credit histories that put them at low risk of default. Subprime lending is not inherently abusive, and, according to officials at the Department of Housing and Urban Development (HUD) and the Department of the Treasury, the emergence of a subprime mortgage market has enabled a whole class of credit-impaired borrowers to buy homes or access the equity in their homes. Originators of subprime loans most often are mortgage and consumer finance companies but can also be banks, thrifts, and other institutions.

Serious data limitations make the extent of predatory lending difficult to determine. However, there have been a number of major settlements resulting from government enforcement actions or private party lawsuits in the last 5 years that have accused lenders of abusive practices affecting large numbers of borrowers. For example, in October 2002, Household International, a large home mortgage lender, agreed to pay up to $484 million to homeowners to settle states’ allegations that it used unfair and deceptive lending practices to make mortgage loans with excessive interest and fees. In addition, the rate of foreclosures of subprime loans has increased substantially since 1990, far exceeding the rate of increase for subprime originations. Some consumer groups and industry observers have attributed this development, at least in part, to an increase in abusive lending, particularly loans made without regard to borrowers’ ability to repay. Additionally, groups such as legal services agencies have reported seeing an ever-growing number of consumers, particularly the elderly and minorities, who are in danger of losing their homes as a result of predatory lending practices.

As shown in figure 1, Congress has passed numerous laws that federal agencies and regulators have used to combat predatory lending. Among the most frequently used laws—the Home Ownership and Equity Protection Act (HOEPA), the Federal Trade Commission Act, the Truth in Lending Act (TILA), and the Real Estate Settlement Procedures Act (RESPA)—only HOEPA was specifically designed to address predatory lending. Enacted in 1994, HOEPA places restrictions on certain high-cost loans, including limits on prepayment penalties and balloon payments and prohibitions against negative amortization. However, HOEPA covers only loans that exceed certain rate or fee triggers, and although comprehensive data are lacking, it appears that HOEPA covers only a limited portion of all subprime loans.
The Federal Trade Commission Act, enacted in 1914 and amended on numerous occasions, authorizes the Federal Trade Commission (FTC) to prohibit and take action against unfair or deceptive acts or practices in or affecting commerce. TILA and RESPA are designed in part to provide consumers with accurate information about the cost of credit. Other federal laws that have been used to address predatory lending practices include criminal fraud statutes that prohibit certain types of fraud sometimes used in abusive lending schemes, such as forgery and false statements. Also, the Fair Housing Act and Equal Credit Opportunity Act—which prohibit discrimination in housing-related transactions and the extension of credit, respectively—have been used in cases against abusive lenders that have targeted certain protected groups.

Using these or other authorities, federal agencies have taken a number of enforcement actions and other steps, such as issuing guidance and revising regulations. Among federal agencies, FTC has a prominent role in combating predatory lending because of its responsibilities in implementing and enforcing certain federal laws among lending institutions that are not depository institutions supervised by federal banking regulators. FTC reported that it has filed 19 complaints—17 since 1998—alleging deceptive or other illegal practices by mortgage lenders or brokers and that some actions have resulted in multimillion dollar settlements. The Department of Justice, which is responsible for enforcing certain federal civil rights laws, has taken two such enforcement actions related to predatory mortgage lending practices and has taken an additional action on behalf of FTC. HUD has undertaken enforcement activities related to abusive lending that focus primarily on reducing losses to the Federal Housing Administration insurance fund. It has also taken three enforcement actions in abusive mortgage lending cases for violations of RESPA’s prohibitions on certain types of fees.

Federal banking regulators have stated that their monitoring and examination activities have uncovered little evidence of predatory lending in federally regulated depository institutions. Four of the five federal banking regulators reported taking no formal enforcement actions involving predatory mortgage lending, while the fifth—the Office of the Comptroller of the Currency—reported that it has taken one formal enforcement action against a bank engaged in abusive mortgage lending. Regulators noted that they have taken informal enforcement actions to address questionable practices raised during the examination process and required their institutions to take corrective actions.

The banking regulators have also issued guidance to the institutions they supervise on avoiding direct or indirect involvement in predatory lending. In addition, in 2001 the Federal Reserve Board (the Board) made changes to its regulations implementing HOEPA that, among other things, increase the number of loans HOEPA covers. The Board also made changes to its regulations implementing the Home Mortgage Disclosure Act (HMDA) in 2002 that make it easier to analyze potential patterns of predatory lending. Federal agencies and banking regulators have coordinated their efforts to address predatory lending on certain occasions through participation in interagency working groups and through joint enforcement actions.
For example, FTC, the Department of Justice, and HUD coordinated to take an enforcement action against Delta Funding Corporation, with each agency investigating and bringing actions for violations of the laws within its jurisdiction.

Issues related to federal oversight and regulation of certain nonbank mortgage lenders may challenge efforts to combat predatory lending. Nonbank mortgage lending companies owned by financial or bank holding companies (i.e., nonbank mortgage lending subsidiaries) account for an estimated 24 percent of subprime loan originations, according to HUD, and some have been the target of notable federal and state enforcement actions involving allegations of abusive lending. The Board may be better equipped than FTC to monitor and examine these holding company subsidiaries because of its role in overseeing financial and bank holding companies, but the Board does not have clear authority to do so.

Our report recommends that Congress consider (1) making appropriate statutory changes that would grant the Board the authority to routinely monitor and, as necessary, examine the nonbank mortgage lending subsidiaries of financial and bank holding companies for compliance with federal consumer protection laws applicable to predatory lending practices and (2) giving the Board specific authority to initiate enforcement actions under those laws against these nonbank mortgage lending subsidiaries. In commenting on our report, the Board stated that while the existing structure has not been a barrier to Federal Reserve oversight, the approach we recommended for consideration by the Congress would likely be useful for catching some abusive practices that might not be caught otherwise. The Board also noted that the approach would present tradeoffs, such as different supervisory schemes being applied to nonbank mortgage lenders based on whether or not they are part of a holding company, and additional costs. However, these nonbank mortgage lenders are already subject to a different supervisory scheme than other lenders. We agree that costs could increase and believe that Congress should consider both the potential costs and benefits of clarifying the Board’s authorities.

In response to concerns about the growth of predatory lending and the limitations of existing laws, 25 states, the District of Columbia, and 11 localities have passed their own laws addressing predatory lending practices, according to a database that tracks such laws. Most of these laws regulate and restrict the terms and characteristics of high-cost loans—that is, loans that exceed certain rate or fee thresholds. While some state statutes follow the thresholds for covered loans established in HOEPA, many set lower thresholds in order to cover more loans than the federal statute. The statutes vary, but they generally cover a variety of predatory practices, such as balloon payments and prepayment penalties, and some include restrictions on such things as mandatory arbitration clauses that can restrict borrowers’ ability to obtain legal redress through the courts. Some states have also increased the regulation of and licensing requirements for mortgage lenders and brokers, in part to address concerns that some unscrupulous lenders and brokers have been responsible for lending abuses and that these entities have not been adequately regulated.
For example, some states have added educational requirements that lenders and brokers must meet in order to obtain a license. In recent years, state law enforcement agencies and banking regulators have also taken a number of actions against mortgage lenders involving predatory lending. For example, an official from Washington State’s Department of Financial Institutions reported that the department had taken several enforcement actions to address predatory lending, including one that resulted in a lender being ordered to return more than $700,000 to 120 Washington borrowers for allegedly deceiving them and charging prohibited fees.

Three federal banking regulators—the National Credit Union Administration, the Office of the Comptroller of the Currency, and the Office of Thrift Supervision—have issued opinions stating that federal laws preempt some state predatory lending laws for the institutions that they regulate. The regulators note that such preemption creates a more uniform regulatory framework, relieves lending institutions of the burden of complying with a hodgepodge of state and federal laws, and avoids state laws that may restrict legitimate lending activities. State officials and consumer advocates who oppose preemption argue that federal laws do not effectively protect consumers against predatory lending practices and that federal regulators do not devote sufficient resources toward enforcement of consumer protection laws for the institutions they oversee. In response, federal banking regulators have noted that federally supervised institutions are highly regulated and subject to comprehensive supervision. The regulators also said they found little to no evidence of predatory lending by the institutions they regulate.

Consistent observational and anecdotal evidence, along with some limited data, indicates that, for a variety of reasons, elderly homeowners are disproportionately the targets of predatory lending. Because older homeowners, on average, have more equity in their homes than younger homeowners, abusive lenders could be expected to target these borrowers in order to “strip” the equity from their homes. According to federal officials and consumer groups we contacted, abusive lenders often try to convince elderly borrowers to repeatedly refinance their loans, adding more costs each time—an abuse known as loan flipping. In addition, some brokers and lenders aggressively market home equity loans as a source of cash, particularly for older homeowners who may have limited incomes but require funds for major home repairs or medical expenses. The financial losses older people can suffer as a result of abusive loans can result in the loss of independence and security and a significant decline in their quality of life.

A number of factors may make the elderly particularly susceptible to predatory lending practices. For example:

- Diseases and physical impairments associated with aging—such as declining vision, hearing, or mobility—can restrict elderly consumers’ ability to access financial information and compare credit terms. In such situations, potential borrowers may be susceptible to the first lender to offer what seems to be a good deal, especially if the lender is willing to visit them at home or provide transportation to the closing.

- Some older people may have diminished cognitive capacity, which can impair their ability to comprehend and make informed judgments on financial issues. According to a report sponsored by the National Academy of Sciences, elderly people may be more likely to have conditions or disabilities that make them easy targets for financial abuse, and they may have diminished capacity to evaluate proposed courses of action. Representatives of legal aid organizations have said that they frequently represent elderly clients in predatory lending cases involving lenders that have taken advantage of a borrower’s confusion and, in some cases, dementia.

- Several advocacy groups have noted that some elderly people lack social and family support systems, potentially increasing their susceptibility to unscrupulous lenders who may market loans by making home visits or offering other personal contact.

- Elderly homeowners often live in older homes and are more likely to need someone to do repairs for them. Federal officials, legal aid services, and consumer groups have reported that home repair scams targeting elderly homeowners are particularly common. For example, a joint report on predatory lending by HUD and the Department of the Treasury noted that predatory brokers and home improvement contractors have collaborated to swindle older consumers. A contractor may come to a homeowner’s door, pressure the homeowner into accepting a home improvement contract, and arrange for financing of the work with a high-cost loan. The contractor then does shoddy work or does not finish the agreed-on repairs, leaving the borrower to pay off the expensive loan.

Federal agencies, states, nonprofits, and trade organizations have conducted and funded financial education for consumers as a means of improving consumers’ financial literacy and, in some cases, raising consumers’ awareness of predatory lending practices. Because the elderly may be more susceptible to predatory lending, government agencies and consumer advocacy organizations have focused some of their education efforts on this population. For example, the Department of Justice offers on its Web site the guide “Financial Crimes Against the Elderly,” which includes references to predatory lending. The Department of Health and Human Services’ Administration on Aging provides grants to state and nonprofit agencies for programs aimed at preventing elder abuse, including predatory lending practices targeting older consumers. AARP, which represents Americans age 50 and over, sponsors a number of financial education efforts, including a borrower’s kit that contains tips for avoiding predatory lending.

However, federal consumer protection and fair lending laws that have been used to address predatory lending do not generally have provisions specific to elderly persons. For example, age is not a protected class under the Fair Housing Act, which prohibits discrimination in housing-related transactions. In addition, HMDA—which requires certain financial institutions to collect, report, and disclose data on loan applications and originations—does not require lenders to report information about the age of the applicant or borrower. An exception is the Equal Credit Opportunity Act, which prohibits unlawful discrimination on the basis of age in connection with any aspect of a credit transaction.

Little comprehensive data exist on the ages of consumers involved in federal and state enforcement actions and private class-action lawsuits involving predatory lending.
Such actions generally seek to provide redress to large groups of consumers, but a few cases have involved allegations of predatory lending targeting elderly borrowers. For example, FTC, six states, AARP, and private plaintiffs settled a case with First Alliance Mortgage Company in March 2002 for more than $60 million. The company was accused of using misrepresentation and unfair and deceptive practices to lure senior citizens and those with poor credit histories into entering into abusive loans; an estimated 28 percent of the 8,712 borrowers represented in the class-action suit were elderly.

Some nonprofit groups—such as AARP Foundation Litigation, the National Consumer Law Center, and the South Brooklyn Legal Services’ Foreclosure Prevention Project—provide legal services that focus, in part, on helping elderly victims of predatory lending. AARP Foundation Litigation, which conducts litigation to benefit Americans 50 years and older, has been party to seven lawsuits since 1998 involving allegations of predatory lending against more than 50,000 elderly borrowers. Six of these suits have been settled, and the other is pending.

While representatives of the mortgage lending industry and consumer groups have noted that financial education may make some consumers less susceptible to abusive lending practices, GAO’s review of literature and interviews with consumer and federal officials suggest that consumer education by itself has limits as a tool for deterring predatory lending. First, mortgage loans are complex financial transactions, and many different factors—including the interest rate, fees, provisions of the loan, and situation of the borrower—determine whether a loan is in a borrower’s best interest. Even an excellent campaign of consumer education is unlikely to provide less sophisticated consumers with enough information for them to determine whether a loan contains abusive terms. Second, predatory lenders and brokers tend to use aggressive marketing tactics that are designed to confuse consumers. Broad-based campaigns to make consumers aware of predatory lending may not be sufficient to prevent many consumers—particularly those who may be uneducated or unsophisticated in financial matters—from succumbing to such tactics. Finally, the consumers who are often the targets of predatory lenders are also some of the hardest to reach with educational information.

Prepurchase mortgage counseling—which can offer a “third party” review of a prospective mortgage loan—may help borrowers avoid predatory loans, in part by alerting consumers to predatory loan terms and practices. HUD supports a network of approximately 1,700 approved counseling agencies across the country and in some cases provides funding for their activities. While beneficial, the role of mortgage counseling in preventing predatory lending is likely to be limited. Borrowers do not always attend such counseling, and when they do, counselors may not have access to all of the loan documents needed to review the full final terms and provisions before closing. In addition, counseling may be ineffective against lenders and brokers engaging in fraudulent practices, such as falsifying applications or loan documents, that cannot be detected during a prepurchase review of mortgage loan documents.

Finally, disclosures made during the mortgage loan process, while important, may be of limited usefulness in reducing the incidence of predatory lending practices.
Certain federal laws, including TILA and RESPA, have requirements covering the content, form, and timing of the information that must be disclosed to borrowers. However, industry and consumer advocacy groups have publicly expressed dissatisfaction with the current disclosure system. In July 2002, HUD issued proposed rules intended to streamline the disclosure process and make disclosures more understandable and timely, and debate over the proposed rules has been contentious. Although improving loan disclosures would undoubtedly have benefits, once again the inherent complexity of loan transactions may limit any impact on the incidence of predatory lending practices. Moreover, even a relatively clear and transparent system of disclosures may be of limited use to borrowers who lack sophistication about financial matters, are not highly educated, or suffer physical or mental infirmities. Finally, as with mortgage counseling, revised disclosures would not necessarily help protect consumers against lenders and brokers who engage in outright fraud or who mislead borrowers about the terms of loans in the disclosure documents themselves.

The existence of a secondary market for subprime loans has benefited consumers by increasing the sources of funds available to subprime lenders, potentially lowering interest rates and origination costs for subprime loans. However, the secondary market may also inadvertently facilitate predatory lending by providing a source of funds for unscrupulous originators, allowing them to quickly sell off loans with predatory terms. Further, the existence of a secondary market may reduce the incentive for originating lenders—who generally make their profits from high origination fees—to ensure that borrowers can repay.

Purchasers of mortgage loans undertake a process of due diligence designed to avoid legal, financial, and reputational risk. However, the degree of due diligence purchasers undertake varies. Officials of Fannie Mae and Freddie Mac—which are estimated to account for a relatively small portion of the secondary market for subprime loans—told us that their organizations undertake a series of measures aimed at avoiding the purchase of loans with abusive characteristics that may have harmed borrowers. In contrast, according to some market participants, the due diligence of other secondary market purchasers of residential mortgages may be more narrowly focused on the creditworthiness of the loans and on their compliance with federal, state, and local laws. However, even the most stringent efforts cannot uncover some predatory loans. For example, due diligence may be unable to uncover fraud that occurred during the loan underwriting or approval process, some excessive or unwarranted fees, or loan flipping.

Under some state and local legislation, purchasers of mortgages or mortgage-backed securities on the secondary market may be held liable for violations committed by the originating lenders—referred to as “assignee liability” provisions. Assignee liability is intended to discourage secondary market participants from purchasing loans that may have predatory features and to provide an additional source of redress for victims of abusive lenders, but some argue that it can also discourage legitimate lending activity.
Secondary market purchasers that are unwilling to assume the potential risks associated with assignee liability provisions have stopped purchasing, or announced their intention to stop purchasing, mortgages originated in areas covered by such provisions. Assignee liability provisions of the Georgia Fair Lending Act were blamed for causing several participants in the mortgage lending industry to withdraw from the market, and the provisions were subsequently repealed.

Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time.

For further information on this testimony, please contact David G. Wood at (202) 512-8678, or Harry Medina at (415) 904-2000. Individuals making key contributions to this testimony included Jason Bromberg, Randall C. Fasnacht, Jr., Elizabeth Olivarez, and Paul Thompson.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
While there is no universally accepted definition, the term “predatory lending” is used to characterize a range of practices, including deception, fraud, or manipulation, that a mortgage broker or lender may use to make a loan with terms that are disadvantageous to the borrower. Predatory lending has increasingly garnered the attention of policymakers, consumer advocates, and participants in the mortgage industry. This statement is based on GAO’s report, released at today’s hearing, and discusses federal and state efforts to combat predatory lending; factors that may make elderly consumers more susceptible to predatory lending; the roles of consumer education, mortgage counseling, and loan disclosures in preventing predatory lending; and how the secondary mortgage market can affect predatory lending.

Federal agencies have taken a number of enforcement actions, sometimes jointly, using various federal consumer protection laws to combat predatory lending. The Federal Trade Commission (FTC) has played the most prominent enforcement role, filing 19 complaints and reaching multimillion dollar settlements. The Departments of Justice and Housing and Urban Development have also taken various predatory lending-related enforcement actions. Federal banking regulators report little evidence of predatory lending by the institutions they supervise. However, concerns exist about nonbank mortgage lending companies owned by financial or bank holding companies. While FTC is the primary federal enforcer of consumer protection laws for these entities, it is a law enforcement agency that conducts targeted investigations. In contrast, the Federal Reserve Board is well equipped to routinely monitor and examine these entities and, thus, potentially deter predatory lending activities, but its authority in this regard is less clear.

As of January 2004, 25 states, as well as several localities, had passed laws to address predatory lending, often by restricting the terms or provisions of certain high-cost loans; however, federal banking regulators have preempted some state laws for the institutions they supervise. Also, some states have strengthened their regulation and licensing of mortgage lenders and brokers.

While there are no comprehensive data, federal, state, and consumer advocacy officials report that elderly people have disproportionately been victims of predatory lending. According to these officials and relevant studies, predatory lenders target older consumers in part because they are more likely to have substantial home equity or may live on limited incomes that make them more susceptible to offers for quick access to cash. Older consumers may also have cognitive or physical impairments such as poor eyesight, hearing, or mobility that limit their ability to access competitive sources of credit.

GAO’s review of literature and interviews with consumer and federal officials suggest that consumer education, mortgage counseling, and loan disclosures are useful but may be of limited effectiveness in reducing predatory lending. A variety of factors limit their effectiveness, including the complexity of mortgage transactions, difficulties in reaching target audiences, and counselors’ inability to review loan documents.

The secondary market—where mortgage loans and mortgage-backed securities are bought and sold—benefits borrowers by expanding credit, but may facilitate predatory lending by allowing unscrupulous lenders to quickly sell off loans with predatory terms.
In part to avoid certain risks, secondary market participants perform varying degrees of "due diligence" to screen out loans with predatory terms, but may be unable to identify all such loans.
Over the past 8 years, the Department of Defense (DOD) has designated over 34,000 servicemembers involved in Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF) as wounded in action. The severity of injuries can result in a lengthy process for a patient to either return to duty or to transition to veteran status. The most seriously injured servicemembers from these conflicts usually receive care at Walter Reed Army Medical Center or the National Naval Medical Center. According to DOD officials, once they are stabilized and discharged from the hospital, servicemembers may relocate closer to their homes or military bases and be treated as outpatients by the closest military or Department of Veterans Affairs (VA) facility.

Recovering servicemembers potentially navigate two different disability evaluation systems that serve different purposes. DOD’s system serves a personnel management purpose by identifying servicemembers who are no longer medically fit for duty. If a servicemember is found unfit because of medical conditions incurred in the line of duty, the servicemember is assigned a disability rating and can be discharged from duty. This disability rating, along with years of service and other factors, determines subsequent disability and health care benefits from DOD. Under VA’s system, disability ratings help determine the level of disability compensation a veteran receives and priority status for enrollment for health care benefits. To determine eligibility for disability compensation, VA evaluates all claimed medical conditions, whether or not they were evaluated previously by the military service’s evaluation process. If VA finds that a veteran has one or more service-connected disabilities that together result in a final rating of at least 10 percent, VA will pay monthly compensation, and the veteran will be eligible to receive a higher priority status for health care benefits enrollment.

Efforts have been made to address the deficiencies reported at Walter Reed related to the care provided and the transition of recovering servicemembers. After the press reports about Walter Reed, several high-level review groups were established to study the care and benefits provided to recovering servicemembers by DOD and VA. In addition, two previously established review groups were already examining related issues. The studies produced by all of these groups, released from April 2007 through June 2008, contained over 400 recommendations covering a broad range of topics, including case management, disability evaluation systems, data sharing between the departments, and the need to better understand and diagnose traumatic brain injury (TBI) and post-traumatic stress disorder (PTSD).

In May 2007, DOD and VA established the Wounded, Ill, and Injured Senior Oversight Committee (SOC) as a temporary, 1-year committee with the responsibility for addressing recommendations from these reports. To conduct its work, the SOC established eight work groups called lines of action (LOA). Each LOA is co-chaired by representatives from DOD and VA and has representation from each military service. LOAs are responsible for specific issues, such as disability evaluation systems and case management. (See table 1 for an overview of the LOAs.) The committee was originally intended to expire in May 2008, but it was extended to January 2009. Then, the National Defense Authorization Act for Fiscal Year 2009 (NDAA 2009) extended the SOC through December 2009. In addition to addressing the published recommendations, the SOC assumed responsibility for addressing the policy development and reporting requirements contained in the National Defense Authorization Act for Fiscal Year 2008 (NDAA 2008).
Section 1611(a) of the NDAA 2008 directs DOD and VA, to the extent feasible, to develop and implement a comprehensive policy covering four areas: (1) care and management, (2) medical evaluation and disability evaluation, (3) the return of servicemembers to active duty, and (4) the transition of recovering servicemembers from DOD to VA. The specific requirements for each of these four areas are further enumerated in sections 1611 through 1614 of the law and include the development of multiple policies. Table 2 summarizes the requirements for the jointly developed policies.

Since its inception, the SOC has completed many initiatives, such as establishing the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury and creating the National Resource Directory, an online public resource for recovering servicemembers, veterans, and their families. In addition, the SOC supported the development of several programs to improve the care, management, and transition of recovering servicemembers, including the disability evaluation system pilot and the Federal Recovery Coordination Program. These programs are currently in pilot or beginning phases.

Disability evaluation system pilot: DOD and VA are piloting a joint disability evaluation system to improve the timeliness and resource use of their separate disability evaluation systems. Key features of the pilot include a single physical examination conducted to VA standards to be used by a medical evaluation board to document medical conditions that may limit a servicemember’s ability to serve in the military, a single-source disability rating prepared by VA for use by both DOD and VA in determining disability benefits, and additional outreach and nonclinical case management provided by VA staff at the DOD pilot locations to explain VA results and processes to servicemembers. DOD and VA anticipate a final report on the pilot in August 2009.

Federal Recovery Coordination Program: In 2007, DOD and VA established the Federal Recovery Coordination Program in response to the report by the President’s Commission on Care for America’s Returning Wounded Warriors, commonly referred to as the Dole-Shalala Commission. The commission’s report highlighted the need for better coordination of care and additional support for families. The Federal Recovery Coordination Program serves the most severely injured or ill servicemembers. These servicemembers are highly unlikely to be able to return to duty and may have to adjust to permanent disabling conditions. The program was created to provide uniform and seamless care, management, and transition of recovering servicemembers and their families by assigning recovering servicemembers to coordinators who manage the development and implementation of a recovery plan. Each servicemember enrolled in the Federal Recovery Coordination Program has a Federal Individual Recovery Plan, which tracks care, management, and transition through recovery, rehabilitation, and reintegration. Although the Federal Recovery Coordination Program is operated as a joint DOD and VA program, VA is responsible for the administrative duties, and program personnel are VA employees.

Beyond these specific initiatives, the SOC took responsibility for issues related to electronic health records through the work of LOA 4, the SOC’s work group focused on DOD and VA data sharing.
This LOA also addressed issues more generally focused on joint DOD and VA data needs, including overseeing the development of components for the disability evaluation system pilot and the individual recovery plans for the Federal Recovery Coordination Program. LOA 4’s progress on these issues was monitored and overseen by the SOC. The NDAA 2008 established an interagency program office (IPO) to serve as a single point of accountability for both departments in the development and implementation of interoperable electronic health records. Subsequently, management oversight of many of LOA 4’s responsibilities was transferred to the IPO. Also, the IPO’s scope of responsibility was broadened to include personnel and benefits data sharing between DOD and VA.

As of April 2009, DOD and VA have completed 60 of the 76 requirements we identified for jointly developing policies for recovering servicemembers on (1) care and management, (2) medical and disability evaluation, (3) return to active duty, and (4) servicemember transition from DOD to VA. The two departments have completed all requirements for developing policy for two of the policy areas—medical and disability evaluation and return to active duty. Of the 16 requirements that are in progress, 10 are related to care and management and 6 are related to servicemembers transitioning from DOD to VA. (See table 3.)

We found that more than two-thirds of the requirements for DOD’s and VA’s joint policy development to improve the care and management of recovering servicemembers have been completed, while the remaining requirements are in progress. (See table 4.) We identified 38 requirements for this policy area and grouped them into five categories. Although 28 of the 38 requirements had been completed, one category—improving access to medical and other health care services—had most of its requirements in progress.

Most of the completed requirements were addressed in DOD’s January 2009 Directive-Type Memorandum (DTM), which was developed in consultation with VA. This DTM, entitled Recovery Coordination Program: Improvements to the Care, Management, and Transition of Recovering Service Members, establishes interim policy for the improvements to the care, management, and transition of recovering servicemembers in response to sections 1611 and 1614 of the NDAA 2008.

In consultation with VA, DOD created the Recovery Coordination Program in response to the NDAA 2008 requirements. This program, which was launched in November 2008, extended the same comprehensive coordination and transition support provided under the Federal Recovery Coordination Program to servicemembers who are less severely injured or ill yet are unlikely to return to active duty in less than 180 days. This program follows the same structured process as the Federal Recovery Coordination Program. However, DOD oversees this program, and the coordinators are DOD employees. DOD’s January 2009 DTM includes information on the scope and program elements of the Recovery Coordination Program as well as on the roles and responsibilities of the recovery care coordinators, federal recovery coordinators, and medical care case managers and non-medical care managers.

According to DOD officials, DOD took the lead in developing policy to address the requirements for care and management because it interpreted most of the requirements to refer to active duty servicemembers.
According to DOD and VA officials, the January 2009 DTM serves as the interim policy for care, management, and transition until the completion of DOD’s comprehensive policy instruction, which is estimated to be completed by August 2009. This policy instruction will contain more detailed information on the policies outlined in the DTM. A VA official told us that VA also plans to issue related policy guidance as part of a VA handbook during the fourth quarter of 2009. The VA official noted that the final form of the policy document would correspond with DOD’s instruction.

DOD and VA have completed all of the requirements for developing policy to improve the medical and physical disability evaluation of recovering servicemembers. (See table 5.) We identified 18 requirements for this policy area and grouped them into three categories: (1) policy for improved medical evaluations, (2) policy for improved physical disability evaluations, and (3) reporting on the feasibility and advisability of consolidating DOD and VA disability evaluation systems.

DOD issued a series of memoranda that addressed the first two categories starting in May 2007. These memoranda, some of which were developed in collaboration with VA, contained policies and implementing guidance to improve DOD’s existing disability evaluation system. To address the third category in this policy area, DOD and VA have issued a report to Congress that describes the organizing framework for consolidating the two departments’ disability evaluation systems and states that the departments are hopeful that consolidation would be feasible and advisable even though the evaluation of this approach through the disability evaluation system pilot is still ongoing. According to a DOD official, further assessment of the feasibility and advisability of consolidation will be conducted. DOD and VA anticipate issuing a final report on the pilot in August 2009. However, as we reported in September 2008, it was unclear what specific criteria DOD and VA will use to evaluate the success of the pilot and when sufficient data will be available to complete such an evaluation.

DOD has completed the requirement for establishing standards for determining the return of recovering servicemembers to active duty. (See table 6.) On March 13, 2008, DOD issued a DTM amending its existing policy on retirement or separation due to a physical disability. The revised policy states that the disability evaluation system will be the mechanism for determining both retirement or separation and return to active duty because of a physical disability. An additional revision to the existing DOD policy allows DOD to consider requests for permanent limited active duty or reserve status for servicemembers who have been determined to be unfit because of a physical disability. Previously, DOD could consider such cases only as exceptions to the general policy.

According to a DOD official, it is too early to tell whether the revisions will have an effect on retirement rates or return-to-duty rates. DOD annually assesses the disability evaluation system and tracks retirement and return-to-duty rates. However, because of the length of time a servicemember takes to move through the disability evaluation system—sometimes over a year—it will take time before changes resulting from the policy revisions are reflected in the annual assessment of the disability evaluation system.
DOD and VA have completed more than two-thirds of the requirements for developing procedures, processes, or standards for improving the transition of recovering servicemembers. (See table 7.) We identified 19 requirements for this policy area, and we grouped them into five categories. We found that 13 of the 19 policy requirements have been completed, including all of the requirements for two of the categories—the development of a process for a joint separation and evaluation physical examination and the development of procedures for surveys and other mechanisms to measure patient and family satisfaction with services for recovering servicemembers. The remaining three categories contain requirements that are still in progress.

Most of the requirements for improving the transition from DOD to VA were addressed in DOD’s January 2009 DTM—Recovery Coordination Program: Improvements to the Care, Management, and Transition of Recovering Service Members—which establishes interim policy for the care, management, and transition of recovering servicemembers through the Recovery Coordination Program. However, we found that DOD’s DTM includes limited detail related to the procedures, processes, and standards for transition of recovering servicemembers. As a result, we could not always directly link the interim policy in the DTM to the specific requirements contained in section 1614 of the NDAA 2008. DOD and VA officials noted that they will be further developing the procedures, processes, and standards for the transition of recovering servicemembers in a subsequent comprehensive policy instruction, which is estimated to be completed by June 2009. A VA official reported that VA plans to separately issue policy guidance addressing the requirements for transitioning servicemembers from DOD to VA in the fourth quarter of 2009.

DOD and VA officials told us that they experienced numerous challenges as they worked to jointly develop policies to improve the care, management, and transition of recovering servicemembers. According to officials, these challenges contributed to the length of time required to issue policy guidance, and in some cases the challenges have not yet been completely resolved. In addition, recent changes to the SOC staff, including DOD’s organizational changes for staff supporting the SOC, could pose challenges to the development of policy affecting recovering servicemembers.

DOD and VA officials encountered numerous challenges during the course of jointly developing policies to improve the care, management, and transition of recovering servicemembers, as required by sections 1611 through 1614 of the NDAA 2008, in addition to responding to other requirements of the law. Many of these challenges have been addressed, but some have yet to be completely resolved. DOD and VA officials cited the following examples of issues for which policy development was particularly challenging.

Increased support for family caregivers. The NDAA 2008 includes a number of provisions to strengthen support for families of recovering servicemembers, including those who become caregivers. However, DOD and VA officials on a SOC work group stated that before they could develop policy to increase support for such families, they had to obtain concrete evidence of their needs.
Officials explained that while they did have anecdotal information about the impact on families who provide care to recovering servicemembers, they lacked the systematic data needed for sound policy decisions—such as the frequency of job loss and the economic value of family-provided medical services. A work group official told us that their proposals for increasing support to family caregivers were rejected twice by the SOC, due in part to the lack of systematic data on what would be needed. The work group then contracted with researchers to obtain substantiating evidence, a study that required 18 months to complete. In January 2009, the SOC approved the work group’s third proposal. A provision for caregiver benefits based on the SOC’s proposal was included in the NDAA 2010 bill that was introduced in May 2009.

Establishing standard definitions for operational terms. One of the important tasks facing the SOC was the need to standardize key terminology relevant to policy issues affecting recovering servicemembers. DOD took the lead in working with its military services and VA officials to identify and define key terms. DOD and VA officials told us that many of the key terms found in existing DOD and VA policy, the reports from the review groups, and the NDAA 2008, as well as those used by the different military services, were not uniformly defined. Consequently, standardized definitions were important to promote agreement on issues such as identifying the recovering servicemembers who are subject to NDAA 2008 requirements, identifying categories of servicemembers who would receive services from the different classes of case managers or be eligible for certain benefits, managing aspects of the disability evaluation process, and establishing criteria to guide research.

In some cases, standardized definitions were critical to policy development. The importance of agreement on key terms is illustrated by an issue encountered by the SOC’s work group responsible for family support policy. In this case, before policy could be developed for furnishing additional support to family members who provide medical care to recovering servicemembers, the definition of “family” had to be agreed upon. DOD and VA officials said that they considered two options: to define the term narrowly to include a servicemember’s spouse, parents, and children, or to use broader definitions that included distant relatives and unrelated individuals with a connection to the servicemember. These two definitions would result in significantly different numbers of family members eligible to receive additional support services. DOD and VA officials decided to use a broader definition to determine who would be eligible for support.

Of the 41 key definitions identified for reconciliation, DOD and VA had concurred on 33 as of April 2009, and these 33 standardized definitions are now being used. Disagreement remains over the remaining definitions, including the definition of “mental health.” A DOD official stated that given the uncertainty associated with the organizational and procedural changes recently introduced to the SOC (which are discussed below), obtaining concurrence on the remaining definitions has been given lower priority.

Improving TBI and PTSD screening and treatment. Requirements related to screening and treatment for TBI and PTSD were embedded in several sections of the NDAA 2008, including section 1611, and were also discussed extensively in a task force report on mental health.
DOD and VA officials told us that policy development for these issues was difficult. For example, during development of improved TBI and PTSD treatment policy, policymakers often lacked the scientific information needed to help achieve consensus on policy decisions. Also, members of the SOC work group told us that they disagreed on appropriate models for screening and treatment and struggled to reorient the military services to patient-focused treatment. A senior DOD official stated that the adoption of patient-focused models is particularly difficult for the military services because, historically, the needs of the military have been given precedence over the needs of individual servicemembers. To address these challenges, the SOC oversaw the creation of the Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury—a partnership between DOD and VA. While policies continue to be developed on these issues, TBI and PTSD policy remains a challenge for DOD and VA. However, DOD officials told us that the centers of excellence have made progress in reducing knowledge gaps in psychological health and TBI treatment, identifying best practices, and establishing clinical standards of care.

Release of psychological health treatment records to DOD by VA health care providers who treat members of the National Guard and Reserves. Section 1614 of the NDAA 2008 requires the departments to improve medical and support services provided to members of the National Guard and Reserves. In pursuing these objectives, VA faced challenges related to the release of medical information to DOD on reservists and National Guard servicemembers who have received treatment for PTSD or other mental health conditions from VA. DOD requests medical information from VA to help make command decisions about the reactivation of servicemembers, but VA practitioners face an ethical dilemma if disclosing treatment information could compromise servicemembers’ medical conditions, particularly for those at risk of suicide. The challenge of sharing and protecting sensitive medical information on servicemembers who obtain treatment at VA was reviewed by the Blue Ribbon Work Group on Suicide Prevention, convened in 2008 at the behest of the Secretary of Veterans Affairs. DOD and VA are continuing their efforts to address the privacy rights of patients who receive medical services from VA while serving in the military and to protect the confidential records of VA patients who may also be treated by the military’s health care system. The need to resolve this challenge assumes even greater importance in light of DOD’s and VA’s increasing capability to exchange medical records electronically, which will expand DOD’s ability to access records of servicemembers who have received medical treatment from VA.

The SOC has experienced turnover in leadership, reconfiguration of its organizational structure at DOD, and changes affecting policy development responsibilities. These changes could pose future challenges to DOD’s and VA’s efforts to develop joint policy. The SOC’s leadership changes stem from the change in presidential administrations as well as turnover in some of its key staff. For example, the outgoing deputy secretaries of DOD and VA, who previously chaired the SOC, left their positions in January 2009 with the change in administration, and new deputy secretaries were not confirmed until February and April 2009.
In their absence, the Secretaries of VA and DOD co-chaired a SOC meeting as a short-term measure. DOD also introduced other staffing changes to replace personnel who had been temporarily detailed to the SOC and needed to return to their primary duties. DOD had relied on temporarily assigned staff to meet SOC staffing needs because the SOC was originally envisioned as a short-term effort.

In a December 2008 memorandum, DOD outlined the realignment of its SOC staff. This included the transition of responsibilities from detailed, temporary SOC staff and executives to permanent staff in existing DOD offices that managed similar issues. For example, the functions of LOA 7 (Legislation and Public Affairs) will now be overseen by the Assistant Secretary of Defense for Legislative Affairs, the Assistant Secretary of Defense for Public Affairs, and the DOD General Counsel.

DOD also established two new organizational structures—the Office of Transition Policy and Care Coordination and an Executive Secretariat office. The Office of Transition Policy and Care Coordination oversees transition support for all servicemembers and serves as the permanent entity for issues being addressed by LOA 1 (Disability Evaluation System), LOA 3 (Case Management), and LOA 8 (Personnel, Pay, and Financial Support). The Executive Secretariat office is responsible for performance planning, performance management, and SOC support functions. According to DOD officials, the new offices were created to establish permanent organizations that address a specific set of issues and to enhance accountability for policy development and implementation, as these offices report directly to the Office of the Under Secretary of Defense for Personnel and Readiness. Currently, many of the positions in these new offices, including the director positions, are staffed by officials in an acting capacity or are unfilled.

DOD’s changes to the SOC are important because of the potential effects these changes could have on the development of policy for recovering servicemembers. However, officials in both DOD and VA have mixed reactions about the consequences of these changes. Some DOD officials consider the organizational changes to the SOC to be positive developments that will enhance the SOC’s effectiveness. They point out that the SOC’s temporary staffing situation needed to be addressed and that the two new offices were created to support the SOC and provide focus on the implementation of key policy initiatives developed by the SOC—primarily the disability evaluation system pilot and the new case management programs. In contrast, others are concerned by DOD’s changes, stating that the new organizations disrupt the unity of command that once characterized the SOC’s management because personnel within the SOC organization now report to three different officials within DOD and VA. However, it is too soon to determine how well DOD’s new structure will work in conjunction with the SOC. DOD and VA officials we spoke with told us that the SOC’s work groups continue to carry out their roles and responsibilities.

Finally, according to DOD and VA officials, the scope of responsibilities of both the SOC and the DOD and VA Joint Executive Council appears to be in flux and may evolve further. According to these officials, changes to the oversight responsibilities of the SOC and the Joint Executive Council are causing confusion.
While the SOC will remain responsible for policy matters directly related to recovering servicemembers, a number of policy issues may now be directed to the Joint Executive Council, including issues that the SOC had previously addressed. For example, management oversight of many of LOA 4’s responsibilities (DOD and VA data sharing) has transitioned from the SOC to the IPO, which reports primarily to the Joint Executive Council. It is not clear how the IPO will ensure effective coordination with the SOC’s LOAs in overseeing the development of information technology applications for the disability evaluation system pilot and the individual recovery plans for the Federal Recovery Coordination Program. Given that information technology support for two key SOC initiatives is identified in the joint DOD/VA Information Interoperability Plan, a failure by the IPO and the SOC to coordinate effectively with one another could negatively affect the development of improved policies for recovering servicemembers.

We provided a draft of this report to DOD and VA for comment. VA provided technical comments, which we incorporated as appropriate. DOD and VA did not provide other comments.

We are sending copies of this report to the Secretaries of the Departments of Defense and Veterans Affairs, congressional committees, and other interested parties. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

To summarize the status of the efforts of the Departments of Defense (DOD) and Veterans Affairs (VA) to jointly develop policies for each of the four policy areas outlined in sections 1611 through 1614 of the National Defense Authorization Act for Fiscal Year 2008 (NDAA 2008), we identified 76 requirements in these sections and grouped related requirements into 14 logical categories. Tables 8 through 11 enumerate the requirements in each of GAO’s categories and provide the status of DOD’s and VA’s efforts to develop policy related to each requirement, as of April 2009.

In addition to the contact named above, Bonnie Anderson, Assistant Director; Susannah Bloch; Catina Bradley; April Brantley; Frederick Caison; Lisa Motley; and Elise Pressma made major contributions to this report.
The National Defense Authorization Act for Fiscal Year 2008 (NDAA 2008) requires the Departments of Defense (DOD) and Veterans Affairs (VA) to jointly develop and implement comprehensive policies on the care, management, and transition of recovering servicemembers. The Wounded, Ill, and Injured Senior Oversight Committee (SOC)--jointly chaired by DOD and VA leadership--has assumed responsibility for these policies. The NDAA 2008 also requires GAO to report on the progress DOD and VA make in jointly developing and implementing the policies. This report focuses on the joint development of the policies. Implementation of the policies will be addressed in future reports. Specifically, this report provides information on (1) the progress DOD and VA have made in jointly developing the comprehensive policies required by the NDAA 2008 and (2) the challenges DOD and VA are encountering in the joint development of these policies. GAO determined the current status of policy development by assessing the status reported by SOC officials and analyzing supporting documentation. To identify challenges, GAO interviewed the Acting Under Secretary of Defense for Personnel and Readiness, the Executive Director and Chief of Staff of the SOC, the departmental co-leads for most of the SOC work groups, the Acting Director of DOD's Office of Transition Policy and Care Coordination, and other knowledgeable officials. DOD and VA have made substantial progress in jointly developing policies required by sections 1611 through 1614 of the NDAA 2008 in the areas of (1) care and management, (2) medical and disability evaluation, (3) return to active duty, and (4) transition of care and services received from DOD to VA. Overall, GAO's analysis showed that as of April 2009, 60 of the 76 policy requirements GAO identified have been completed and the remaining 16 policy requirements are in progress. DOD and VA have completed all of the policy development requirements for medical and physical disability evaluations, including issuing a report on the feasibility and advisability of consolidating the DOD and VA disability evaluation systems, although the pilot for this approach is still ongoing. DOD has also completed establishing standards for returning recovering servicemembers to active duty. More than two-thirds of the policy development requirements have been completed for the remaining two policy areas--care and management and the transition of recovering servicemembers from DOD to VA. Most of these requirements were addressed in a January 2009 DOD memorandum that was developed in consultation with VA. DOD officials reported that more information will be provided in a subsequent policy instruction, which is to be issued in August 2009. VA also plans to issue related policy guidance in the fourth quarter of 2009. DOD and VA officials told GAO that they have experienced numerous challenges as they worked to jointly develop policies to improve the care, management, and transition of recovering servicemembers. According to officials, these challenges contributed to the length of time required to issue policy guidance, and in some cases the challenges have not yet been completely resolved. For example, the SOC must still standardize key terminology relevant to policy issues affecting recovering servicemembers. DOD and VA agreement on key definitions for what constitutes "mental health," for instance, is important for developing policies that define the scope, eligibility, and service levels for recovering servicemembers. 
Recent changes affecting the SOC may also pose future challenges to policy development. Some officials have expressed concern that DOD's recent changes to staff supporting the SOC have disrupted the unity of command because SOC staff now report to three different officials within DOD and VA. However, it is too soon to determine how well DOD's staffing changes will work. Additionally, according to DOD and VA officials, the SOC's scope of responsibilities appears to be in flux. While the SOC will remain responsible for policy matters for recovering servicemembers, a number of policy issues may now be directed to the DOD and VA Joint Executive Council. Despite this uncertainty, DOD and VA officials told GAO that the SOC's work groups continue to carry out their roles and responsibilities. GAO provided a draft of this report to DOD and VA for comment. VA provided technical comments, which GAO incorporated as appropriate. DOD and VA did not provide other comments.
VA manages a vast medical care network for veterans, providing health care services to about 5 million beneficiaries. The estimated cost of these services in fiscal year 2004 was $29 billion. According to VA, its health care system now includes 157 medical centers, 862 ambulatory care and community-based outpatient clinics (CBOC), and 134 nursing homes. VA health care facilities provide a broad spectrum of medical, surgical, and rehabilitative care. The management of VA's facilities is decentralized to 21 regional networks referred to as Veterans Integrated Service Networks (networks). The Charleston facility is part of Network 7, or the Southeast Network.

The Charleston medical facility has served the medical needs of Charleston area veterans since it opened in 1966. It is a primary, secondary, and tertiary care facility. (See fig. 1.) The facility consists of more than 352,000 square feet with 117 medical and surgical beds and 28 nursing home care unit beds; according to VA officials, the average daily occupancy rate is about 80 percent. The outpatient workload was about 460,000 clinic visits in fiscal year 2004. VA employs about 1,100 staff at the Charleston facility, which has an annual operating budget of approximately $160 million.

VA's Charleston medical facility is affiliated with MUSC. MUSC is the main source of the Charleston facility's medical residents, who rotate through all major VA clinical service areas. VA also purchases approximately $13 million in medical care services from MUSC, including gastroenterology, infectious disease, internal medicine, neurosurgery, anesthesia, pulmonary, cardiovascular perfusion, and radiology services. In addition, VA has a medical research partnership with MUSC for a mutually supported biomedical research facility, the Thurmond Biomedical Research Center.

MUSC operates a 709-licensed-bed acute care hospital in Charleston that also provides primary, secondary, and tertiary services. The services available through MUSC span the continuum of care, with physician specialists and subspecialists in medicine, surgery, neurology, neurological surgery, psychiatry, radiology, and emergency medicine, among other specialties. During the 12-month period ending on June 30, 2003, MUSC admitted 28,591 patients (including newborns), representing an occupancy rate of approximately 78 percent of available beds. Outpatient activity for the same period included 6,802 same-day surgeries, 551,914 outpatient visits, and 35,375 emergency visits. MUSC's net patient service revenue for the fiscal year ending on June 30, 2003, was about $559 million.

VA and the CARES Commission concluded that the Charleston facility is in overall good condition and, with relatively minor renovations, can continue to meet veterans' health care needs in the future. VA conducts facility condition assessments (FCA) at its facilities every 3 years on a rotating basis. FCAs evaluate the condition of a VA facility's essential functions—electrical and energy systems, accessibility, sanitation and water—and estimate the useful and remaining life of those systems. The Charleston facility's most recent FCA was conducted in 2003, and this assessment showed that the facility currently is in overall good condition. According to VA officials, the facility's current condition is a result of targeted capital investments. In particular, VA invested about $11.6 million in nonrecurring maintenance projects over the last 5 years.
Such projects include installing a new fire alarm system, replacing roofing, painting the exterior of the building, and upgrading interior lighting.

The CARES Commission did not recommend replacing VA's facility in Charleston as it did with facilities in some other locations. In assessing the capital asset requirements for the Charleston facility, the Commission relied on the 2003 FCA and projections of inpatient and outpatient service demands through 2022, among other things. These projections indicate that demand for inpatient beds at VA's facility in Charleston will increase by 29 percent from 2001 to 2022, while demand for outpatient services will increase by 69 percent during the same period. Although the CARES Commission did not recommend a new facility in Charleston, it did call for renovating the nursing home units and the inpatient wards. In his response to the Commission's recommendations, the Secretary agreed to make the necessary renovations at the Charleston facility.

VA officials at the Charleston medical facility have a number of ongoing and planned capital maintenance and improvement projects to address the CARES Commission recommendations and to maintain the condition of the current medical center. For example, two minor capital improvements—totaling $6.25 million—are currently under construction. These projects include a third-floor clinical addition, which will add 20,000 square feet of space to the medical center for supply processing and distribution, rehabilitation medicine, and prosthetics; and the patient privacy project, which will renovate the surgical inpatient ward to provide private and semiprivate bathrooms for veterans. Planned capital maintenance and improvement projects over the next 10 years include electrical upgrades, renovation of several wards to address patient privacy concerns, renovation of operating rooms and the intensive care units, and the expansion of the specialty care clinics. VA officials estimate that the total cost for all planned capital maintenance and improvement projects is approximately $62 million.

In addition to the capital improvement projects at the medical center in Charleston, VA is currently constructing a CBOC, in partnership with the Navy, at the Naval Weapons Station in Goose Creek, South Carolina. The new clinic will be a joint VA-Navy facility and will help VA address the projected increase in demand for outpatient services. The new clinic—called the Goose Creek CBOC—is scheduled to open in 2008 and will serve a projected 8,000 patients who are currently served by VA's Charleston facility. VA estimates its investment in the planning, design, and construction of the Goose Creek CBOC will be about $6 million.

VA and MUSC have collaborated and communicated to a limited extent on a proposal for a joint venture medical center over the past 3 years. As a result of the limited collaboration, negotiations over the proposal stalled. In August 2005, however, initial steps were taken to move the negotiations forward. Specifically, four workgroups were created—which include both VA and MUSC officials—and tasked with examining critical issues related to the proposal.

To meet the needs of a growing and aging patient population, MUSC has undertaken an ambitious five-phase construction project to replace its aging medical campus. Construction on the first phase began in October 2004.
Phase I includes the development of a four-story diagnostic and treatment building and a seven-story patient hospitality tower, providing an additional 641,000 square feet in clinical and support space—156 beds for cardiovascular and digestive disease services, 9 operating rooms, outpatient clinics with a capacity of 100,000 visits, and laboratory and other ancillary support services. Phase I also includes the construction of an atrium connecting the two buildings, a parking structure, and a central energy plant. Initial plans for phases II through V include diagnostic and treatment space and patient bed towers. As shown in figure 2, phases IV and V would be built on VA property. In particular, phase V would be built on the site of VA’s existing medical center. MUSC has informed VA about its proposed locations for these facilities. According to MUSC officials, there are approximately 2 years remaining for the planning of phase II. In November 2002, the President of MUSC sent a proposal to the Secretary of VA about partnering with MUSC in the construction and operation of a new medical center in phase II of MUSC’s construction project. Under MUSC’s proposal, VA would vacate its current facility and move to a new facility located on MUSC property to the south of phase I. MUSC also indicated that sharing medical services would be a component of the joint venture—that is, VA and MUSC would enter into sharing agreements to buy, sell, or barter medical and support services. VA and MUSC currently share some services—for example, VA purchases services for gastroenterology, infectious disease, and internal medicine. According to MUSC officials, the joint venture proposal would increase the level of sharing of medical services and equipment, which would create cost savings for both VA and MUSC. VA officials told us that the proposed joint venture between MUSC and VA is unprecedented—that is, should VA participate in the joint venture, it would be the first of its kind between VA and a medical education affiliate. In response to MUSC’s proposal, VA formed an internal workgroup composed of officials primarily from VA’s Southeast Network to evaluate MUSC’s proposal. The workgroup analyzed the feasibility and cost effectiveness of the proposal and issued a report in March 2003, which outlined three other options available to VA: replacing the Charleston facility at its present location, replacing the Charleston facility on land presently occupied by the Naval Hospital in Charleston, or renovating the Charleston facility. The workgroup concluded that it would be more cost effective to renovate the current Charleston facility than to replace it with a new facility. This conclusion was based, in part, on the cost estimates for constructing a new medical center. In April 2003, the Secretary of VA sent a counterproposal to the President of MUSC, which indicated that VA preferred to remain in its current facility. The Secretary indicated, however, that if VA agreed to the joint venture, it would rather place the new facility in phase III—which is north of phase I—to provide better street access for veterans. (See fig. 3 for MUSC’s proposal and VA’s counterproposal.) In addition, the Secretary indicated that MUSC would need to provide a financial incentive for VA to participate in the joint venture. 
Specifically, MUSC would need to make up the difference between the estimated life-cycle costs of renovating the Charleston facility and building a new medical center—which VA estimated to be about $85 million—through negotiations or other means.

The MUSC President responded to VA's counterproposal in an April 2003 letter to the Secretary of VA. In the letter, the MUSC President stated that MUSC was proceeding with phase I of the project and that the joint venture concept could be pursued during later phases of construction. The letter did not specifically address VA's proposal to locate the new facility in phase III, nor the suggestion that MUSC would need to provide some type of financial incentive for VA to participate in the joint venture. To move forward with phase I, the MUSC President stated that MUSC would like to focus on executing an enhanced use lease (EUL) for Doughty Street. Although MUSC owns most of the property that will be used for phases I through III, Doughty Street is owned by VA and serves as an access road to the Charleston facility and parking lots. The planned facility for phase I would encompass Doughty Street. (See fig. 4.) Therefore, MUSC could not proceed with phase I—as originally planned—until MUSC secured the rights to Doughty Street. To help its medical affiliate move forward with construction, VA executed an EUL agreement with MUSC in May 2004 for use of the street. According to the terms of the EUL, MUSC will pay VA $342,000 for initial use of the street and $171,000 for each of the following eight years (a total of about $1.71 million over the nine-year term).

Although both entities successfully collaborated in executing the enhanced use lease for Doughty Street, limited collaboration and communication generally characterized the negotiations between MUSC and VA over the joint venture proposal. In particular, before this summer, VA and MUSC had not exchanged critical information that would help facilitate negotiations. For instance, MUSC did not clearly articulate to VA how replacing the Charleston facility, rather than renovating it, would improve the quality of health care services for veterans or benefit VA. MUSC officials had generally stated that sharing services and equipment would create efficiencies and avoid duplication, which would lead to cost savings. However, MUSC had not provided any analyses to support such claims. Similarly, as required by law, VA studied the feasibility of coordinating its health care services with MUSC, pending construction of MUSC's new medical center. This study was completed in June 2004. However, VA officials did not include MUSC officials in the development of the study, nor did they share a copy of the completed study with MUSC. VA also updated its cost analysis of the potential joint venture this spring, but again, VA did not share the results with MUSC. Because MUSC was not included in the development of these analyses, there was no agreement between VA and MUSC on key inputs for the analyses, such as the specific price MUSC would charge VA for, or the nature of, the medical services that would be provided.

As a result of the limited collaboration and communication, negotiations stalled—prior to August 2005, the last formal correspondence between VA and MUSC leadership on the joint venture was in April 2003. (See fig. 5 for a time line of key events in the negotiations between VA and MUSC.) On August 1, 2005, a congressional delegation visited Charleston to meet with VA and MUSC officials to discuss the joint venture proposal.
After this visit, VA and MUSC agreed to establish workgroups to examine key issues associated with the joint venture proposal. Specifically, VA and MUSC established the Collaborative Opportunities Steering Group (steering group). The steering group is composed of five members from VA, five members from MUSC, and a representative from the Department of Defense (DOD), which is also a stakeholder in the local health care market. The steering group chartered four workgroups, and according to VA:

The governance workgroup will examine ways of establishing organizational authority within a joint venture between VA and MUSC, including shared medical services.

The clinical service integration workgroup will identify medical services provided by VA and MUSC and opportunities to integrate or share these services.

The legal workgroup will review federal and state authorities (or identify the lack thereof) and legal issues relating to a joint venture with shared medical services.

The finance workgroup will provide cost estimates and analyses relating to a joint venture with shared medical services.

The workgroups will help VA and MUSC determine if the joint venture proposal is mutually beneficial. The workgroups are scheduled to provide weekly reports to the steering group and a final report to the steering group by October 28, 2005. The steering group is scheduled to submit a final report by November 30, 2005, to the Deputy Under Secretary for Health for Operations and Management and to the President of MUSC.

The possibility of participating in the joint venture raises a number of issues for VA to consider. The proposed joint venture presents a unique opportunity for VA to reevaluate how it provides health care services to veterans in Charleston. Our ongoing work, as well as our previous work on VA's capital realignment efforts, cost-benefit analysis, organizational transformation, and performance management, however, suggests many issues to consider before making a decision about a joint venture, including governance, legal, and stakeholder issues. Some of these issues will be directly addressed by the workgroups, while others, such as the concerns of stakeholders, will not. In addition, some issues can be addressed through collaboration between VA and MUSC, while others may require VA to seek legislative remedies. Among the issues to explore are the following:

Comparing appropriate options and assessing the costs and benefits of all options: According to Office of Management and Budget (OMB) guidelines on evaluating capital assets, a comparison of options, or alternatives, including the status quo, is critical for ensuring that the best alternative is selected. In its guidance, OMB encourages decision makers to consider the different ways in which various functions, most notably health care service delivery in this case, can be performed. OMB guidelines further state that comparisons of costs and benefits should facilitate selection among competing alternatives. The finance workgroup is examining the potential costs for shared services within a joint facility. However, it is unclear whether the workgroup will weigh the benefits and costs of a new facility against those of other alternatives, including maintaining the existing medical center. VA will also need to weigh the costs and benefits of investing in a joint venture in Charleston against the needs of other VA facilities in the network and across the nation. (A simplified illustration of such a life-cycle cost comparison follows this paragraph.)
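As a purely illustrative companion to the OMB guidance discussed above, the sketch below shows the mechanics of a discounted life-cycle cost comparison across capital alternatives. Every figure in it (the discount rate, planning horizon, and capital and operating costs) is hypothetical; it is not drawn from VA's or the finance workgroup's actual analyses, which were not available to us in this form.

```python
# Hypothetical sketch of a life-cycle cost comparison across capital
# alternatives, in the spirit of the OMB guidelines discussed above.
# All figures are illustrative, not from VA's or MUSC's actual analyses.

def present_value(annual_cost, years, rate):
    """Discount a constant annual cost stream to present value."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

RATE = 0.05      # illustrative real discount rate
HORIZON = 30     # illustrative planning horizon, in years

options = {
    # (up-front capital cost, annual operating cost) -- both hypothetical
    "renovate existing facility": (60e6, 25e6),
    "joint venture replacement":  (150e6, 21e6),
    "status quo (maintain only)": (10e6, 27e6),
}

for name, (capital, operating) in options.items():
    lifecycle = capital + present_value(operating, HORIZON, RATE)
    print(f"{name}: ${lifecycle / 1e6:,.0f} million life-cycle cost")
```

Which option such a comparison favors turns heavily on the assumed discount rate, horizon, and cost inputs, which is one reason the lack of agreement between VA and MUSC on key inputs, noted earlier, matters for the negotiations.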
VA did not include the Charleston facility on its list of highest priority major medical facility construction requirements for fiscal years 2004 through 2010. According to VA, the list of priorities, which includes 48 projects across the nation, aligns with existing CARES recommendations. Nevertheless, exploring the potential costs and benefits of a joint venture gives VA an opportunity to reexamine how it delivers health care services to the nation's veterans and uses its affiliations with medical universities now and in the future. As we have stated in previous reports, given the nation's long-term fiscal challenges and other challenges of the 21st century, such reexaminations of federal programs are warranted. Moreover, as the CARES Commission noted, the potential joint venture between VA and MUSC is a possible framework for future partnerships.

Developing a governance plan that outlines responsibilities and ensures accountability: If VA and MUSC decide to enter into a joint venture for a new facility, they will need a plan for governing the facility. Any governance plan would have to maintain VA's direct authority over and accountability for the care of VA patients. In addition, if shared medical services are a component of a joint venture between MUSC and VA, the entities will need a mechanism to ensure that the interests of the patients served by both are protected today and in the future. For instance, VA may decide to purchase operating room services from MUSC. If the sharing agreement were dissolved at some point in the future, it would be difficult for VA to resume the independent provision of these services. Also, if MUSC physicians were to treat VA beneficiaries, or VA physicians were to treat MUSC patients, each entity would need a clear understanding of how to report health information to its responsible organization. Therefore, a clear plan for governance would help ensure that VA and MUSC could continue to serve their patients' health care needs as well as or better than before.

Identifying legal issues and seeking legislative remedies: The proposed joint venture raises a number of complex legal issues depending on the type of joint venture that is envisioned. Many of the legal issues that will need to be addressed involve real estate, construction, contracting, budgeting, and employment. The following are among the potential issues relating to a joint venture that VA previously identified:

What type of interest will VA have in the facility?

If MUSC is constructing the facility on MUSC property, will VA be entering into a leasehold interest in real property or a sharing agreement for space, and what are the consequences of each?

If the facility is to be located on VA property, will it involve a land transfer to MUSC, or will VA lease the property to MUSC under its authority to enter into an EUL agreement? What are the advantages and disadvantages of these options?

Because MUSC contracting officials do not have the authority to legally bind VA, how would contracting for the services and equipment be handled?

The legal workgroup is currently identifying VA's and MUSC's legal authorities, or lack thereof, on numerous issues relating to entering into a joint venture. Should VA decide to participate in the joint venture, it may need to seek additional authority from the Congress.
Involving stakeholders in the decisionmaking process: Participating in a joint venture medical center, particularly if it includes significant service sharing between VA and MUSC, has significant implications for the medical center's stakeholders, including VA patients, VA employees, and the community. These stakeholders have various perspectives and expectations—some of which are common to the different groups, while others are unique. For example, union representatives and VA officials with whom we spoke indicated that VA patients and employees would likely be concerned about maintaining the quality of patient care at a new facility and access to the current facility during construction. Union representatives also said that employees would be concerned about the potential loss of jobs if VA participated in the joint venture and purchased additional services from MUSC. As VA and MUSC move forward in negotiations, it will be important for all stakeholders' concerns to be addressed.

Developing a system to measure performance and results: If VA and MUSC decide to jointly build and operate a new facility in Charleston, it will become, as noted in the CARES Commission report, a possible framework for future partnerships between VA and other medical universities. As a result, a system for measuring whether the new joint venture facility is achieving the intended results would be useful. In our previous work on managing for results, we have emphasized the importance of establishing meaningful, outcome-oriented performance goals. In this case, potential goals could be operational cost savings and improved health care for veterans. If the goals are not stated in measurable terms, performance measures should be established that translate those goals into concrete, observable conditions. Such measures would enable VA and other stakeholders to determine whether progress is being made toward achieving the goals. This information could not only shed light on the results of a joint venture in Charleston, but could also enable VA to identify criteria for evaluating other possible joint ventures with its medical affiliates in the future. It would also help Congress hold VA accountable for results.

In conclusion, Mr. Chairman, we have stated over the past few years that federal agencies, including VA, need to reexamine the way they do business in order to meet the challenges of the 21st century. To address the future health care needs of veterans, VA's challenge is to explore alternative ways to fulfill its mission of providing veterans with quality health care. The prospect of establishing a joint venture medical center with MUSC presents a good opportunity for VA to study the feasibility of one method—expanding its relationships with university medical school affiliates to include the sharing of medical services in an integrated facility. This is just one of several ways VA could provide care to veterans. Evaluating this option would involve VA officials, working in close collaboration with MUSC officials, weighing the benefits and costs as well as the risks involved in a joint venture against those of other alternatives, including maintaining the current medical center. Determining whether a new facility for Charleston is justified in comparison with the needs of other facilities in the VA system is also important. Until these difficult, but critical, issues are addressed, a fully informed final decision on the joint venture proposal cannot be made.
Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions you or other Members of the Subcommittee may have.

For further information, please contact Mark Goldstein at (202) 512-2834. Individuals making key contributions to this testimony include Nikki Clowers, Daniel Hoy, Jennifer Kim, Edward Laughlin, Donna Leiss, James Musselwhite Jr., Terry Richardson, Susan Michal-Smith, and Michael Tropauer.

VA Health Care: Key Challenges to Aligning Capital Assets and Enhancing Veterans' Care. GAO-05-429. Washington, D.C.: August 5, 2005.
Federal Real Property: Further Actions Needed to Address Long-standing and Complex Problems. GAO-05-848T. Washington, D.C.: June 22, 2005.
VA Health Care: Important Steps Taken to Enhance Veterans' Care by Aligning Inpatient Services with Projected Needs. GAO-05-160. Washington, D.C.: March 2, 2005.
High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
VA Health Care: Access for Chattanooga-Area Veterans Needs Improvements. GAO-04-162. Washington, D.C.: January 30, 2004.
Budget Issues: Agency Implementation of Capital Planning Principles Is Mixed. GAO-04-138. Washington, D.C.: January 16, 2004.
Federal Real Property: Vacant and Underutilized Properties at GSA, VA, and USPS. GAO-03-747. Washington, D.C.: August 19, 2003.
VA Health Care: Framework for Analyzing Capital Asset Realignment for Enhanced Services Decisions. GAO-03-1103R. Washington, D.C.: August 18, 2003.
Department of Veterans Affairs: Key Management Challenges in Health and Disability Programs. GAO-03-756T. Washington, D.C.: May 8, 2003.
VA Health Care: Improved Planning Needed for Management of Excess Real Property. GAO-03-326. Washington, D.C.: January 29, 2003.
Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 2003.
High-Risk Series: Federal Real Property. GAO-03-122. Washington, D.C.: January 2003.
VA Health Care: VA Is Struggling to Address Asset Realignment Challenges. GAO/T-HEHS-00-88. Washington, D.C.: April 5, 2000.
VA Health Care: Improvements Needed in Capital Asset Planning and Budgeting. GAO/HEHS-99-145. Washington, D.C.: August 13, 1999.
VA Health Care: Challenges Facing VA in Developing an Asset Realignment Process. GAO/T-HEHS-99-173. Washington, D.C.: July 22, 1999.
VA Health Care: Capital Asset Planning and Budgeting Need Improvement. GAO/T-HEHS-99-83. Washington, D.C.: March 10, 1999.
The Department of Veterans Affairs (VA) maintains partnerships, or affiliations, with university medical schools to obtain medical services for veterans and provide training for medical residents. In 2002, the Medical University of South Carolina (MUSC)--which is affiliated with VA's medical facility in Charleston--proposed that VA and MUSC enter into a joint venture for a new VA facility as part of MUSC's plan to expand its medical campus. Under the proposal, MUSC and VA would jointly construct and operate a new medical center in Charleston. In 2004, the Capital Asset Realignment for Enhanced Services (CARES) Commission, an independent body charged with assessing VA's capital asset requirements, issued its recommendations on the realignment and modernization of VA's capital assets. Although the Commission did not recommend a replacement facility for Charleston, it did recommend, among other things, that VA promptly evaluate MUSC's proposal. This testimony discusses GAO's preliminary findings on the (1) current condition of the Charleston facility, (2) extent to which VA and MUSC collaborated on the joint venture proposal, and (3) issues for VA to consider when exploring the opportunity to participate in the joint venture. VA concurred with GAO's preliminary findings. The most recent VA facility assessment and the CARES Commission concluded that the Charleston medical facility is in overall good condition and, with some renovations, can continue to meet veterans' health care needs in the future. VA officials attribute this to VA's continued capital investments in the facility. For example, over the last 5 years, VA has invested approximately $11.6 million in nonrecurring maintenance projects, such as replacing the fire alarm system and roofing. To maintain the facility's condition over the next 10 years, VA officials from the Charleston facility have identified a number of planned capital maintenance and improvement projects, totaling approximately $62 million. VA and MUSC have collaborated and communicated to a limited extent over the past 3 years on a proposal for a joint venture medical center. For example, before this summer, VA and MUSC had not exchanged critical information that would help facilitate negotiations, such as cost analyses of the proposal. As a result of the limited collaboration, negotiations over the proposal stalled. However, after a congressional delegation visit in August 2005, VA and MUSC took steps to move the negotiations forward. Specifically, VA and MUSC established four workgroups to examine critical issues related to the proposal. The MUSC proposal for a new joint venture medical center presents an opportunity for exploring new ways of providing health care to Charleston's veterans, but it also raises a variety of complex issues for VA. These include the benefits and costs of investing in a joint facility compared with other alternatives, legal issues associated with the new facility such as leasing or transferring property, and potential concerns of stakeholders, including VA patients and employees. The workgroups established by VA and MUSC are expected to examine some, but not all, of these issues. Additionally, some issues can be addressed through collaboration between VA and MUSC, but others may require VA to seek legislative remedies.
A number of communicable disease threats have raised concerns about international transmission and travel since the 2003 SARS epidemic, which was the first major new disease of the 21st century, according to the World Health Organization (WHO) of the United Nations. WHO described the 2003 epidemic as a watershed event because it revealed how much the world had changed in terms of the impact that communicable diseases can have in a highly mobile and closely interconnected world. Figure 1 provides information about major communicable disease epidemics since 2002.

The International Health Regulations (IHR) are an international agreement of the World Health Assembly, the governing body of WHO, and were originally adopted by the Assembly in 1969 to address certain disease threats. The IHR have evolved since then in response to the growth in international travel and trade and the emergence of international disease threats. Most recently, the IHR were revised in 2005 following the SARS epidemic. WHO implements and oversees the IHR and—together with its partners, such as the International Civil Aviation Organization (ICAO)—helps member states build response capacities. ICAO plays a key role in coordinating the international aviation response to public health risks. Through the IHR, WHO member states have agreed to build core capacities at designated ports of entry, such as airports, to limit the spread of public health risks—such as communicable disease threats—while at the same time minimizing any unnecessary interference with travel and trade.

ICAO develops international standards and recommended practices (SARPs) for civil aviation systems, in cooperation with its member states and global aviation organizations. Member states, including the United States, are obligated to establish regulations or take other appropriate steps to implement the ICAO standards within their own civil aviation systems. In the United States, different agencies are responsible for different aspects of the civil aviation system, as discussed below. The SARPs include some health-related standards and recommended practices based on the most recent IHR obligations. To encourage member states to comply with these health-related SARPs, ICAO has developed a template for the development of national aviation public-health emergency-preparedness plans, which are obligations under the IHR. The Collaborative Arrangement for the Prevention and Management of Public Health Events in Civil Aviation (CAPSCA) also works to bring together international, regional, national, and local organizations to develop a coordinated approach to preparedness and response. Other international organizations, notably Airports Council International (ACI) and IATA, also provide assistance to airports and airlines, respectively, in preparing for communicable disease threats.

In the United States, a number of federal agencies, aviation stakeholders, and others have roles and responsibilities in preparing for, assessing, and responding to communicable disease threats in the aviation system. Each of the federal agencies involved in preparing for or responding to communicable disease threats from abroad has a different mission, including those described below, that affects its responsibilities for protecting against communicable disease threats. DHS and HHS are the lead agencies in a response to a communicable disease threat, and other federal agencies provide support as necessary.
Within DHS, CBP aims to safeguard America's borders, thereby protecting the public from dangerous people and materials while enhancing the nation's global economic competitiveness by enabling legitimate trade and travel. CBP officials at ports of entry, including airports, conduct the primary inspection of arriving international travelers and have authority to permit or deny admission to the United States.

Within HHS, CDC has defined its mission as protecting America from health, safety, and security threats, both foreign and domestic. With its partners, such as CBP, CDC responds to sick travelers who arrive in the United States at major airports, seaports, or land border crossings, when warranted. CDC alerts travelers about disease outbreaks and steps they can take to protect themselves. CDC also has the authority to quarantine passengers traveling from foreign countries, if necessary, to protect the general population and respond to disease threats to the United States.

Within DOT, FAA is responsible for safety of flight in the United States and the safe and efficient movement of air traffic in the national airspace system, as well as for the safety of U.S. airlines, other U.S. operators, and FAA-certificated air crews worldwide. As part of this responsibility, the agency regulates and certificates airports, airlines, and airmen and provides guidance through advisory circulars and other means.

Within the Department of Labor, OSHA aims to assure safe and healthful working conditions for working men and women by setting and enforcing standards and by providing training, outreach, education, and compliance assistance.

The Department of State (State) has the authority to grant visas, which allow foreign citizens to travel to a U.S. port of entry (generally an airport) and request permission to enter the United States.

Depending on location and threat, these agencies, along with aviation stakeholders and their partners—including local public health authorities, first responders, contracted aviation-services firms, and others—may each have a role in preparing for or responding to a communicable disease incident. That is, the specific response actions taken by individual entities, such as airports and airlines—as well as federal, state, and local authorities—will depend on the operating characteristics of the airline or airport, disease characteristics, and the type and level of threat that exists. Finally, some roles and responsibilities for a response to a threat in the aviation system are established in law or by agreement, and others may be defined in preparedness plans. Airports are required by FAA regulations to develop airport emergency plans (AEP), which must address a variety of hazards, including aircraft incidents and accidents, acts of terrorism, fires, natural disasters, hazardous materials, power failures, or water rescues. These plans are not required to address communicable disease threats.

The risk of disease transmission in an airport and aboard airlines may be heightened during a communicable disease epidemic, although airports, airlines, and public health authorities may have to address travelers with more common communicable diseases, such as tuberculosis or measles, at any time. The recent Ebola epidemic in West Africa provides an example of a regional epidemic that triggered governments and aviation stakeholders to take additional precautions during each stage of air travel to minimize the spread of the disease.
Figure 2 shows routine and potential enhanced safety measures that may be taken before, during, and after flights by a variety of stakeholders.

Before boarding an aircraft, passengers may be prevented from travel if they have a communicable disease that could pose a public health threat during the flight. For example, in the United States, DHS and HHS can identify travelers who are not allowed to travel, based on public health threats. Under DOT regulations, airlines may refuse to board a passenger with a communicable disease—or otherwise restrict or delay that passenger's travel—only if the passenger poses a direct threat to the health and safety of others. Additionally, the Department of State can restrict visas for foreign travelers with a communicable disease, preventing them from entering the United States. Governments in areas experiencing an outbreak may screen passengers exiting the area and restrict or discourage the transport of ill or possibly contagious passengers. Under the IHR, these decisions may be made based on recommendations from WHO, although states can make their own entry- and exit-screening decisions. During the SARS epidemic, WHO recommended screening travelers on international flights for symptoms associated with SARS before their departure from affected areas and advising travelers with symptoms to postpone travel. To date, WHO has not recommended screening passengers departing from a U.S. airport, and the United States has never instituted such a precaution, according to CDC officials. Aviation stakeholders have questioned the legal authority by which the United States could implement exit screening. According to CDC officials, such screening could be done under HHS's quarantine and isolation authorities.

CDC regulations require pilots to immediately report to CDC any deaths, or the occurrence of any travelers with signs or symptoms that may indicate a communicable disease infection, during international flights coming to the United States. In the case of an ill traveler, CDC guidance recommends that the aircraft's crew take practical measures to protect themselves and others. These measures may include avoiding direct contact with bodily fluids and, if indicated, isolating ill passengers who exhibit specific signs or symptoms consistent with a communicable disease, as well as providing ill passengers with tissues or a mask. In conducting its assessment, CDC may also request that aircraft crewmembers hand out information about health risks or collect and report information on a suspected ill passenger's travel history, as was done during the recent Ebola epidemic in West Africa. According to CDC officials, reporting suspected ill travelers before the flight's arrival gives ground-based responders time to prepare for the medical assessment and treatment of the traveler upon arrival, if warranted.

Once an aircraft with a suspected ill passenger approaches an airport, decisions about where to park the aircraft, how to respond to the suspected ill passenger, and how to deplane other passengers may be coordinated among stakeholders, according to ICAO guidance. Federal or local public health officials, first responders (e.g., fire or emergency medical technicians), airport authorities, air traffic control personnel, or a combination of these stakeholders may make these decisions and lead certain components of the response, based on the situation and available response protocols or preparedness plans.
If a communicable disease is confirmed, CDC is to follow established protocols and work with state and local public health authorities to assess and provide interventions to other travelers onboard the aircraft, if necessary.

Passengers infected with respiratory, gastrointestinal, or blood-borne communicable diseases may contaminate aircraft or airports with bodily fluids. Whether any measures beyond routine airline- and airport-cleaning practices are necessary will depend upon the characteristics of the disease in question, according to CDC guidance. For international flights, CDC may require additional cleaning or disinfection to prevent the transmission of a communicable disease. Airline representatives told us that they may also opt for a more thorough decontamination as a precaution. Decontamination may be carried out by airport or airline staff or by contracted aviation-services firms. During flights, cabin crew may clean potentially infectious material to protect other passengers, and CDC provides guidance on how to carry out this targeted cleaning.

The occupational health and safety of airline, airport, or contracted aviation-services employees on the ground is overseen by OSHA, which sets and enforces workers' health and safety standards related to communicable diseases and provides guidance on personal protective equipment, decontamination, and handling waste contaminated by potentially infectious material, with certain exceptions. On board an aircraft in operation, responsibility for the health and safety of employees is divided between FAA and OSHA. FAA is responsible for all working conditions of flight crew (i.e., pilots, flight engineers, and flight navigators) and for most, but not all, working conditions of cabin crew (e.g., flight attendants).

According to DOT's origin-and-destination ticketing data, almost 52 million air passengers entered the United States from other countries in 2014, including returning U.S. citizens. While the United States does not receive non-stop commercial flights from all countries, including the West African countries that suffered the recent Ebola outbreak, passengers come from every corner of the globe and fly into airports both large and small. Figure 3 shows passenger arrivals from five regions of the world and the top five airports receiving passengers whose travel originated from each of these regions in 2014 (for a total of 12 airports), based on the original departure airport in the ticket itinerary. Together, these 12 airports received about 50 percent of the total number of passengers coming into the United States from abroad in 2014—accounting for more than 25 million passenger arrivals.

Even if an ill passenger on an international flight is not detected while onboard an aircraft, he or she may be identified after arrival during the customs and immigration inspection process. After an international flight arrives in the United States, passengers are to undergo routine inspection or possibly enhanced screening for communicable diseases under authorities held by HHS and DHS (by agreement). During primary inspection, CBP staff are expected to visually observe arriving international travelers for certain signs and symptoms of communicable diseases during their routine interactions with travelers and then notify CDC, as appropriate.
CBP and CDC may investigate further by asking specific questions during primary inspection—such as inquiring about travel to affected areas—or by conducting additional assessments, such as taking body temperatures during secondary or tertiary screening. For passengers who are asymptomatic (not displaying symptoms) but at heightened risk for a communicable disease, CDC officials may also establish a means of ongoing monitoring. For example, asymptomatic passengers from Ebola-affected countries—Guinea, Liberia, and Sierra Leone—receive a Check and Report Ebola (CARE) kit upon arriving in the United States if they are found to be at heightened risk of exposure. The kit contains guidance and tools to measure and report symptoms to local public health officials for the 21-day disease incubation period.

Local public health authorities are responsible for protecting public health within their jurisdictions. While CDC and state and local public health agencies coordinate closely on many issues, state and local public health authorities may, at their discretion and based on their legal authority, impose restrictions or requirements in their jurisdictions that are more stringent than those issued by CDC.

In certain extraordinary circumstances, passengers or flights from areas experiencing a communicable disease outbreak could be redirected to designated U.S. airports with the capacity to receive them. This process is commonly referred to as "funneling," and it may involve re-routing passengers by changing their itineraries or directing flights to certain airports. Beginning in October 2014, for example, CBP directed all flights to the United States with passengers whose recent travel included Ebola-affected countries to be routed to one of the following five designated airports, where CBP and CDC staff conducted enhanced entry screening procedures: Newark Liberty International Airport, Washington Dulles International Airport, Hartsfield-Jackson Atlanta International Airport, Chicago O'Hare International Airport, and John F. Kennedy International Airport. Prior to this passenger re-routing by airlines, these five airports accounted for 94 percent of existing arrivals from the affected countries in West Africa (Guinea, Liberia, and Sierra Leone), all of which arrived on connecting flights through other countries, according to CBP officials. Travelers who might have arrived at a different airport are now re-routed by airlines—or "funneled"—to arrive at one of these five designated airports. Any non-military U.S. health personnel returning from the Ebola-affected countries also have to return via these airports and go through enhanced screening.

Airports that are not designated to receive passengers from areas affected by a communicable disease outbreak may still encounter individuals who have recently traveled from affected areas, even when funneling has been put in place. There have been Ebola-related responses at airports that were not identified for funneling, for example. One way this could happen is through a passenger traveling on a "broken ticket"—a separate itinerary for travel between the affected country and the United States via an intermediate destination, such as a country in Europe. In this instance, a passenger from an affected country may have bought two tickets—one to Europe and a separate ticket to the United States, following a layover.
Another scenario is that a passenger could have transferred to a domestic flight after passing through a designated airport and developed symptoms of infection on the later flight. In cases such as these, CDC officials or local public health authorities, or both, may conduct public health assessments and follow-up activities.

All of the 14 airports and three airlines we reviewed have plans—often contained in multiple documents—in place for responding to communicable disease threats from abroad. The plans in place for each airport and airline generally address the high-level components that we identified as common among applicable federal and international guidance. We identified these components to provide a basis for assessing the breadth of the plans, not to evaluate the sufficiency of the plans' contents or the level of preparedness that the plans provide. We found that the plans in place at each of the 14 airports addressed the following six high-level components (a schematic sketch of this component-based review appears after the airline components list below):

1. Establishment of an incident command center.
2. Coordination among various stakeholders.
3. Selection and use of personal protective equipment for various stakeholders.
4. Training for various stakeholders.
5. Some protocols for responding to a threat, such as meeting the aircraft, maintaining a quarantine area, or transporting a suspected ill passenger.
6. Protocols for decontamination.

The plans in place at each airport were developed by, or in collaboration with, relevant airport stakeholders, including airport operators, first responders, state and local public health representatives, and officials from CDC and CBP, as applicable. Not all airports had a separate communicable-disease preparedness plan that alone addressed all six high-level components. For example, when asked about communicable-disease preparedness planning, representatives from 11 of the 14 airports reported that the procedures for responding to these threats at their airport were contained in multiple documents, ranging from a general emergency preparedness plan, such as the airport emergency plan (AEP)—required by FAA regulations, but not required to specifically address communicable diseases—to a disease-specific preparedness plan, such as a pandemic influenza response plan. Other types of documents included a checklist for first responders; standard operating procedures for a specific disease, such as Ebola; and CDC's communicable-disease response plans (which are discussed more below). During the Ebola outbreak, representatives from eight of the airports we reviewed reported developing an additional Ebola-specific response plan or adapting an existing plan.

All three of the airlines we reviewed have a preparedness plan for responding to communicable disease threats. The plans themselves were not available to us because of their proprietary nature; however, based on our conversations with airline representatives and a review of summary information regarding their plans, we can report that the three airlines' plans addressed the following four high-level components:

1. Establishment of an emergency response team and designation of an emergency response center.
2. Description of the triggers that inform the level and nature of a response.
3. Activation triggers for the response team and response center.
4. Identification of roles and responsibilities for relevant stakeholders.
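The component-based review described above can be pictured as a simple checklist comparison: for each high-level component, determine whether any of an airport's preparedness documents addresses it. The sketch below is our own minimal illustration of that idea, not a tool used by GAO, FAA, or CDC; the document names and coverage sets in it are hypothetical.

```python
# Minimal sketch of a component-based plan review, in the spirit of the
# approach described above. All document names and coverage are hypothetical.

AIRPORT_COMPONENTS = [
    "incident command center",
    "coordination among stakeholders",
    "personal protective equipment",
    "training",
    "response protocols",
    "decontamination protocols",
]

def review_plans(documents):
    """Given a mapping of document name -> set of components it addresses,
    report whether each high-level component is covered by any document."""
    covered = set().union(*documents.values()) if documents else set()
    return {component: component in covered for component in AIRPORT_COMPONENTS}

# Hypothetical airport whose procedures span multiple documents.
plans = {
    "airport emergency plan (AEP)": {"incident command center",
                                     "coordination among stakeholders"},
    "Ebola standard operating procedures": {"personal protective equipment",
                                            "response protocols",
                                            "decontamination protocols"},
    "first-responder checklist": {"training"},
}

for component, addressed in review_plans(plans).items():
    print(f"{component}: {'addressed' if addressed else 'gap'}")
```

As the sketch suggests, a component can be "addressed" even when no single document covers everything, which mirrors the finding that most airports' procedures were spread across multiple documents.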
Furthermore, all three airlines stated that they carry universal precaution kits that include equipment to respond to suspected communicable diseases onboard aircraft flying internationally.

As noted above, some airports have in place a CDC communicable-disease response plan (CDRP)—specifically, the 18 airports that currently have (or had) a CDC quarantine station on site, 11 of which were included in our review. CDRPs fulfill part of the IHR obligations for establishing core capacity at designated points of entry. The CDRPs, according to CDC officials, were developed in coordination with relevant stakeholders and partners at each airport and based on a framework provided by CDC to airport quarantine stations. The existence of a CDRP at an airport does not preclude an airport operator or other airport stakeholders from developing and maintaining one or more additional preparedness plans or documents. In fact, representatives from all but one of the airports that we reviewed that have a CDRP reported having additional preparedness documents (10 of 11 airports). Representatives from 3 of those 10 airports that reported having plans contained in multiple documents do not view the CDRP as the airport's main preparedness plan for communicable diseases. One CDC official from the Quarantine and Border Health Services Branch noted that CDRPs at some airports are more developed than others and recognized that CDC quarantine staff, in collaboration with relevant airport stakeholders, are continually updating and improving the CDRPs. Figure 4 shows the 16 U.S. airports that currently have a CDC quarantine station and the 2 airports that formerly had one. Each of these quarantine stations is also responsible for enforcing quarantine regulations at all airports within its assigned jurisdiction.

DOT officials told us that in 2010, DOT and the aviation industry requested that CDC expand its outreach to further the development of CDRPs beyond airports with quarantine stations on site to airports without them. CDC officials told us that since that request, the agency has been working to expand the coverage of CDRPs to select U.S. airports. These officials told us that they are in the process of identifying priority airports using criteria that include the number and origins of arriving international passengers. These officials reported that at least three airports without quarantine stations on site (one of which was included in our review) have already collaborated with CDC to develop an airport preparedness plan for communicable disease threats. According to these officials, however, CDC response efforts to disease outbreaks, such as the cholera outbreak following the Haiti earthquake in 2010, have slowed these outreach efforts. CDC officials said they hope to complete this effort in the next several years but do not have an established completion date.

FAA officials told us that they encourage airports and airlines to develop preparedness plans for communicable disease threats. For example, in July 2009, FAA's Office of Airport Safety and Standards issued a CertAlert to FAA airport inspectors to encourage airport operators either to update their pandemic flu plans—plans that airports may have developed in response to the avian influenza threat of H5N1 that began in 2003—or, for those that did not have a plan, to develop such a plan for their airport. In late 2014, FAA officials told us that they had begun planning an update of the July 2009 CertAlert, but it was delayed due to the Ebola response.
As of August 2015, no updated CertAlert had been issued. FAA officials told us that they do not track or review airport or airline plans—in part because they lack adequate public health expertise, which they believe CDC has, to assess whether an airport's or airline's plan would be effective at preventing or reducing the spread of communicable diseases. FAA officials further noted that communicable diseases rarely threaten the safety of flight, which is FAA's primary regulatory jurisdiction. CDC officials in the Division of Global Migration and Quarantine told us that they review CDRPs every 2 years at the 16 airports with quarantine stations currently on site, and that CDC quarantine station staff at the airport review them in the intervening years. However, CDC officials noted that they do not formally track the development of preparedness plans for communicable disease threats at airports that do not currently have a quarantine station on site. Airports and airlines are not required to develop and maintain preparedness plans for communicable disease threats, and neither FAA nor CDC systematically tracks which airports and airlines have such plans. Thus, FAA and CDC officials could not tell us the full extent to which airports that receive international passengers and airlines that operate international flights have preparedness plans in place. The 18 airports with CDRPs accounted for about 58 percent of international arriving passengers to the United States in 2014. These 18 airports—together with the 3 airports we reviewed that lack CDRPs but have their own preparedness plans—accounted for about 65 percent of international arriving passengers in 2014 (about 34 million of the almost 52 million total). A variety of entities, including FAA, CDC, state and local public health entities, and international sources, provide resources to help airports and airlines develop communicable-disease preparedness plans. In 2006, DOT, in coordination with CDC, published the National Aviation Resource Manual for Quarantinable Diseases, which provides guidance for airports and airlines on how to develop a communicable-disease preparedness plan that can be adapted and implemented for communicable disease threats of various sizes and types. When we asked representatives from three airports specifically about DOT guidance for preparedness plans during interviews, representatives from two airports were familiar with the Manual but noted that it was outdated. An official from DOT's Office of Intelligence, Security, and Emergency Response told us that DOT has no plans to update it, in part because, in DOT's view, everything contained in the document can be found on other websites, and updating it might create a document that could not be revised rapidly enough to address an emerging public health threat. The guidance for specific disease threats that CDC publishes also provides some information for those attempting to develop plans or procedures for responding to a specific disease threat. For example, CDC published several guidance documents for airport and airline employees regarding Ebola, including guidance on personal protective equipment for airport and airline cleaning crews and interim guidance about Ebola infection for airline crews, cleaning personnel, and cargo personnel. Local public health entities also provide resources, such as public health advisories, that airports reported using to help develop such plans.
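The coverage percentages above follow directly from the report's rounded passenger counts; the short sketch below simply reproduces that arithmetic. The 52 million and 34 million figures come from the report, so this is a check of the stated numbers rather than an analysis of the underlying data.

```python
# Quick check of the coverage arithmetic, using the report's rounded figures.
total_arrivals_2014 = 52_000_000              # "almost 52 million" international arrivals
arrivals_at_airports_with_plans = 34_000_000  # 18 CDRP airports + 3 reviewed airports with own plans

print(f"Coverage: {arrivals_at_airports_with_plans / total_arrivals_2014:.0%}")  # ~65%
```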
Finally, some international guidance and technical assistance is available to airports and airlines in developing communicable disease plans. For example, in 2009, ACI, in collaboration with ICAO, published the Airport Preparedness Guidelines for Outbreaks of Communicable Disease to help airports prepare. Through CAPSCA, ICAO works to bring international, regional, national, and local organizations together to combine efforts and develop a coordinated approach to responding to public health risks. CAPSCA's efforts include providing voluntary visits to airports to help them prepare for communicable disease threats. In 2007, ICAO adopted a standard that obligates each ICAO member state to establish a national aviation-preparedness plan for communicable disease outbreaks that pose a public health risk or public health emergency of international concern. In 2010, ICAO, by resolution, further urged member states to ensure that the public health sector and the aviation sector collaborate to develop a national preparedness plan for aviation to help prevent the spread of communicable diseases through air travel, and that member states establish requirements for the involvement of stakeholders, such as airport operators and airlines, in the development of the plan. In guidance to member states for developing a national aviation-preparedness plan, ICAO recommends that such a plan include guidance that is generic to all communicable diseases and can then be adapted for specific diseases. Officials from the DOT office responsible for coordinating U.S. policy for presentation to ICAO told us that it is the responsibility of each member state either to implement regulations or other appropriate measures to comply with ICAO standards or to file a "difference" with ICAO. While the United States has not developed a national aviation-preparedness plan for communicable disease outbreaks, DOT and CDC officials contend that some elements of such a plan already exist. Specifically, officials from DOT's Office of Intelligence, Security, and Emergency Response and FAA's Office of National Security Programs and Incident Response, as well as CDC's Division of Global Migration and Quarantine, told us that some elements of a national aviation-preparedness plan are encompassed in various documents, including airports' individual plans and the CDRPs at airports with quarantine stations on site. However, FAA reported to ICAO in 2010—by way of answering an ICAO questionnaire on member states' fulfillment of this standard—that individual airport plans are intended to handle one or two flights with inbound passengers, not a full epidemic, which may require a response involving multiple airports on a national level. Officials from CDC's Division of Global Migration and Quarantine also told us that while the United States does not have a national aviation-preparedness plan, past planning efforts, such as the 2005 National Strategy for Pandemic Influenza and its associated 2006 implementation plan developed in response to the avian influenza (H5N1) threat, as well as CDC's Risk-Based Border Strategy (RBBS), helped inform their decision making in the national-level response to Ebola as that response pertained to the screening and risk assessment of passengers arriving from the affected countries.
The pandemic influenza national strategy and implementation plan, however, are neither aviation-specific nor designed to address communicable disease outbreaks of various types (e.g., different diseases), as we have found in past work. Furthermore, CDC officials told us that the RBBS has been superseded by CDRPs, which represent the most up-to-date preparedness efforts at U.S. airports. DOT and CDC officials also told us that while a national aviation-preparedness plan could have value, they do not believe that their respective agencies should lead the development of such a plan. DOT officials said that a national aviation-preparedness plan for communicable disease outbreaks would be valuable to support a unified approach in which multiple entities, including DOT, have input into the plan's development and can then test and exercise the plan. These officials also noted that while DOT's Office of the Secretary serves as the liaison to ICAO for Annex 9 to the Chicago Convention, in which the relevant ICAO standard is contained, complying with an ICAO standard could be led by any number of other federal agencies. DOT officials believe that while DOT should be a key contributor to the development of a national aviation-preparedness plan, HHS should be the lead federal agency in developing such a plan, in part because DOT does not have sufficient public health expertise, which they believe HHS does. CDC officials noted that since communicable disease is just one of many threats to the commercial aviation sector, a broader, all-hazards national aviation plan that includes communicable disease as a component may be more appropriate. These officials also noted that a stand-alone plan may not be necessary, as they believe that the elements currently in place are sufficient, as reflected in the successful national Ebola response effort. Yet these officials also stated that they could see value in aspects of a national aviation-preparedness plan through which stakeholders come together to discuss preparedness, resulting in a document that is collaborative and likely agreed upon by relevant parties. CDC officials told us that if a national aviation-preparedness plan were to be developed, DOT would be in the best position to lead the effort because FAA and DOT have stronger and deeper ties to the relevant stakeholders that would be involved in such a broad effort. While DOT and CDC may not agree on which agency should lead the development of a national aviation-preparedness plan, DOT's Office of the Secretary is the liaison to ICAO for Annex 9 to the Chicago Convention, in which the relevant ICAO standard is contained, and DOT is responsible for overseeing the aviation sector. ICAO's guidance to member states on developing a national aviation-preparedness plan also recommends that such a plan contain guidance that is generic to all communicable diseases and can be adapted to specific diseases. It also recommends that specific measures adopted at individual airports correspond to defined communicable disease threat alert levels, such as WHO's pandemic alert phases or a national public-health authority's alert levels, to help ensure that procedures are scaled up and down as circumstances of the public health threat change.
Adopting measures that correspond to different risk levels or types of diseases would provide individual airports with an adaptable and scalable framework with which to align their plans—without which airports could find it challenging to prepare for a national response effort. For example, representatives from four airports that were not designated to conduct enhanced screening for Ebola reported developing their own Ebola-specific response plans during the Ebola outbreak—sometimes with and sometimes without input from federal stakeholders. The airports did this in part because they did not know what their responsibilities would be in the long run or to what extent they would need to have procedures in place in the event that a suspected ill passenger was traveling on a broken ticket (that is, on separately booked connecting itineraries that obscure the passenger's full travel history). An adaptable and scalable framework would also improve harmonization of individual plans across airports and airlines—helping ensure that the individual plans work in accordance with one another during a national-level response effort—and serve as the basis for training airport and airline staff and crew. For example, representatives from one airport told us that, in their view, many airports had good efforts under way to respond to Ebola, but that the efforts were fragmented across airports, leaving passengers and airlines to deal with differences in how travel is handled at each airport. ICAO guidance to member states for developing a national aviation-preparedness plan for communicable disease outbreaks states that implementation of any measures within a preparedness plan should be a well-coordinated, multi-agency effort to avoid confusion, inconsistencies, and duplication of resources, as well as to minimize inconvenience to travelers. DOT officials reported not being involved in or consulted on the decision to funnel passengers from Ebola-affected countries to five airports and implement enhanced entry-screening procedures. And while the officials believe that funneling passengers was a good decision in the case of the Ebola threat, they expressed concern about what might happen during future national-level communicable disease response efforts if decisions affecting aviation are made without their input. For example, in response to the avian influenza (H5N1) threat that began in 2003, national efforts included discussions on funneling all international passengers through 30 U.S. airports and screening all arriving passengers—an option provided under RBBS. Representatives from three of the four airports that we spoke with about this issue, as well as ACI representatives, expressed concern that funneling all arriving international passengers to 30 airports and screening them was unrealistic due to the resource requirements it would impose on airports and the delays that could ripple across the national airspace system. DOT officials further noted that, from an air traffic control perspective, many major U.S. airports are already at or near full capacity, and shifting a significant amount of air traffic to these airports could result in gridlock. CDC officials acknowledged that funneling passengers to 30 airports and screening them all was a worst-case scenario and pointed out that RBBS was designed to be flexible and scalable and to serve as an adaptable framework for entry screening at airports, as the RBBS framework did for the Ebola outbreak.
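To make the adaptable, scalable framework described above concrete, the sketch below maps hypothetical alert levels to illustrative airport measures. Every level name and measure here is an assumption for illustration; neither ICAO nor CDC prescribes this particular mapping.

```python
# Illustrative sketch: scaling airport response measures to alert levels.
# Levels and measures are hypothetical, not drawn from ICAO or CDC guidance.
ALERT_MEASURES = {
    "routine":   ["report ill travelers to CDC", "maintain standard cleaning protocols"],
    "elevated":  ["brief response teams", "stage personal protective equipment",
                  "verify quarantine-area readiness"],
    "outbreak":  ["activate incident command", "meet affected aircraft at the gate",
                  "begin targeted entry screening of arrivals from affected regions"],
    "emergency": ["funnel affected flights to designated airports",
                  "conduct enhanced entry screening", "activate decontamination protocols"],
}

def measures_for(level: str) -> list[str]:
    """Return the measures an airport would scale up to at a given alert level."""
    if level not in ALERT_MEASURES:
        raise ValueError(f"unknown alert level: {level}")
    return ALERT_MEASURES[level]

print(measures_for("outbreak"))
```

A shared table of this kind is what would let individual airport plans scale up and down in step with one another during a national-level response.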
DOT officials highlighted that because the number of passengers coming from the Ebola-affected countries was quite small relative to the total number of international passengers entering the United States (less than 25,000 out of almost 52 million total passenger arrivals in 2014), re-routing passengers to five airports and the time and resources needed to conduct the enhanced screening did not place an unreasonable burden on the national aviation system. This may not be the case if a communicable disease threat were to come from China, for example, or another region with large numbers of passengers or flights to the United States. DOT officials also reported that they did not always have the opportunity, or were given insufficient time, to review or comment on CDC Ebola guidance or fact sheets addressed to aviation stakeholders. For example, a DOT official told us that because similar information was often posted in multiple places and because documents that the officials had reviewed in the past were renamed, revised, and republished, the official had to continually monitor CDC publications for guidance that included recommendations for the aviation industry. The DOT official highlighted that CDC guidance did not always portray DOT issues correctly and that failure to adequately coordinate with DOT could have safety consequences in some circumstances. For example, if a disinfectant used to clean suspected Ebola contamination is not compatible with the aircraft's materials (e.g., aluminum) or is used in the wrong manner, such as in too concentrated a solution, the aircraft could be damaged, which could negatively affect its airworthiness. Officials from CDC's Division of Global Migration and Quarantine acknowledged that some CDC webpages about Ebola developed before the 2014 outbreak began contained misinformation related to aircraft disinfectants, but noted that the information was promptly removed once officials became aware of the problem. These officials also told us they sought DOT's input on guidance relevant to aviation, but acknowledged that at times during the Ebola outbreak things moved very quickly and webpages were reorganized to make information easier to find. While aviation stakeholders we spoke with reported having plans that address communicable diseases, they also reported facing multiple challenges in responding to threats and taking actions to address these challenges. The challenges aviation stakeholders reported include obtaining guidance, communicating, coordinating among responders, and ensuring that employees have appropriate training, equipment, and sanitary workplaces. To address these challenges, aviation stakeholders reported taking actions such as developing communication tools and strategies; reviewing, exercising, and improving response plans; and providing training, equipment, and cleaning supplies. A national aviation-preparedness plan could serve as the basis for testing communication mechanisms among responders to ensure those mechanisms are effective before a communicable disease outbreak occurs. It could also serve as the basis for ensuring that airport and airline staff have received appropriate training and have access to properly maintained equipment to reduce the risk of exposure to communicable diseases during an outbreak.
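The scale argument at the start of the preceding paragraph is easy to check with the report's rounded figures; the snippet below is a back-of-the-envelope computation using those two numbers, not an analysis of the underlying passenger data.

```python
# Share of 2014 U.S. international arrivals affected by funneling
# passengers from the Ebola-affected countries (report's rounded figures).
ebola_country_arrivals = 25_000   # "less than 25,000" -- treated as an upper bound
total_intl_arrivals = 52_000_000  # "almost 52 million"

share = ebola_country_arrivals / total_intl_arrivals
print(f"At most {share:.3%} of international arrivals were re-routed")  # ~0.048%
```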
Stakeholders at 12 of the 14 airports we spoke with reported challenges in obtaining guidance on how to respond to communicable disease threats or in communicating during specific incidents. Various stakeholders, including federal agencies, airports, airlines, and contracted aviation-services employers, reported taking actions to improve access to timely guidance and communication. As we have found in prior work, planning efforts and exercises can help develop relationships between federal officials and stakeholders that are useful in responding to communicable diseases. Moreover, ICAO recommends that national aviation-preparedness plans include a communication system and emphasizes the importance of periodically testing this communication system. Representatives at 7 of the 14 airports we spoke with reported difficulties obtaining prompt and clear guidance from federal agencies, including CDC, CBP, and FAA, on how to respond to specific communicable disease threats, including Ebola. According to CDC, CDC quarantine station officials referred airport questions about the Ebola response to CDC headquarters to ensure that airports received consistent guidance reflecting the most up-to-date information. However, representatives we spoke with at 3 of the 10 airports with quarantine stations said that CDC headquarters did not provide requested guidance within short time frames. Representatives from two airports said the initial federal response to Ebola was not clear because it did not correspond to a national plan or unified approach with which the representatives were familiar. In addition, a representative at another airport reported confusion in determining the magnitude of the threat that Ebola posed and what guidance to follow, given that FAA did not address these issues. CDC officials we spoke with described inherent challenges in providing prompt guidance on the recent Ebola threat, as well as actions the agency took to address airports' and airlines' information needs during the response. Communicable disease outbreaks are unpredictable by their very nature. CDC officials told us that information evolved during the Ebola response and that answers to particular questions were not always readily available. In these instances, CDC formulated responses with the assistance of leadership and subject-matter experts. According to officials, CDC dedicated additional resources to provide in-depth and timely Ebola guidance and met with aviation industry partners both collectively and individually. Airport emergency responders at 6 of the 14 airports we interviewed told us that airlines sometimes do not provide them with information that is as complete, accurate, and immediate as they would like when a traveler becomes ill during a flight. CDC officials also told us that information provided to CDC by airlines or air traffic control was often incomplete or inaccurate. While CDC requires pilots on international flights to U.S. airports to immediately notify CDC of ill travelers suspected of having a communicable disease—as determined by signs and symptoms—flight crews must focus on safely operating the aircraft during critical phases of flight such as takeoffs and landings. This situation may preclude immediate notification, according to CDC officials.
Furthermore, CDC officials and some responders we spoke with said that ill travelers or their caregivers may be reluctant or unable to share information, that cabin crews may lack expertise in assessing relevant medical conditions, and that information may develop inaccuracies as it passes from passenger to flight attendant to pilot to various ground-based responders. CDC officials also stated that a lack of proper equipment (a thermometer, for example) on the aircraft may prevent flight crews from providing a rapid and detailed notification of illness. CDC makes available guidance and tools for reporting traveler deaths or illnesses that outline reporting requirements and the information requested. However, airport responders and CDC officials said that airlines do not use a common template to record or communicate this requested information. Inaccurate or untimely information can slow down an appropriate response (such as conducting assessments before travelers have exited the aircraft) or trigger precautions unnecessarily. For example, representatives at one airport described launching an Ebola response after being alerted by an airline of a suspected case, only to discover that the passenger was traveling from East Africa—rather than an Ebola-affected area in West Africa—and suffering from a fear of flying rather than a physical illness. Airport, airline, and other stakeholders have taken actions to improve communication about ill travelers during flights, including conducting real-time consultations with emergency medicine consultants, evaluating telemedicine technologies, and dedicating a radio frequency for emergencies to enable in-flight communication with ground-based medical responders. See appendix II for additional information about technologies used to respond to communicable disease threats. In addition, CDC officials said that they conduct follow-up investigations when they receive reports of suspected communicable disease incidents on flights that airlines did not report, and that CDC addresses with airlines any deficiencies found. Aviation stakeholders have developed various tools to improve their communication about and response to medical problems. For example, one U.S. airline uses a checklist form to guide flight attendants in collecting and sharing traveler information with remotely located emergency medicine professionals. Another example comes from the AIRSAN Project, a stakeholder network that addresses European Union-level responses to public health threats in air transport. The AIRSAN Project developed operational tools, including a flow chart and questionnaires, to help cabin crews with decision making and information gathering so that they can assess public health risks, communicate with ground-based responders (including public health officials who use the same tools), apply public health measures during the flight, and minimize interference with international traffic. Representatives at two of the three airlines we spoke with said that CDC does not routinely notify airlines of the results of an ill passenger's screening or diagnostic tests unless a positive diagnosis confirms a communicable disease. Representatives from one airline stressed that it experienced challenges obtaining information about the status of ill passengers or of passengers who were not ill during flight but screened positive for Ebola risk after leaving the aircraft. Representatives from this airline said these challenges affect their operations as well as their relationships with employees and customers.
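The responders' complaint that airlines lack a common template for the information CDC requests suggests what a shared, structured report might look like. The sketch below is purely illustrative: the field names are assumptions loosely based on the details discussed above (flight, seat, symptoms, travel history), not CDC's actual reporting form or any airline's checklist.

```python
# Hypothetical common template for reporting an ill traveler in flight.
# Field names are illustrative assumptions, not CDC's actual form.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class IllTravelerReport:
    flight_number: str
    origin: str
    destination: str
    seat: str
    symptoms: list[str]
    onset_utc: datetime
    recent_travel: list[str] = field(default_factory=list)  # countries recently visited
    temperature_c: float | None = None  # None if no thermometer is onboard

    def to_payload(self) -> dict:
        """Serialize the report for transmission to ground-based responders."""
        payload = asdict(self)
        payload["onset_utc"] = self.onset_utc.isoformat()
        return payload

report = IllTravelerReport(
    flight_number="XX123", origin="FNA", destination="JFK", seat="23C",
    symptoms=["fever", "vomiting"], onset_utc=datetime.now(timezone.utc),
    recent_travel=["Sierra Leone"], temperature_c=38.9,
)
print(report.to_payload())
```

A fixed schema like this is one way the same facts could survive the passenger-to-attendant-to-pilot-to-ground relay without developing the inaccuracies responders described.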
CDC officials confirmed that CDC does not routinely notify airlines when CDC determines a passenger's condition is not of public health concern or before diagnosing passengers suspected of having communicable diseases. However, CDC protocols call for notifying airlines when a positive diagnosis confirms a communicable disease of public health concern. CDC officials also said that if there were suspicions but no diagnosis of a communicable disease, CDC might provide airlines with general information when media coverage or other concerns arise but would not provide personally identifiable information. All of the employees we spoke with from two contracted aviation-services firms that conduct aircraft cabin cleaning said that after incidents in which a traveler became ill during a flight, the cabin crew did not always notify them of potentially infectious bodily fluids that had contaminated the aircraft. In its general infection-control guidance to airlines, CDC recommends that cabin crews notify cleaning crews of where and how ill passengers may have contaminated the aircraft and remind cleaning crews that additional personal protective equipment may be required. Given that it is typically unclear whether an illness that develops during a flight is contagious, CDC recommends treating any bodily fluid as potentially infectious, regardless of whether an identified communicable disease outbreak threatens to spread to the United States. Aircraft cleaners we spoke with said that cleaning crews often have limited time to clean an aircraft before the boarding process begins for the next flight, and so may need to request additional time to conduct the additional cleaning necessary to decontaminate the aircraft. Some of the airlines and contracted aircraft-cleaning employers we spoke with reported taking steps to improve communication about travelers' health status after leaving the aircraft and about any contamination that cleaners may need to address. For example, a foreign airline has developed a paper-based form for cabin crews and public health officials to record and share information about potentially contaminated items on the aircraft and the disinfection agents the cleaning contractor should use. A contracted aircraft-cleaning employer we spoke with reported directing employees who clean international flights at one of the five enhanced-screening airports for Ebola to notify their crew lead of any bodily fluids they encounter and to treat these fluids as potentially infectious. Keeping the traveling public informed about communicable disease risks and implications can help manage public anxiety and avoid unnecessary social disruption and economic losses, according to WHO. WHO notes that intense public scrutiny may accompany a communicable disease incident, and DOT recommends in its National Aviation Resource Manual for Quarantinable Diseases that airports plan "how they will handle the onslaught of media inquiries and reports from the very outset of the communicable disease incident." We interviewed airport representatives and their partners, such as emergency management and public health officials, and found that the need to provide public information in the wake of the Ebola outbreak and related airport incidents could create a variety of challenges.
The 14 airports we spoke with and their partners provided the following examples:

Responding quickly enough to rapidly developing public concern: Some airport representatives said that suspected communicable disease incidents could quickly generate public concern. Representatives at three airports we spoke with emphasized the need to provide information quickly, and representatives at two of these airports stated a preference for a proactive rather than reactive approach to the media.

Providing partners useful information: Emergency management officials at one airport conducting enhanced screening for Ebola and state public health officials working with this airport said they did not receive the information they needed to respond to media requests or inform senior public officials. However, representatives at another airport that conducts enhanced screening for Ebola noted that sharing information about a suspected communicable disease incident too broadly could cause unnecessary alarm.

Addressing the volume of concerns: Representatives from two airports said that addressing public information requests could require significant resources or create a challenging work environment.

Some airport representatives and union representatives also identified instances when information was requested that they believe should not be made available or could be better secured. For example, union representatives for cabin crews expressed concern that co-workers can identify crewmembers on a flight with an ill passenger and subsequently avoid working with them or even make their identities public via social media. Union representatives suggested that airlines could do more to protect crewmembers' identities after a potential communicable disease incident, but an airline we spoke with said that crewmembers' identities could be discovered by a variety of means outside of the airline's control, including direct observation. Airport and airline representatives we spoke with identified actions they took to provide information about the Ebola threat to better inform the public. For example, three airports we spoke with highlighted using social media to provide information or respond to concerns in real time. Representatives from one of these airports and one of the three airlines we spoke with noted that it was useful to disseminate public information developed by CDC because of its credibility. Representatives at 8 of the 14 airports that we interviewed identified challenges in coordinating various entities' roles and actions when conducting communicable disease responses or exercises. Representatives from the 14 airports we spoke with and their partners reported challenges with:

Lines of authority and plan alignment: Representatives from four airports we interviewed reported challenges determining lines of authority, such as whether CDC or fire department officials lead emergency medical services, or aligning stakeholders' response plans, such as airlines' plans, with the airports' response plans.

Unnecessary interference: Representatives from three airports we spoke with reported that the actions of one type of responder had negative implications for another responder or for airport operations and that these complications were avoidable. For example, during the response to a passenger suspected of having Ebola, responders blocked off a road to provide themselves with space to put on personal protective equipment.
However, in so doing they blocked all baggage-handling trucks' access to the baggage claim area, and in turn, the baggage-handling trucks blocked other responders' access to the aircraft.

Coordinating with contracted aviation-services firms: Representatives at two airports said that after completing the questionnaire we provided them, they realized that they likely should do more to coordinate with the contracted aviation-services firms that operate at the airport.

Airport representatives reported taking various approaches to improve their coordination during a response. Airport officials reported using strategies such as conducting meetings or training with aviation stakeholders to provide information and clarify lines of authority in responding to communicable diseases, using centralized notification and communication hubs, and coordinating response activities through emergency operations centers or unified command structures. In addition, airport representatives at 2 of the 14 airports we interviewed highlighted their practice of reviewing the response plans of each airline operating at the airport to understand the airlines' approach and to assist with any gaps that the airport might identify. Representatives from each of the 14 airports we spoke with used some level of exercises and debriefs to improve the efficiency and effectiveness of their response, including four airports that conducted full-scale exercises addressing simulated communicable disease incidents. In addition, airports debriefed staff involved with actual incidents involving communicable disease response to assess and improve their operational capability. However, neither DOT nor HHS requires airports to conduct communicable disease exercises and debriefs, and the communicable disease exercises airports conducted varied in comprehensiveness from tabletop to full-scale exercises, according to airport officials with whom we spoke. According to an aviation medicine expert at ICAO, collaboration between aviation and public health officials presents the biggest challenge in managing communicable diseases in the aviation sector. For example, under airport all-hazards plans, officials typically isolate aircraft away from the terminal in order to minimize suspected threats (e.g., bomb threats), but in a public health emergency it may be more appropriate to park an aircraft near the terminal to provide emergency responders access, according to this expert. Representatives from 3 of the 14 airports we interviewed mentioned adapting their practices during the Ebola outbreak or recent exercises to park incoming aircraft carrying ill travelers suspected of having communicable diseases at or near the gate rather than at a remote location. Contracted aviation-service employees—including airport cleaning, aircraft cleaning, and passenger-service employees (e.g., wheelchair attendants)—and associated union representatives we interviewed expressed concern that these service employees did not receive adequate communicable disease training and reported challenges accessing appropriate personal protective equipment, cleaning equipment, and cleaning supplies. Inadequate training, equipment, and supplies could lead to employee exposures to pathogens that could in turn result in infections. This risk could extend to passengers, since they share the same aircraft environment.
OSHA violations provide some evidence for concerns and challenges related to appropriate pathogen-exposure-control planning, training, vaccinations, and personal protective equipment. OSHA's blood-borne pathogens standard requires employers to provide employees who encounter blood, certain bodily fluids, and other potentially infectious materials while carrying out job duties with:

Training: initial and annual training—including the opportunity to ask questions of a knowledgeable trainer—on methods to control exposures to pathogens, and additional training when changes occur that affect employees' occupational exposure to potentially infectious materials.

Personal protective equipment: appropriate personal protective equipment such as gloves, gowns, eye protection, and masks.

Sanitary work surfaces: work surfaces that have been noticeably contaminated by potentially infectious materials must be decontaminated with an appropriate disinfectant immediately or as soon as feasible.

Vaccination and post-exposure evaluation and follow-up: employees must be offered the hepatitis B vaccination and be evaluated and provided follow-up after an exposure incident.

We spoke with nine workers employed by aviation-services firms that contract with airports or airlines. Collectively, these nine employees worked for four different firms at four separate airports. Employees and union representatives we spoke with reported gaps in training, equipment, supplies, and time to decontaminate aircraft.

No routine or outbreak-specific training: Employees from three of the four contracted aviation-services firms we spoke with said that employers do not provide formal, hands-on training to help workers understand risks and minimize their exposure to potentially infectious materials, and that employers did not provide hands-on training to respond to specific disease outbreaks such as Ebola. For example, aircraft cabin cleaners from one firm reported not knowing where to dispose of hazardous material and so sometimes simply disposed of it with non-hazardous garbage.

Inadequate personal protective equipment: Aircraft cabin cleaners we spoke with from the two firms that conduct cabin cleaning reported that the gloves employers provided were too thin and that they could not replace gloves immediately if they ripped because of the need to clean aircraft quickly.

Unsanitary conditions and unavailable resources to clean: Wheelchair attendants at both airports where we interviewed passenger-service employees reported that wheelchairs were not always decontaminated after coming into contact with potentially infectious materials such as feces. Employees with each of the three firms that conduct airport or aircraft cabin cleaning reported lacking sufficient and clean towels. For example, one employee said that cabin cleaners sometimes use the same towels to clean potentially infectious materials and later to clean food service equipment such as coffeemakers. Employees at two of the three firms that conduct cleaning reported difficulties accessing cleaning solutions, and employees we interviewed from one of the two firms that conduct aircraft cabin cleaning said that cleaning solutions sometimes are not properly labeled, causing them to use the wrong concentration.
Insufficient time to clean: A union representative and employees we interviewed from one of the two firms that conduct aircraft cabin cleaning noted that some cleaning solution instructions indicate that the solution should sit for a period of time on potentially contaminated surfaces before cleaning, but that this was not always possible when cleaners have to quickly prepare the aircraft for another flight.

Violations of state and federal occupational health standards by contracted aviation-services employers lend some support to employees' concerns that aviation-services employers do not always ensure that their employees receive blood-borne pathogen training and personal protective equipment. Union representatives provided us with examples of citations between December 2012 and July 2015 resulting from complaints aviation-services employees filed with the union's assistance. We used publicly available information from OSHA to confirm that at least 11 of these citations resulted in violations of OSHA's blood-borne pathogens standard or analogous state standards that are at least as effective. Among these violations were instances when aviation-services employers did not provide employees with appropriate pathogen-exposure-control planning, training, vaccinations, and personal protective equipment. Eight of the 11 violations were designated serious violations—a designation indicating a substantial probability that death or serious physical harm could result, unless the employer did not, and could not with the exercise of reasonable diligence, know of the presence of the violation. In total, OSHA found that these 11 violations encompassed 680 instances in which conditions did not meet OSHA's blood-borne pathogens standard, and almost all of these instances (676 of 680) affected over 100 employees. OSHA records indicate that employers took corrective actions to address these violations. We interviewed representatives from two aviation-services employers that contract with airports and airlines, and both said that they comply with the training, personal protective equipment, and decontamination standards required by regulation. Representatives from the firm that conducts aircraft cabin cleaning said that airlines provide employees with labeled cleaning products that trained managers dilute to ensure that the products used are appropriate as indicated by the original equipment manufacturer. In addition to providing employees with required training, personal protective equipment, and supplies, representatives from both aviation-services firms we spoke with reported taking additional precautions during the Ebola outbreak, such as providing employees with additional hands-on training, personal bottles of hand sanitizer, and information about Ebola on the tablet devices that some employees use to carry out job duties. In addition, airports, airlines, and union representatives we spoke with reported taking steps to mitigate aviation-service employees' exposure to communicable diseases, especially since the Ebola threat emerged. For example, representatives from two airports we spoke with have established airport minimum standards—including hazardous material training—to qualify or license the aviation-services firms that operate at the airport.
Representatives from all three airlines we spoke with said that they provided contracted firms with additional information to help them prepare for the Ebola threat and reported taking steps to ensure that contracted employers provide employees appropriate training and personal protective equipment. Union representatives also reported providing training on infection control for aviation-service employees at some international airports during the Ebola outbreak. Air travel—more than any other mode of transportation—creates the potential for infected persons to move quickly from one part of the world to another while sharing confined quarters with other travelers. With the anticipated growth in international air travel, the recurring threat of communicable diseases from abroad, and the potential economic cost of disrupting air travel, it is imperative that the U.S. aviation system be sufficiently prepared to help respond to any communicable disease threat. The 14 airports that we reviewed (11 of which have CDC-developed CDRPs) had a plan or plans in place that, in combination with one another, addressed the six high-level components that we identified as common in federal and international guidance. CDC is working to expand the development of CDRPs to select U.S. airports that the agency is currently identifying, using criteria involving the origins and total volume of international arriving passengers, but it is uncertain when CDC will be able to complete this effort. Furthermore, Annex 9 to the Chicago Convention obligates member states to establish a national aviation-preparedness plan—a plan intended to provide a mechanism for the public health sector to coordinate with the aviation sector in the event of a communicable disease threat. Yet DOT and CDC officials acknowledge that only certain "elements" of a national aviation-preparedness plan are in place. Such a plan could help maximize the effectiveness of a response to a public health threat, while minimizing potential inefficiencies in the national response effort and unnecessary disruptions to the national aviation system. A national aviation-preparedness plan that is generic to all communicable diseases and can be adapted for specific diseases would provide individual airports and airlines with an adaptable and scalable framework with which to integrate their individual plans and would promote harmonization of individual plans across airports and airlines. As such, the plan could also serve as the basis for testing communication mechanisms among responders to help ensure those mechanisms are effective. In addition, it could help ensure that airport and airline staff have received appropriate training and have access to properly maintained equipment during an outbreak to reduce the risk of exposure to communicable diseases. Finally, DOT officials expressed concern about their lack of involvement in decisions made during the Ebola outbreak that involved the aviation sector. Developing and maintaining a national aviation-preparedness plan could foster a shared understanding and agreement among all relevant stakeholders and help balance the needs of the aviation and public health sectors. To help improve the U.S. aviation sector's preparedness for future communicable disease threats from abroad, we recommend that the Secretary of Transportation work with relevant stakeholders, such as the Department of Health and Human Services, to develop a national aviation-preparedness plan for communicable disease outbreaks.
Such a plan could establish a mechanism for coordination between the aviation and public health sectors and provide clear and transparent planning assumptions for a variety of types and levels of communicable disease threats. We provided a draft of this product to DOT, HHS, DHS, Labor, and State for comment. In its written comments, reproduced in appendix III, DOT partially concurred with our recommendation. State did not provide comments to include in this report. HHS, DHS, and Labor provided only technical comments, which we incorporated as appropriate. With regard to our recommendation, DOT agreed that there is a need for a national aviation-preparedness plan for communicable diseases to help improve the U.S. aviation sector's preparedness for future communicable disease threats. DOT further proposed that those agencies that have both the legal authority and the expertise for public health take the lead role in developing such a plan within the existing interagency framework for national-level, all-hazards emergency preparedness planning, in which DOT stands ready to participate. We agree that public health expertise is needed in developing a national aviation-preparedness plan. However, as stated in our report, DOT has primary responsibility for overseeing the aviation sector, and DOT's Office of the Secretary is the liaison to ICAO for the Annex to the Chicago Convention that obligates member states to establish a national aviation-preparedness plan. As such, we believe that DOT is in the best position to work with its relevant stakeholders, including those that have the needed public health expertise, to develop a national aviation-preparedness plan. DOT also provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Secretary of Transportation, the Secretary of Health and Human Services, the Secretary of Homeland Security, the Secretary of Labor, the Secretary of State, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. GAO was asked to review the preparedness of the U.S. aviation system in responding to communicable disease threats from abroad. This report examines:

1. The extent to which selected U.S. airports and airlines have preparedness plans to respond to communicable disease threats from abroad, and the extent to which a national aviation-preparedness plan guides preparedness.

2. The challenges that U.S. airports and airlines, including contractors, have faced when responding to threats and the actions they have taken to help address those challenges.

Characteristics of communicable disease threats from abroad: We considered the following characteristics as applicable to the scope of our review: a communicable disease of public significance (e.g., non-routine diseases, including Ebola, SARS, and MERS); an international arriving traveler; a suspected ill traveler identified onboard an arriving aircraft; or a suspected ill traveler identified in an airport after deplaning.
We considered the following characteristics as not applicable to the scope of our review: bioterrorism (i.e., a traveler using communicable disease as a weapon); a traveler who is ill with seasonal flu or another routine disease that is not of public significance; solely domestic travelers; the threat of communicable disease spread by cargo or animals; continuity of operations; and known medical transports (where the ill person is identified prior to departing the host country). Selected airports and airlines: We selected for review 14 airports—which accounted for about 53 percent of total international arriving passengers in 2014—that met one or more of the following criteria (see table 1): have enhanced passenger entry-screening procedures in place for international passengers arriving from the three current or past Ebola-affected countries in West Africa; received the first or second largest number of international passengers from each of five world regions in 2014; are large hub airports with a Centers for Disease Control and Prevention (CDC) quarantine station on site at the time of our review; are large hub airports without a CDC quarantine station on site but still receiving a larger number of international passengers relative to other large hubs without a CDC quarantine station on site; experienced a confirmed Ebola case; have a station manager from one of the three U.S. airlines in our review; or are located within proximity to a GAO office. We selected for review the three U.S. airlines that handle the largest quantity of international passengers—American Airlines, Delta Air Lines, and United Airlines. Departments and components: Our review involved five federal departments—the Departments of Transportation (DOT), Health and Human Services (HHS), Homeland Security (DHS), State, and Labor. We selected these departments because they represent the key federal departments with responsibilities for preparing for and responding to communicable disease threats from abroad. Within these five departments, we collected and reviewed available documentation and interviewed officials from various components that play a key role at their respective departments for these matters, principally DOT's Federal Aviation Administration (FAA), HHS's CDC, DHS's U.S. Customs and Border Protection (CBP), and Labor's Occupational Safety and Health Administration (OSHA). To examine the extent to which airports and airlines have plans in place to respond to communicable disease threats from abroad, we developed and administered a questionnaire to the airport operators of the 14 selected airports on general preparedness at their airports. The questionnaire included questions about communication with local stakeholders about communicable diseases, guidance used to develop any plans for communicable disease response, and plans or procedures that the airport had in place for a variety of situations or stakeholders, such as establishing the parking location for an aircraft and training for airport employees. We then conducted follow-on interviews with the 14 airport operators and relevant local stakeholders—who generally included first responders, local public health officials, CBP officials, and CDC officials, as applicable—about their preparedness. We also collected from the 14 selected airports and 3 selected airlines relevant and available preparedness plans for communicable disease threats.
We identified and reviewed applicable federal requirements and international obligations, including the International Civil Aviation Organization's (ICAO) Standards and Recommended Practices, and guidance for U.S. airports and airlines with international air traffic. We identified high-level components that were common across applicable federal and international guidance, obligations, and requirements, as well as corroborating information collected from the aviation stakeholders with whom we spoke. We then developed a list of high-level components for airports' and airlines' communicable-disease preparedness plans to provide a basis for assessing the breadth of the plans. We compared these high-level components against the available plans collected from the 14 airports and three airlines as a method to assess the breadth of the plans. We then reviewed the structure and contents of these plans, but did not evaluate the plans for sufficiency or level of preparedness. We reviewed available documents from the five selected federal departments and their relevant components and interviewed officials from these departments. We also interviewed representatives from federal and international airport, airline, and flight-attendant industry associations, and from ICAO, about preparedness plans generally and potential opportunities to improve preparedness. To examine the challenges that U.S. airports and airlines, including contractors, have faced when responding to communicable disease threats, including Ebola, and the actions they have taken to help address those challenges, we first identified challenges through interviews with selected airports and airlines as discussed above, as well as through interviews with representatives from the labor union representing airport- and airline-service employees and with the airport and airline contract employers of service employees. We consulted with representatives from the union that represents these employees to identify the nine aviation-service employees with whom we spoke, and we conducted interviews with two of the four firms that these nine employees worked for, as well as three of the four airports they worked at. We also identified challenges in responding to communicable disease threats and actions taken by stakeholders during our attendance at a Global Symposium—convened by ICAO in collaboration with the World Health Organization (WHO)—of the Collaborative Arrangement for the Prevention and Management of Public Health Events in Civil Aviation (CAPSCA) program, a global, collaborative arrangement that works to bring together international, regional, national, and local organizations to develop a coordinated approach to preparedness and response. We also collected and reviewed available after-action reports that airports used to assess their responses to simulated communicable disease incidents. In addition, to corroborate comments we heard from airline-service employees (e.g., aircraft cabin cleaners and wheelchair attendants) and their union representatives, we reviewed summaries of inspections and violations related to OSHA's blood-borne pathogens standard that were initiated by employees with the support of their union. The challenges faced by U.S.
airports, airlines, and contracted aviation-services firms and the actions taken to address these challenges that we describe in this report represent information provided to us during interviews and site visits, but may not capture all of the challenges and actions taken by the airports, airlines, and aviation-services firms we spoke with. We conducted this performance audit from November 2014 to December 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. A few technologies have been implemented, or are being developed, to identify or mitigate potential outbreaks of communicable diseases through air travel. These include screening passengers to detect travelers who may have an infectious disease (ill travelers), using temperature screening to identify ill travelers, and using data analysis to identify disease outbreaks and potential traveler movement patterns. U.S. Customs and Border Protection (CBP) personnel routinely observe travelers during their primary inspection and refer those who show symptoms of a communicable disease—or those recently traveling from an area of concern—for further assessment by Centers for Disease Control and Prevention (CDC) staff or other health authorities. Additional screening, such as was carried out during the Ebola threat, may include more targeted assessments during enhanced screening. According to CBP, three main screening methods were used during the Ebola outbreak to identify arriving passengers who might have communicable diseases: (1) collecting advance passenger information, (2) visual inspection or taking of temperatures, and (3) questioning travelers. All of these methods are used for screening airplane passengers arriving in the United States, but only temperature measurement involves on-site health technology. Temperature checks may be conducted with contact or noncontact thermometers, but outside of the current Ebola response, this check is not common and is typically done only as part of the assessment of a suspected ill traveler reported to CDC, according to CDC officials. Entry screening for Ebola at enhanced-screening airports in the United States includes using non-contact infrared thermometers, under the enhanced-screening protocols put in place to address the disease threat. Non-contact, thermometer-based temperature measurement is conceptually simple, but an agency official suggested it has both low sensitivity and low specificity for detecting passengers with infectious disease. In other words, such temperature measurement alone has a low chance of correctly identifying ill travelers and a low chance of correctly excluding healthy travelers. In the case of enhanced screening for Ebola, CDC officials or CBP contractors use commercially available thermometers following primary inspection by CBP personnel. To date, no mass screening of airplane passengers—where every passenger's temperature is taken—has been conducted at a U.S. airport. During the recent Ebola outbreak, for example, only passengers with recent travel to, from, or through outbreak countries, such as Sierra Leone, were identified for temperature screening.
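The practical consequence of low sensitivity and specificity is easiest to see with Bayes' rule: when very few screened travelers are actually ill, even a seemingly decent test flags mostly healthy people. The sketch below works through that arithmetic; the sensitivity, specificity, and prevalence values are illustrative assumptions, not measured performance figures for fever screening.

```python
# Illustrative Bayes'-rule arithmetic for screening; all inputs are assumed values.
def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Probability that a traveler flagged by screening is actually ill."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assume 1 genuinely ill traveler per 100,000 screened (hypothetical prevalence).
ppv = positive_predictive_value(sensitivity=0.70, specificity=0.95, prevalence=1e-5)
print(f"Chance a flagged traveler is actually ill: {ppv:.4%}")  # ~0.014%
```

Under these assumed numbers, roughly 1 in 7,000 flagged travelers would actually be ill, which illustrates why temperature screening alone performs poorly as a detection tool.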
Internationally, both non-contact infrared thermometers and thermal scanners have been used for entry and exit passenger screening for communicable diseases, including Ebola and severe acute respiratory syndrome (SARS). For example, in 2009, during the H1N1 influenza pandemic, many international airports—but not U.S. airports—implemented temperature-screening procedures. However, the literature reports questionable effectiveness for temperature screening, stemming in part from the aforementioned low sensitivity. Some of the performance issues result from normal variability in temperature measurements exceeding the threshold for indicating fever: measurements can vary by up to 3 degrees Celsius under some circumstances, such as after smoking, whereas a fever can be indicated by an elevation of just 1 degree Celsius for Ebola, for example. Temperature variability results from several factors, including metabolism, medication, environment, and conditions, such as certain cancers, that are not quarantinable diseases of concern for airport screening purposes. Further, passengers who are in the incubation period of an illness may not exhibit fevers, given that the incubation periods of several infectious diseases typically last longer than most flights. According to scientific literature, camera-based thermograms have been used internationally. For example, camera-based temperature measurement, followed by ear-based temperature measurement, has been tentatively shown to be effective for monitoring dengue fever in Taiwan. Dengue fever is not a U.S. quarantinable disease, however, and another study indicated uncertainty that temperature screening is effective for mitigating community transmission of this disease. Generally, thermal cameras are more expensive than thermometers and are no more precise. A possible reason for deploying thermal cameras is the eventual capacity to screen large numbers of travelers rapidly, but the benefits of this approach have not been established. CDC has developed a "big data" approach for identifying and tracking communicable disease outbreaks through data collection and analysis. Launched in 2011 by CDC's Division of Global Migration and Quarantine, BioMosaic is a data analytics tool that works with collections of data, including news sources, historical travel information, and public databases, to map the health and demographics of foreign-born populations within the United States (e.g., diaspora populations), as well as disease outbreaks internationally. Information provided by BioMosaic can be used to help determine the risk of international spread of disease and to target potential CDC interventions by identifying potential threats, although it cannot be used to identify specific ill individuals. For example, in 2014, CDC reported the first confirmed cases of Middle East Respiratory Syndrome (MERS) infections in the United States and, by using BioMosaic, was able to identify the major points of entry into the country, as well as the volume of travelers entering from Saudi Arabia and the United Arab Emirates. CDC was able to identify five cities within the United States that accounted for 75 percent of arrivals from those two countries.
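The kind of travel-volume analysis described above can be illustrated with a short sketch. The following Python example uses synthetic arrival counts, not BioMosaic's actual data or interfaces, to find the smallest set of destination cities accounting for at least 75 percent of arrivals.

```python
# Illustrative sketch of the travel-volume analysis described above: given
# arrival counts by U.S. destination city, find the smallest set of cities
# accounting for at least 75 percent of arrivals. The counts are synthetic;
# this is not CDC's BioMosaic tool or its data.

from collections import Counter

arrivals = Counter({
    "New York": 41_000, "Washington, D.C.": 23_000, "Chicago": 17_000,
    "Los Angeles": 12_000, "Houston": 9_000, "Boston": 4_500,
    "Atlanta": 3_200, "Detroit": 2_100,
})

def top_cities(counts: Counter, coverage: float = 0.75) -> list[str]:
    """Smallest set of top cities whose arrivals reach the coverage share."""
    total = sum(counts.values())
    cities, covered = [], 0
    for city, n in counts.most_common():
        cities.append(city)
        covered += n
        if covered / total >= coverage:
            break
    return cities

print(top_cities(arrivals))  # the few cities to target for screening
```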
Several technologies are being evaluated for potential use in tracking or mitigating disease, including communicable diseases, and could potentially be used for air travel. These include (1) telemedicine, (2) air circulation control, (3) genetic sequencing of airplane lavatory waste, and (4) point-of-care diagnostic technology. These technologies are at various stages of development, and their effectiveness and cost considerations are not established. Airlines use a variety of approaches in responding to ill passengers during flight. United Airlines is currently exploring the use of telemedicine, whereby onboard equipment can provide a remotely located doctor with information—such as vital signs—needed for diagnosis and for determining whether a flight diversion is needed. Development of devices that alter air circulation may also mitigate the spread of communicable diseases. Currently, air in an aircraft is filtered by high-efficiency particulate air (HEPA) filters, but their effectiveness relies on the air's passing through the filters. If a pathogen circulates widely within an aircraft cabin before being filtered, there may be an increased chance of person-to-person transmission of the disease. Air-circulation-altering devices may provide more isolated air environments for each passenger, but these devices have not yet been developed to the point where they have been tested or validated. Recent research applied metagenomic examination to the contents of airplane lavatories, sequencing genetic material in passenger bio-waste to detect the relative abundance of select pathogens (not quarantinable infectious diseases, however). By isolating and sequencing this genetic material, the researchers were able to determine the types of antibiotic resistance carried by passengers' microbes. Researchers were also able to identify specific pathogens, as well as their relative abundance, based on the geographic origin of the samples. This method is potentially useful for global surveillance of communicable diseases, antibiotic resistances, and transmission routes. However, there are potential challenges to implementing this approach. For example, the researchers noted that applying this method to all flights on a weekly basis would be challenging, given the current state of technology. Additionally, developments in point-of-care technology—methods that can be used in doctors' offices, hospitals, or in the field (e.g., at an airport), instead of a laboratory—are increasing the speed of diagnosis as well as the variety of diseases that can be targeted. For example, companies have developed FDA-approved tests for human immunodeficiency virus (HIV) that do not require laboratory equipment and can provide results in as little as 20 minutes. Some FDA-approved tests for communicable diseases, such as influenza, have been developed, but studies have indicated that their sensitivity can vary. Future improvements may lead to feasible screening based on, for example, microfluidic devices that can identify multiple concomitant infections. In addition to the contact named above, the following individuals made important contributions to this report: Paul Aussendorf, Assistant Director; David Hooper; Hayden Huang; Molly Laster; David Lysy; Jacob McAuliffe; Josh Ormond; Sarah Resavy; Gretchen Snoey; Russell Voth; and Amelia Weathers. Emerging Infectious Diseases: Asian SARS Outbreak Challenged International and National Responses. GAO-04-564. Washington, D.C.: April 28, 2004. Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007. Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination.
GAO-08-36. Washington, D.C.: October 31, 2007. National Response Framework: FEMA Needs Policies and Procedures to Better Integrate Non-Federal Stakeholders in the Revision Process. GAO-08-768. Washington, D.C.: June 11, 2008. Public Health and Border Security: HHS and DHS Should Further Strengthen Their Ability to Respond to TB Incidents. GAO-09-58. Washington, D.C.: October 14, 2008. Influenza Pandemic: Sustaining Focus on the Nation's Planning and Preparedness Efforts. GAO-09-334. Washington, D.C.: February 26, 2009. Influenza Pandemic: Increased Agency Accountability Could Help Protect Federal Employees Serving the Public in the Event of a Pandemic. GAO-09-404. Washington, D.C.: June 12, 2009. Influenza Pandemic: Monitoring and Assessing the Status of the National Pandemic Implementation Plan Needs Improvement. GAO-10-73. Washington, D.C.: November 24, 2009. Disaster Response: Criteria for Developing and Validating Effective Response Plans. GAO-10-969T. Washington, D.C.: September 22, 2010. FEMA Has Made Limited Progress in Efforts to Develop and Implement a System to Assess National Preparedness Capabilities. GAO-11-51R. Washington, D.C.: October 29, 2010. National Preparedness: DHS and HHS Can Further Strengthen Coordination for Chemical, Biological, Radiological, and Nuclear Risk Assessments. GAO-11-606. Washington, D.C.: June 21, 2011. Influenza Pandemic: Lessons from the H1N1 Pandemic Should Be Incorporated into Future Planning. GAO-11-632. Washington, D.C.: June 27, 2011. Influenza Pandemic: Agencies Report Progress in Plans to Protect Federal Workers but Oversight Could Be Improved. GAO-12-748. Washington, D.C.: July 25, 2012. Emergency Preparedness: Opportunities Exist to Strengthen Interagency Assessments and Accountability for Closing Capability Gaps. GAO-15-20. Washington, D.C.: December 4, 2014.
Past communicable disease outbreaks, such as the recent Ebola epidemic, have resulted in many deaths and highlight the potential economic costs of disruptions to air travel and to the U.S. and global economies. GAO was asked to review the preparedness of the U.S. aviation system to respond to communicable diseases. This report examines (1) the extent to which selected U.S. airports and airlines have plans for responding to communicable disease threats from abroad and to which a national aviation-preparedness plan guides preparedness, and (2) the challenges that U.S. airports and airlines have faced when responding to threats and any actions taken to address them. GAO reviewed available documents and interviewed representatives from 14 U.S. international airports—selected to reflect a range of activities and facilities—and the 3 major U.S. airlines. GAO also reviewed applicable federal requirements and international obligations and guidance for U.S. airports and airlines, and interviewed officials and reviewed documents from federal agencies and aviation stakeholder groups. All of the 14 airports and 3 airlines GAO reviewed have plans for responding to communicable disease threats from abroad, although the United States lacks a comprehensive national aviation-preparedness plan aimed at preventing and containing the spread of diseases through air travel. U.S. airports and airlines are not required to have individual preparedness plans, and no federal agency tracks which airports and airlines have them. Consequently, the extent to which all U.S. airports and airlines have such plans is not clear. The plans GAO reviewed generally addressed the high-level components that GAO identified as common among applicable federal and international guidance, such as establishment of an incident command center and activation triggers for a response. GAO identified these components to provide a basis for assessing the breadth of the plans. The plans GAO reviewed for each airport were developed by, or in collaboration with, relevant airport stakeholders, such as the Centers for Disease Control and Prevention's (CDC) airport staff. Under Annex 9 of the Chicago Convention, an international aviation treaty to which the United States is a signatory, member states are obligated to develop a national aviation-preparedness plan for communicable disease outbreaks. Department of Transportation (DOT) and CDC officials contend that some elements of such a plan already exist, including plans at individual airports. However, FAA has reported that individual airport plans are often intended to handle one or two flights of arriving passengers, rather than an epidemic, which may require involvement from multiple airports on a national level. Most importantly, a national aviation-preparedness plan would provide airports and airlines with an adaptable and scalable framework with which to align their individual plans, helping to ensure that individual airport and airline plans work in accordance with one another. DOT and CDC officials agree that a national plan could add value. Such a plan would provide a mechanism for the public-health and aviation sectors to coordinate to more effectively prevent and control a communicable disease threat while minimizing unnecessary disruptions to the national aviation system. Aviation stakeholders GAO spoke with identified multiple challenges in responding to communicable disease threats and actions they took or would take in response.
For example, airline and airport representatives told GAO they sometimes experienced difficulties sharing timely and accurate information about threats, and some reported that they improved communication by developing tools, such as standardized forms, to collect and share relevant information. Employees at aviation-services firms that GAO spoke with—including contract workers who clean aircraft—raised concerns about the availability of training and access to equipment to control exposure to communicable diseases. Some airports GAO reviewed developed additional mechanisms to ensure adequate training and preparation during the Ebola threat. A national aviation-preparedness plan could serve as the basis both for testing communication mechanisms among responders, to ensure those mechanisms are effective before a communicable disease outbreak occurs, and for ensuring that airport and airline staff receive appropriate training and equipment to reduce their risk of exposure to communicable diseases during an outbreak. GAO recommends that DOT work with relevant stakeholders, such as the Department of Health and Human Services, to develop a national aviation-preparedness plan for communicable diseases. DOT agrees that a plan is needed but suggests that public health agencies lead the effort. GAO continues to believe the recommendation is correctly directed to DOT, as discussed in this report.
FDLP loan consolidation begins with a borrower sending EDS an application for a consolidation loan. The borrower lists each loan he or she wants to consolidate and the party holding or servicing the loan—the FDLP servicing center for FDLP loans and private lenders for FFELP loans. For FDLP loans, EDS obtains balance information from the servicing center. For FFELP loans, EDS sends a verification certificate to each lender to verify each loan and the amount owed. Lenders complete the verification information and return the certificates. Upon receiving and validating all loan verification information, EDS sends a promissory note to the borrower for signature. After the borrower signs and returns the note, EDS pays off each lender for the underlying FFELP loans and records the consolidation loan for servicing purposes. According to Education officials, EDS sends new loan transactions to the central FDLP database, managed by Computer Data Systems, Incorporated/AFSA Data Corporation (CDSI/AFSA), the Education contractor that services all direct loans. Information from the central database is then sent to the FDLP servicing system, also managed by CDSI/AFSA, for loan servicing and collection. FDLP consolidations were first made available in March 1995, when CDSI/AFSA operated the consolidation program along with its other direct loan servicing responsibilities. Education subsequently awarded a contract to EDS to take over FDLP loan origination operations, including consolidation processing. EDS began operating the consolidation program and processing FDLP consolidation loans in September 1996. EDS' responsibilities included obtaining verification certificate information, generating promissory notes, ensuring that the promissory notes were returned, and making payments to lenders. But beginning shortly after September 1996, a backlog of unprocessed consolidation loan applications developed and grew steadily. In August 1997, with the backlog having reached about 84,000 unprocessed applications, more than half of all applications that EDS had received, Education closed the FDLP consolidation program to new applications until December 1, 1997. EDS and Education took steps during the shutdown to resolve the backlog of applications. By mid-January 1998, about 3,800 applications from the original backlog remained unresolved, and according to Education officials, only 15 remained unresolved in late March 1998. Lenders' representatives said that the primary problems they had with FDLP consolidations were (1) loan verification certificates EDS sent them that contained errors and (2) inaccurate payments EDS sent to pay off loans. EDS acknowledged the systemic nature of these problems and generally attributed them to inaccurate data or inefficiencies in its processes. For example, because EDS staff relied on inaccurate data sources for loan information, EDS sometimes sent verification certificates to lenders with the wrong information. In addition, glitches in EDS' editing processes resulted in duplicate certificates being sent to lenders after original certificates had been completed and returned to EDS. With regard to payments, certain EDS errors, such as data entry mistakes or problems with multiple certificates, resulted in payments to lenders for loan amounts that were much too high and, at times, that double-paid a borrower's loans.
Similar errors also caused payments to lenders that were too low, leaving a borrower with a remaining balance with the lender when the borrower's account should have been closed. While lenders focused on verification and payment problems, during the course of our work we discovered an additional system flaw: certain differences between EDS' and the FDLP servicing center's systems, such as differing edit checks, meant that some corrections to borrowers' accounts were not recorded in the FDLP servicing system in a timely manner. Borrowers were thus left with incorrect loan balance information until the corrections were posted, sometimes for many months. The process EDS used to verify the loan amounts that borrowers wanted to consolidate was prone to error. It was designed so that lenders would verify information that EDS had in its system to determine the balances that would be paid to lenders upon the consolidation of the loans. Because the process relied on faulty data sources and did not contain effective controls, lenders sometimes received a certificate with one of three problems: it contained incorrect information, it was sent to the wrong address, or it was sent after a certificate had already been sent and returned. First, lenders sometimes received certificates containing incorrect information. EDS generally sent certificates to lenders that contained a lender's name and address; the borrower's name, address, and social security number (SSN); and the type of loans to be consolidated so that lenders could identify the loans to be certified. However, lenders' representatives told us that they received certificates containing various types of mistakes, such as a wrong name or address for a lender or names or SSNs of borrowers whose loans the lender did not own. When a certificate did not match their records, lenders sometimes had to research borrowers' accounts to determine whether the certificate was for the wrong loan type (such as a subsidized loan inaccurately identified as an unsubsidized loan) or for a borrower whose loan was with a different lender. Second, lenders said verification certificates were sometimes sent to a wrong address. For example, one lender with several servicing centers around the country received certificates at one center for borrowers' loans serviced by a different center. Another lender received certificates addressed to its corporate headquarters, to which borrowers' correspondence—servicing information or payments—is not normally addressed. EDS officials said both these problems were in part the result of its system's reliance on faulty data sources to obtain loan and lender information. EDS relied heavily on information a borrower provided on his or her application regarding lender name and address and loan type, and EDS staff did not attempt to verify this information before contacting the lender. However, borrowers did not always provide complete loan information on their applications or may have provided wrong information, such as the wrong lender's name. In addition, EDS staff relied on a computerized file of FFELP lender names and addresses, compiled and provided by Education, matching the lender names and addresses a borrower provided on his or her application to those in the file. EDS did not attempt to verify the accuracy of this information before sending a certificate to the lender.
However, some lenders were listed with several addresses or with a wrong address, and some had names similar to those of guaranty agencies, which were also in the file but whose names were not well distinguished. The third problem lenders mentioned was that they sometimes received more than one certificate for a particular borrower. EDS officials acknowledged that its system would sometimes send multiple copies of the same verification certificate to a lender, even if the lender had already provided the requested information to EDS. The officials said this occurred in part because of a glitch in one of EDS' edit processes. As lenders returned completed verification certificates, EDS scanned them into a computer imaging system and, if certificates passed an edit check, generally sent them for entry into the data system. However, if a certificate being scanned had incorrect or missing data, it was set aside for manual editing. After a certain period of time elapsed without data from a certificate being entered into the data system, the system automatically generated a new certificate to be sent to the lender. Furthermore, when borrowers or lenders called EDS to inquire about the status of a loan's verification certificate, EDS customer service representatives, who had access to both the data and imaging systems, would sometimes check only the data system, not the imaging system. If they noted that data were missing in the data system, they would assume the verification certificate had not been returned, and EDS would then send the lender another certificate. EDS' failure to enter data from completed verification certificates also resulted in its sending letters to borrowers and lenders inaccurately stating that the lender had not returned a certificate. EDS' system automatically generated a standardized letter if no data were entered into a borrower's file 60 days after a certificate was sent to a lender. This letter, sent to the borrower with a copy sent to the lender, said that the consolidation was delayed because the lender had not provided requested information. Lenders said they believed they were being blamed for loan consolidations being delayed when, in reality, they had returned the verification certificate.
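The duplicate-certificate flaw can be sketched as a missing condition in the check that decides whether a certificate is outstanding. In the following Python sketch, the record structure and the 60-day window are illustrative assumptions; the report says only that regeneration occurred after "a certain period of time."

```python
# Sketch of the duplicate-certificate flaw described above. A new certificate
# was generated whenever no data had been keyed in after a waiting period,
# even if the completed certificate was sitting in the imaging queue awaiting
# manual editing. The record structure and the 60-day window are illustrative
# assumptions, not EDS' documented system.

from datetime import date, timedelta

WAITING_PERIOD = timedelta(days=60)  # assumed length of the waiting period

def should_regenerate(sent_on: date, data_entered: bool,
                      in_imaging_queue: bool, today: date) -> bool:
    """Reissue a certificate only if it is genuinely outstanding."""
    overdue = (today - sent_on) > WAITING_PERIOD
    # The flaw: the system effectively tested only whether data had been
    # entered. Also checking the imaging queue avoids sending a duplicate
    # certificate for a return that is merely awaiting manual editing.
    return overdue and not data_entered and not in_imaging_queue

# A certificate the lender returned but that was held for manual editing:
print(should_regenerate(sent_on=date(1997, 1, 15), data_entered=False,
                        in_imaging_queue=True, today=date(1997, 4, 1)))
# False -- no duplicate certificate and no erroneous letter telling the
# borrower that the lender has not responded.
```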
For example, one lender’s official said that, after completing two certificates, the lender received two checks for a borrower to pay off her loan. The two checks were issued on the same day, but they were for slightly different amounts—$58,354.46 and $58,349.02. The lender should have received only one of the checks. Another lender was asked twice to return a certificate to EDS for a borrower with two loans, one for about $3,700 and the other for about $2,000. When EDS sent promissory notes to the borrower, one note included about $8,000—counting the $2,000 loan twice and the $3,700 loan once—and a second note included the $3,700 loan again. EDS double-paid the borrower’s account, sending two payment checks for each loan, totaling about $12,000, or about $6,000 in overpayment. Data entry errors that were not detected by EDS’ systems also led to overpayments. In one example, EDS entered a $16,715.09 loan into the data system as $167,115.09. EDS did not discover or correct this error before sending the lender a check, causing an overpayment of more than $100,000. In another example, a lender certified a loan as $10,953.91, but EDS erroneously entered it as $19,953.91. EDS overpaid the lender by about $10,000. Other EDS processing errors went undetected by its systems and contributed to overpayments. In one example, a borrower wanted to consolidate three subsidized Stafford loans totaling $17,000. The verification certificate the lender returned to EDS showed the borrower’s graduation date of May 1997. Because they were subsidized loans, the lender filled in “zero” for interest due on each loan, with a note saying “info good thru 11/30/97,” the end of the borrower’s grace period. However, EDS’ system did not recognize that the loan was not subject to interest accrual for the 6-month grace period. EDS erroneously added interest to the payments, which it made in October 1997, resulting in overpayments. Education and EDS representatives said that accrued interest should not have been added to the loan payment. In all overpayment cases we analyzed, borrowers signed promissory notes for amounts that exceeded what they owed, which means that borrowers might have been liable for repaying the inaccurate amount on the promissory notes. EDS representatives said that, as with lender information derived from borrowers’ applications, its processes and systems rely on borrowers’ knowledge of their loan amounts to prevent overpayments. They said they now realize that borrowers often believe that promissory notes they receive must be correct, perhaps believing—if they received multiple notes—that the first one they returned needed to be amended. In the cases we analyzed, Education and EDS systems did not identify the overpayment—they were detected only when the lender contacted EDS, while trying to reconcile a borrower’s account, or when we brought it to EDS’ attention. Underpayments to lenders were also a problem. Most of the underpayments that we analyzed resulted from data that lenders provided to EDS not being entered into EDS’ data system, while others resulted from a control EDS put in place to try to reduce duplicate payments or other system problems: In several examples, EDS did not enter data into its system for one of a borrower’s loans when a lender certified a number of loans. EDS did not pay the lender for the omitted loan, so the borrower’s account with the lender was not closed out because a loan remained unpaid. 
Underpayments to lenders were also a problem. Most of the underpayments that we analyzed resulted from data that lenders provided to EDS not being entered into EDS' data system, while others resulted from a control EDS put in place to try to reduce duplicate payments or from other system problems. In several examples, EDS did not enter data into its system for one of a borrower's loans when a lender certified a number of loans. EDS did not pay the lender for the omitted loan, so the borrower's account with the lender was not closed out because a loan remained unpaid. In some cases, these underpayments resulted from EDS sending lenders inaccurate or incomplete verification certificates—for example, the certificate failed to list all of a borrower's loans or loan types. Some underpayments resulted from an overly sensitive edit check that EDS put into place to reduce the likelihood of duplicate payments. The edit would not allow a borrower to have two loans in the data system with the same "first disbursement date." EDS mistakenly assumed that if the borrower had two loans disbursed on the same date, they were actually the same loan and one of the loans had been incorrectly entered into its system. However, this assumption is faulty, because a borrower can have two different loans disbursed on the same date. For example, a student might receive a subsidized and an unsubsidized Stafford loan on the same date, such as the start of a school year. We identified several instances in which this edit led to EDS' system underpaying lenders because at least one loan a borrower applied to consolidate was not paid off.
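The effect of this edit check is easy to demonstrate: keying loan records by first disbursement date alone collapses distinct loans that share a date. The following Python sketch uses illustrative loan types and amounts; the safer keying shown is one possible fix, an assumption rather than a description of EDS' eventual remedy.

```python
# Sketch of the flawed edit check described above: keying loan records by
# "first disbursement date" alone collapses distinct loans that share a
# date. Loan types and amounts are illustrative assumptions.

loans_certified = [
    {"type": "subsidized Stafford", "first_disbursed": "1995-09-01",
     "amount": 2625.00},
    {"type": "unsubsidized Stafford", "first_disbursed": "1995-09-01",
     "amount": 4000.00},
]

# Flawed approach: one record per disbursement date, so the second loan
# silently overwrites the first and only one loan would be paid off.
by_date = {}
for loan in loans_certified:
    by_date[loan["first_disbursed"]] = loan
print(len(by_date), "loan record(s) retained")  # 1 -- an underpayment

# Safer approach: key on enough fields to distinguish genuinely duplicate
# entries from different loans disbursed on the same day.
by_identity = {}
for loan in loans_certified:
    key = (loan["type"], loan["first_disbursed"], loan["amount"])
    by_identity[key] = loan
print(len(by_identity), "loan record(s) retained")  # 2 -- both paid off
```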
Another way EDS' system caused underpayments was through the misidentification of loans in default at the time of consolidation. FDLP allows defaulted loans to be consolidated under certain circumstances, and the costs previously incurred to collect the defaulted loan are to be added to the borrower's amount to be consolidated. Certain data fields in EDS' data system should have helped ensure that EDS could identify such loans, but EDS' records did not always contain consistent information in these fields. For one of the examples we analyzed, the data system showed that a borrower's loan was not in default but also showed that the borrower had been assigned collection costs, a system inconsistency. In this case, the loan actually was in default, but the system did not identify the inconsistency in the data. Because EDS' data system relied only on the information in the "loan default" field, it did not include the collection costs. Had the system also checked the field showing whether any collection costs were due, it would have detected that the account had collection costs, the consolidation loan would have included them, and the guaranty agency would have been reimbursed for them. Perhaps the most serious examples of incorrect payment problems were those in which two borrowers' accounts were not kept separate. In these instances, during EDS' process of entering loan data into one borrower's account, EDS staff erroneously entered loan data pertaining to another borrower. The first borrower's account then included a loan with the first borrower's SSN in some places and a second borrower's SSN in others. The first borrower's account reflected the charges for these loans, in addition to his or her own. For example, one borrower tried to consolidate loans totaling less than $100,000 but eventually accumulated a $190,000 balance in the data system because, among other errors, his account was charged for a second borrower's loans. EDS overpaid the lender by more than $90,000 on this borrower's account, and about $47,000 of this excess was for loans that belonged to the second borrower. When the lender received the payment checks, it saw the second borrower's SSN on some of the checks, saw that the second borrower's account had already been paid in full, and returned the checks to EDS. In another example of this type, EDS sent three checks to a lender for a borrower with two loans; the third check covered a loan whose nine-digit account number was actually the SSN of a different borrower. In all, we found four instances of this type of mistake. Education's oversight of the data transfer process between EDS and the FDLP servicing center failed to ensure that adjustments to borrowers' accounts were credited in a timely manner. EDS' system sent loan consolidation transactions, including new loans and subsequent adjustments, to the central FDLP database for entry into the FDLP servicing system. Such adjustments included credits for refunds made by lenders on behalf of borrowers. According to Education officials, consolidation data were not always smoothly transmitted between EDS' system, the central database, and the FDLP servicing system—some transactions were rejected when being moved from one system to the next, and these transactions were sent to a "suspense" file. This caused an accumulation of loan accounts showing incorrect balances until the adjustments could be properly posted. In some of the examples we reviewed, the adjustments had yet to be made. In particular, we found that some overpayments that lenders returned to EDS were not credited to borrowers' servicing accounts. One borrower, discussed earlier, signed promissory notes totaling about $190,000, although he actually owed only about $90,000. One of his lenders received overpayments totaling over $90,000 and sent refunds of this amount to EDS in May and September 1997. As of February 1998, however, the FDLP servicing system continued to show that the borrower owed $190,000. Education officials, in attempting to explain why the borrower's account was not properly updated, said that when the borrower's account is eventually corrected, an adjustment would be made retroactive to the date of the overpayment, so that the borrower would not be liable for any interest that accumulated since then. While this was the largest erroneous dollar amount we found, it was not an isolated incident. For the borrower described earlier whose $58,000 loan EDS had paid twice, FDLP servicing system records continued to show the additional $58,000 as part of her loan balance in February 1998, although EDS had received the lender's refund in May 1997. In all, we found 11 examples of borrowers whose refunds had not been properly credited when we completed our audit work. (For details on these and other examples we analyzed, see app. II.) The lenders we spoke with agreed that FDLP loan consolidation problems have created difficulties for them. These difficulties included reduced productivity, the need to redeploy or hire staff, and damaged relationships with borrowers. However, while lenders provided several examples of additional costs these difficulties brought, they generally could not assign a dollar value to them. Lenders' representatives expressed concern that one EDS requirement—that verification certificates must be filled in manually rather than being electronically generated in the lender's own format—has reduced their staffs' productivity. For example, representatives from one lender said that its staff could electronically complete about 116 borrower verification certificates per hour but could manually complete only 12 certificates per hour. Representatives from another lender indicated that electronically generating loan information for certificates received from EDS takes only 2 hours, but manually copying the loan information onto the certificates can take a staff person an additional half day to complete.
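A rough calculation illustrates the scale of the productivity gap in the first lender's figures. In the Python sketch below, the completion rates are those the lender reported; the batch size is a hypothetical assumption.

```python
# Illustrative arithmetic only: the completion rates below are those one
# lender reported; the batch size is a hypothetical assumption.

ELECTRONIC_RATE = 116  # certificates per staff hour, electronic generation
MANUAL_RATE = 12       # certificates per staff hour, manual completion

def staff_hours(certificates: int, rate_per_hour: int) -> float:
    """Staff hours needed to complete a given number of certificates."""
    return certificates / rate_per_hour

batch = 1_000  # hypothetical volume of verification certificates
print(f"Electronic: {staff_hours(batch, ELECTRONIC_RATE):.1f} staff hours")
print(f"Manual:     {staff_hours(batch, MANUAL_RATE):.1f} staff hours")
# About 8.6 versus 83.3 staff hours -- nearly a tenfold difference, which is
# consistent with lenders' concerns about the manual-completion requirement.
```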
Lenders also said that they have had to redeploy or hire personnel to handle problems that have resulted from the FDLP loan consolidation process. One lender's representative said that the lender has shifted staff to deal with inaccurate loan payment problems and that other areas in his unit have been left understaffed. The lack of staff in these other areas resulted in delays in posting payment checks and, therefore, delays in updating borrowers' accounts to show that their loans had been paid off. Another lender's representative said that the time needed to handle duplicate verification certificates, return overpayments, or make additional payment requests has resulted in extra work for the staff, although the amount is hard to quantify. The effect that delayed consolidations had on lenders' relationships with their borrowers was of particular concern to lenders. One lender's representative expressed concern that borrowers' mistrust of lenders increased because some borrowers' loans were not being consolidated promptly. EDS sends a letter to each borrower after sending payments to lenders, telling the borrower that his or her new consolidated loan account is active and that all underlying loans have been paid off. However, according to the representative, if a lender receives inaccurate payments for any of the borrower's loans, it can take time to resolve the difference with EDS. The representative expressed concern that borrowers assume that lenders are holding up their loan consolidation, thereby increasing distrust of the lenders. Another lender's representative said that her customer service staff has received calls from borrowers asking why they are receiving late payment notices from the lender after they have been notified that their loans had been consolidated. All four lenders we talked with agreed that their problems with the FDLP loan consolidation process had affected their operations. However, none of the lenders' representatives were able to assign a dollar cost to their experiences. Since the shutdown of the FDLP loan consolidation program between August 27 and December 1, 1997, both Education and EDS have taken steps to improve the process and reduce the problems that contributed to the buildup of the backlog. For Education, these steps include a more coordinated internal approach to overseeing the program, changes to the contract with EDS that emphasize performance measures, and closer monitoring of the consolidation process and the transfer of data to the FDLP servicing system. EDS has taken new quality control steps in the consolidation process aimed at getting more accurate loan information in a timely manner. In addition, EDS has made changes to its automated system and has incorporated greater use of electronic data in some of its processes. EDS has begun evaluating these changes, but the final results are not yet available. According to Education officials, several changes since the shutdown of the loan consolidation program in August 1997 will lead to improved performance. First, Education has established a team focused on managing FDLP consolidations, made up of staff on full-time detail from four units within the Department responsible for different aspects of the consolidation process—contract management, systems management, program management, and financial management. Before establishing this team, Education had not designated a person or team to manage FDLP consolidations.
Instead, it used staff from a number of units to manage the program, but these staff had responsibilities in their own areas and were able to devote only part of their time to consolidations. Furthermore, little coordination existed internally among the various units. Education hopes this new team, which began meeting in mid-January 1998, will provide much more coordinated oversight within the Department. Among other tasks, the consolidation team is working directly with lenders to try to resolve consolidation problems. For example, according to Education officials, the team is working with certain lenders to create an electronic certification process. Second, EDS' contract with Education was amended to tie contract payments to EDS to performance under the contract. According to Education staff, under the original contract between Education and EDS, the terms surrounding consolidation responsibilities, systems, and processes lacked specificity. The modification signed on January 27, 1998, includes provisions for increased payments to EDS but at a level that depends on its timeliness in processing consolidation applications. For example, EDS will be paid a per-unit price for each application EDS completes within a target number of days, and as an incentive to complete applications quickly, it will be paid a bonus for each day by which it completes a consolidation ahead of the target. In addition, the contract provides an additional incentive payment for each consecutive 3-month period in which EDS meets a set of performance criteria that the company is to develop. The modification also includes a financial penalty for performance shortcomings, such as not meeting performance measures in a consecutive 3-month period. Third, Education is monitoring the consolidation process more closely now than before the shutdown. Education officials said they meet with EDS staff three mornings a week to discuss problems. In addition, Education staff receive much more, and more detailed, information on performance statistics than EDS made available during the first year under the contract. For example, EDS sends Education staff daily summary statistics detailing how many applications are at each stage of the consolidation process. One summary report shows, for example, that on January 19, 1998, EDS had received more than 13,000 applications since reopening the consolidation process on December 1, 1997. About 1,430 applications had been deactivated or rejected or were waiting to be processed. Of the remaining applications, about 7,700 were awaiting lenders' return of verification certificates, and another 1,260 were awaiting promissory notes returned from borrowers or review by EDS staff. About 2,750 applications, or about 20 percent of all applications received since the December 1 reopening, had been completed. By March 30, 1998, about 17,000 of 41,000 applications received, or 41 percent, had been completely processed, according to Education officials. Finally, Education is working to ensure that transactions flow smoothly between the EDS and CDSI/AFSA electronic systems. Education officials said that EDS and CDSI/AFSA have been working since October 1997 to reduce a large number of transactions that had not been successfully transferred from EDS to CDSI/AFSA. Education officials also said that changes are under way that are intended to improve the transfer of such transactions in the future. For example, transactions reflecting adjustments to borrowers' accounts were previously not numbered in such a way that an adjustment could be traced back to the original transaction being adjusted. Now each adjustment is linked to its original transaction, making it easier for the servicing system to trace adjustments to borrowers' accounts.
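The fix can be sketched as an explicit reference field on each adjustment transaction. In the following Python sketch, the record layout, field names, and dollar amounts are illustrative assumptions, not the actual EDS/CDSI-AFSA interface; the refund amount reuses one of the duplicate $58,000 checks discussed earlier purely as an example.

```python
# Sketch of the adjustment-numbering fix described above: each adjustment
# carries a reference to the transaction it modifies, so the servicing
# system can trace it. The record layout and field names are illustrative
# assumptions, not the actual EDS/CDSI-AFSA interface.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    txn_id: str
    amount: float                  # negative for credits such as refunds
    adjusts: Optional[str] = None  # txn_id of the transaction adjusted

def post(txn: Transaction, ledger: dict) -> None:
    """Post a transaction; an adjustment must reference a posted original."""
    if txn.adjusts is not None and txn.adjusts not in ledger:
        raise ValueError("adjustment arrived before its original posted")
    ledger[txn.txn_id] = txn

ledger: dict = {}
post(Transaction("T-1001", 58349.02), ledger)                    # payment
post(Transaction("T-2047", -58349.02, adjusts="T-1001"), ledger) # refund,
# now traceable back to the original payment it corrects.
```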
EDS has also taken new steps in its process for consolidating loans since the August 1997 shutdown, including adding three new quality control teams, making system changes, experimenting with electronic data submission, and conducting an evaluation of the changes. These changes affect applications that have been received since the reopening of the consolidation program in December 1997. According to EDS and Education officials, EDS has expanded and redirected staff to provide quality control at three points in the consolidation process. At the front end of the consolidation process, EDS has put in place a team called the exam entry team, which will examine application information to make sure it is ready for data entry. According to EDS, the first place in which problems developed in the consolidation process was its use of incomplete or inaccurate loan information shown on the application, and obtaining complete and accurate applications from borrowers is critical to making consolidation work. Therefore, exam entry staff will work more closely with borrowers and use a variety of other information sources to ensure that information about a borrower's loans, such as the lender's name and address, is complete and accurate. Exam entry staff will match application materials with data keyed into EDS' data system, looking for keying errors and checking the accuracy of the loan holder's information. When information is missing or inaccurate, staff will attempt to be more aggressive than in the past by using available sources, including telephoning the borrower, to get complete and accurate loan information. Staff also make use of the National Student Loan Data System—an Education system that contains current student loan information for borrowers—to obtain information on a borrower's loan holders. Finally, EDS now requests that applicants include with their application a copy of a page from their payment book or a servicing notice for each loan they are consolidating. EDS has set up a second team to help reduce the inventory of outstanding verification certificates and to keep the inventory low. The certification team receives a daily report of verification certificates that are overdue. According to EDS staff, certification team members, who are organized geographically and assigned to specific lenders, work with lenders that have overdue verification certificates to get the certificates returned quickly. Certification team members also work with lenders who return incomplete certificates. As the final quality control step in the consolidation process, EDS has set up a third team, referred to as the promissory note underwriting team, to review all borrower application documentation before it pays lenders and issues borrowers a promissory note. According to EDS staff, this team was set up during the shutdown to provide a critical quality review before a borrower's loans are consolidated. Promissory note underwriting staff trace loan amounts shown on promissory notes back to the verification certificates, the original application, and any supporting documentation to ensure that the promissory note amount is correct.
Only after a borrower’s loan application has been reviewed and approved by the promissory note underwriting team can a promissory note be sent to the borrower and the consolidation loan made. Currently, the promissory note underwriting team reviews all applications before notes are made final and sent to borrowers. However, according to Education and EDS representatives, eventually the team will be reviewing a sample of each batch of applications that make it to this stage. EDS staff cautioned that while promissory note underwriting staff should reduce errors, some mistakes can still occur since much of the underwriting staff’s work is based on judgment. In addition to these three teams, EDS representatives said that they have made changes to its automated system that they believe will help reduce errors. Since the shutdown, 86 Education-requested changes to the system—called direct modification requests—have been implemented. The changes include measures aimed at preventing such things as the use of duplicate SSNs, incorrect calculation of loan collection costs on certain defaulted loans held by guaranty agencies, and data from duplicate loan applications being entered for a single borrower. EDS representatives also said they are working with selected lenders to allow the electronic submission of verification certificate information. In a pilot project currently under way, a lender submits loan information to EDS on computer diskettes. EDS said electronic submission of this information should save staff time for both lenders and EDS and will avoid the need to manually copy information onto the form. EDS said that this should also increase the accuracy of loan information. Finally, both Education and EDS officials said that EDS is monitoring the recently implemented changes to its consolidation process through an extensive review of the first applications processed through the new procedures. Since the reopening of the consolidation process in December 1997, EDS has been tracking the first 1,000 consolidation applications, and it hopes its evaluation of these applications will provide information on how well the new changes to the process are working. According to EDS officials, the review will follow each application from the point it is first received from a borrower through loan verification, generation of a promissory note, and, finally, loan payoff and transfer to the FDLP servicing system after the signed promissory note is returned. The review will track how much time applications are spending in each stage of the process. EDS officials also said that the review will use a sample drawn from the first 1,000 applications to determine whether payment accuracy has improved and whether any postdisbursement adjustments have been correctly recorded in the servicing system. Although the evaluation’s results were to be available in mid-January 1998, EDS had not fully completed the evaluation in March 1998, when we completed our audit work. We contacted each of the four lenders included in our study to get their initial reaction to Education’s and EDS’ changes. The representatives we spoke with offered mixed reactions to the changes and said that it is too early to tell whether they will lead to improved outcomes. One lender’s representative noted that, although she has noticed fewer duplicate certificates since the program reopened to new applicants on December 1, 1997, some certificates are still sent to the wrong addressee. 
Representatives from a lender that is experimenting with electronic transmission of verification certificate data are optimistic that this process will help resolve verification certificate problems, but they are still working out details with EDS. Representatives from all four lenders said that they continue to receive inaccurate payments. However, they said that they cannot determine whether these are for borrowers who were part of the backlog or new applicants since December 1, so they do not know whether the new process is leading to more accurate payments. EDS’ errors in processing FDLP consolidation applications led to a number of problems; lenders had to spend additional time resolving the problems, and borrowers’ applications were not always processed correctly. In addition, Education’s management and oversight of the FDLP consolidation program failed to ensure that borrowers’ applications were processed correctly, and it insufficiently managed the transfer of data between two contractors, EDS and CDSI/AFSA, to ensure that borrowers’ accounts reflected what they actually owed. The changes that Education and EDS are putting into place appear to move in the right direction to address some of the concerns that lenders raised, such as duplicate verification certificates and payment mistakes. However, most of the changes are recent and have not yet been evaluated, and improved outcomes are not yet ensured. Lenders’ representatives we spoke with generally believe it is too soon to determine whether they will see fewer problems now that EDS has resumed taking applications and made process changes. EDS’ current evaluation—consisting of 1,000 applications—will test many of the new processes, but we cannot judge whether payoff accuracy and the quality of information being transferred to the servicing system have improved. Because EDS continues to rely on lenders notifying it of inaccurate payments, it does not know whether payoffs are accurate until several weeks after it makes disbursements to lenders, to allow time for a refund or a claim for underpayment. According to EDS officials, the evaluation of the first 1,000 applications will include an analysis of payment accuracy, and we believe that no conclusion can be reached on systems improvements until this analysis is complete. In addition, EDS said that the review will test whether borrowers’ accounts with the FDLP servicing system are accurately adjusted for any refunds lenders make—another process that has not always been completed successfully. Finally, we are concerned that the new process changes do not address previous applications—from before the shutdown—that had errors during their processing, such as those in our sample that have not yet been corrected. However, for the examples we reviewed, if all transactions that were placed into suspense files can be correctly applied to borrowers’ accounts, most or all errors on the applications would be resolved. The Department of Education, in commenting on a draft of our report, stated that the report presents a fair analysis of the problems we discuss. Education emphasized that new processes, most of which we discuss, should resolve the types of problems lenders experienced during EDS’ first year operating the program. Education offered a clarification to our analysis of the problems involved with transactions that were not applied to borrowers’ accounts in the servicing system, and we revised the draft to reflect the clarification. 
In addition, Education provided several technical comments, which we incorporated as appropriate. Education's written comments are in appendix III. We are sending copies of this report to the Secretary of Education, the appropriate program manager for EDS, appropriate congressional committees, and others who are interested. If you or your staffs have any questions or wish to discuss this report further, please contact me or Jay Eglin, Assistant Director, at (202) 512-7014. Major contributors include Nancy Kintner-Meyer and James W. Spaulding. We interviewed officials from four judgmentally selected Federal Family Education Loan Program (FFELP) lenders. We selected the lenders to obtain variety in size, as measured by FFELP loan volume, and different perspectives on lenders' experiences with William D. Ford Federal Direct Loan Program (FDLP) consolidations. Two performed third-party servicing (under contract) for other lenders and also had other parties perform servicing for them, and the other two serviced all of their own loans and no loans for other lenders. One was affiliated with a guaranty agency. Although we tried to obtain a variety of examples, these lenders were not representative of all FFELP lenders. We visited each lender and interviewed officials who were familiar with FDLP consolidations. The officials described their problems with consolidation processing. They also provided us with documentation on specific examples of problems with the verification certificate process, overpayments, and underpayments. We selected between 8 and 13 examples from each lender, for a total of 40. We discussed with Electronic Data Systems (EDS) and the Department of Education the problems the lenders raised. We also visited EDS and obtained its documentation for each of the 40 examples we had selected. We reviewed some of these cases in detail with EDS to obtain its perspective on the problems the lenders raised. We discussed, and when possible obtained documentation on, changes both Education and EDS were making to the FDLP consolidation process. We then interviewed the lenders' officials again to obtain their impressions of whether these changes might lead to improvements in the process. Finally, we obtained servicing history information for some of the borrowers' accounts included in our examples. We noted whether refunds that lenders made to EDS on a borrower's behalf were properly credited to the borrower's account. We obtained these data from Education and Computer Data Systems, Incorporated/AFSA Data Corporation (CDSI/AFSA). The information we obtained is specific to the four lenders we selected and cannot be generalized to all FFELP lenders. We did not select a random sample of cases from these lenders; rather, the lenders nominated cases for our review on the basis of their perceptions of problems in the program. For this reason, we cannot make judgments regarding the overall frequency or extent of these problems in the program as a whole. This appendix contains more detailed analysis of some of the examples cited by lenders involving overpayment and underpayment problems. In addition, we present details on some examples of borrowers whose accounts with the FDLP servicing system were incorrect at the time of our review, and we include other comments lenders made about the FDLP consolidation process. EDS officials said that one cause of overpayments was that EDS mistakenly sent multiple verification certificates to lenders and that lenders returned them.
EDS would pay lenders on the basis of certificates that were returned. At times, after EDS received a certificate and paid off a borrower's loan account, it subsequently discovered another completed certificate it had sent to the lender and made a second payment to the lender. For example, one lender returned two certificates for a borrower, each verifying the same two loans, one certificate in March 1997 and one in April. EDS sent two checks, one for each loan, on April 10 and two more—which constituted double payments—on May 15. In addition to multiple verification certificates, EDS data entry errors, made while entering data from verification certificates into the data system, also led to overpayments. In one example, an EDS representative said that one of three loans a borrower wanted to consolidate, for $16,715.09, was entered in EDS' data system as a $167,115.09 loan. In addition, one of the two other loans, for about $41,000, was entered into the system and subsequently overwritten by an erroneous entry of only $5,000. EDS sent checks totaling $179,531.53 for the three loans, constituting a net overpayment of about $114,000, which the lender refunded to EDS. In another example, a lender listed a borrower's four loans on a verification certificate as totaling $29,565.97. The first of the four loans was for $10,953.91, but EDS data-entered it as $19,953.91 by mistake. In EDS' system, this error, combined with slight overpayments for the three other loans, made the total for the four loans almost $40,000, which EDS paid the lender, resulting in an overpayment of about $10,000. Verification certificates include a box showing the total amount of the loans being certified. In each of these cases, had EDS staff checked the entered amounts against that total, the data entry error on the individual loan would have been apparent. Other EDS processing errors contributed to overpayments. In one example, a borrower had three subsidized Stafford loans totaling $17,000. The verification certificate the lender returned to EDS showed the borrower's graduation date of May 1997. Because they were subsidized loans, the lender filled in "zero" for interest due on each loan, with a note saying "info good thru 11/30/97," the end of the borrower's grace period. Despite the lender's notation on the certificate, EDS added interest to the payment, which it made in October 1997. The interest covered 36 days, and the checks were sent out 32 days after the date on the certificate, so enough interest was added to cover 4 days of mailing time. Although small, the interest amounts constituted overpayments. In all overpayment cases we analyzed, borrowers signed promissory notes for amounts that exceeded what they owed, which means that borrowers might have been liable for repaying the inaccurate amounts on the promissory notes. In some of these cases, a borrower signed two very similar promissory notes, sometimes within several weeks of each other, and EDS paid lenders for the same loans twice. In other cases, the borrower signed only one promissory note that covered the same loans twice or contained other errors. In still other cases, borrowers signed as many as five promissory notes, each one partially covering their loans but totaling far more than what was owed. Underpayments generally resulted from lenders' data not being entered into EDS' data system.
Underpayments generally resulted from lenders’ data not being entered into EDS’ data system. Typically, a lender certified a number of loans for a borrower, and EDS entered data for all but one of the loans, or data that were entered for a loan were subsequently overwritten. EDS did not pay the omitted loan, and the borrower’s account with the lender could not be closed out because the system showed that a loan remained unpaid. For example, one borrower had 10 loans with a lender. Five of the loans were included in a promissory note EDS mailed to the borrower in March 1997, which the borrower signed and returned the following month. EDS sent the lender four checks on April 17 and sent a check for the fifth loan on October 7. EDS sent a second promissory note in October, which was signed and returned that month, and EDS sent the lender three additional checks on October 20. Finally, the last two loans were included on a third promissory note, and EDS sent payment for these on December 12.

Some underpayments resulted from an edit check EDS put into place to reduce the likelihood of a duplicate payment. The edit check would not allow a borrower to have two loans in the data system with the same “first disbursement date,” mistakenly assuming that two loans showing the same disbursement date were actually the same loan. As EDS staff entered loan data into the data system from a verification certificate, if one loan had the same disbursement date as another loan already entered, the system would overwrite the previously entered data rather than creating data for a new loan. This edit led to EDS underpaying several lenders because it did not pay off at least one loan that a borrower intended to consolidate. In one example, a borrower had five loans with a lender, but two had the same first disbursement date. During data entry, the second of these overwrote the first loan, which was for about $19,000, so the first was not included on the initial promissory note. The certificate was returned to EDS in August 1997, EDS sent out the initial promissory note in September, and EDS sent payment to the lender for four loans on October 20. After a second promissory note, covering the last loan, was signed and returned, EDS sent a check for the last loan on December 10. In another example, a borrower had three loans with the same disbursement date, and only one of them was entered into the system—the two others were overwritten. Thus, EDS did not pay these two loans and underpaid the lender for that borrower’s loans.
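The flaw in this edit check is easy to see in code. The following Python fragment is our reconstruction of the logic as described, using hypothetical data structures: keying records on the disbursement date alone collapses distinct loans, while keying on more of each loan’s identity does not.

    certified_loans = [
        {"disbursement_date": "1994-09-01", "loan_type": "subsidized", "amount": 19_000.00},
        {"disbursement_date": "1994-09-01", "loan_type": "unsubsidized", "amount": 4_000.00},
    ]

    # The edit check as described: one record per first disbursement date,
    # so the second loan silently overwrites the first.
    by_date = {}
    for loan in certified_loans:
        by_date[loan["disbursement_date"]] = loan
    assert len(by_date) == 1  # one loan is lost; the lender is underpaid

    # Keying on a fuller identity keeps both loans while still collapsing
    # true duplicates of the same certificate entry.
    by_identity = {}
    for loan in certified_loans:
        key = (loan["disbursement_date"], loan["loan_type"], loan["amount"])
        by_identity.setdefault(key, loan)
    assert len(by_identity) == 2  # both loans survive and are paid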
Lenders’ representatives said that other kinds of payment mistakes caused them problems. In cases in which verification certificates were sent to the wrong lender or to an incorrect lender address, EDS would also sometimes send its payments to the same mistaken address. This might happen because the lender did not correct the initial mistaken address. However, in other examples the lender had corrected the address but EDS did not enter the corrected address into its system. A payment, or other correspondence, was then sent to the same mistaken address to which the certificate had been sent.

Another source of mistaken payments concerned misidentification of loans in default at the time of consolidation. FDLP allows defaulted loans to be consolidated under certain circumstances; any costs incurred to collect the defaulted loan, up to 18.5 percent of outstanding principal and interest, are to be added to the borrower’s amount to be consolidated. Three data fields in EDS’ data system should have helped ensure that such loans could be identified. These fields showed (1) whether the loan was in default, (2) whether the loan holder was a private lender or a guaranty agency, and (3) whether collection costs were due. EDS’ records did not always contain consistent information in these fields. For example, for loans that were not in default, the system should have shown that the loan holder was a private lender, and no collection costs should have been assigned. However, for one of the examples we analyzed, the data system showed that the loan was not in default yet also showed that the borrower had been assigned collection costs. In this case, the loan actually was in default, but the system did not identify the inconsistency in the data. Because EDS’ data system computed its payment amount using only the “loan default” field, without separately checking the field showing whether any collection costs were due, the collection costs were not paid. EDS underpaid the lender for this loan and will have to process an adjustment to pay the lender for its collection costs.

Perhaps the most serious examples of problems were those in which loans in two borrowers’ accounts were intermingled. In these instances, at some point in EDS’ process of entering loan data into one borrower’s account, EDS staff erroneously entered loan data pertaining to a second borrower. In EDS’ system, the first borrower’s account then included a loan with the first borrower’s social security number (SSN) in the SSN field but a second borrower’s SSN in the “account number” field. The checks EDS sent to lenders similarly listed the first borrower’s name and SSN, but they had the second borrower’s SSN in the “account number” field. One borrower owed less than $100,000 but signed five promissory notes for a total of $190,000. For this borrower, EDS made some data entry errors, but it also entered data pertaining to a second borrower’s loans into this borrower’s account. Because of these errors, EDS overpaid the lender by more than $90,000 on this borrower’s account, with about $47,000 of this excess stemming from loans that belonged to the second borrower. When the lender received the checks, it saw the second borrower’s SSN, saw that the second borrower’s account had already been paid in full, and returned the checks to EDS.

In all, we found four examples of this type of error. In the second example, EDS sent three checks to a lender for a borrower with two loans. The third check was for a loan with a different nine-digit account number, which was the SSN of a different borrower whose loans were with the same lender. In the third example, a lender received five checks, each for less than $500, for one borrower but with another borrower’s SSN. In this example, the second borrower did not have any loans with that lender, so the lender did not recognize the account number as another borrower’s SSN. In the fourth example, only one check, for less than $1,000, was sent for the first borrower with the second borrower’s SSN. In this example, however, EDS sent a loan verification certificate for the second borrower that included the first borrower’s SSN in the “account number” field, so EDS apparently confused these two borrowers’ applications before the certification stage.
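Both of these record-level problems, inconsistent default and collection-cost fields and a second borrower’s SSN appearing in the “account number” field, are the kind of thing a pre-payment consistency check can flag. The following Python sketch is hypothetical: the field names, and the rule that the account number should match the borrower’s SSN, are inferred from the examples above rather than taken from EDS’ system documentation.

    def validate_loan_record(rec):
        """Return a list of cross-field inconsistencies to resolve before payment."""
        problems = []
        if not rec["in_default"] and rec["collection_costs"] > 0:
            problems.append("collection costs assigned to a non-defaulted loan")
        if rec["in_default"] and rec["holder"] != "guaranty agency":
            problems.append("defaulted loan not shown as held by a guaranty agency")
        if rec["account_number"] != rec["ssn"]:
            problems.append("account number does not match the borrower's SSN")
        return problems

    # The misidentified defaulted loan described above would be flagged:
    record = {"in_default": False, "collection_costs": 1_850.00,
              "holder": "private lender",
              "account_number": "123456789", "ssn": "123456789"}
    print(validate_loan_record(record))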
Mistakes in transferring completed consolidations to the FDLP servicing system meant that overpayments returned by lenders were not always corrected in a borrower’s servicing account. According to Education officials, data on completed consolidations are not always smoothly transmitted between the system maintained by EDS and two systems maintained by the loan servicing contractor. Because the systems have different edit checks, data on completed consolidations can be rejected by the system receiving the information and are placed in a “suspense” file of rejected transactions. Some of these transactions can eventually be applied automatically, while others must be dealt with manually. Education officials said that EDS has been working on cleaning up this file of rejected transactions since October 1997.

We analyzed some examples in which lenders’ overpayment refunds to EDS were not applied to the borrower’s account in the FDLP servicing system. The borrower we discussed earlier who had signed promissory notes totaling about $190,000, for example, was shown in FDLP servicing system records in February 1998 as owing $190,000, even though the lender who received the overpayments made refund payments to EDS totaling over $90,000 in May and September 1997. Also, for one borrower whose $58,000 loan EDS had paid twice, FDLP servicing system records continued to show the additional $58,000 as part of her loan balance in February 1998, although EDS received the lender’s refund in May 1997. In most examples we analyzed in which borrowers had not been credited for refunds made on their behalf, we verified that EDS received a refund, but we could not determine whether the refund transaction had been forwarded to the direct loan servicer, CDSI/AFSA, for posting in the FDLP servicing system.

However, in one case we determined that information reached the FDLP servicing system but had not been properly recorded. In this case, a lender sent 15 refund checks to EDS for overpayments it had received. The whole dollar amounts for 13 of the 15 checks—but not the cents amounts—were recorded in the servicing system in June and July 1997. The cents amounts for all 15 checks were recorded in December 1997, but not the whole dollar amounts for the remaining 2 checks, totaling about $19,000. Thus, all 15 checks were received by the FDLP servicing center, but the whole dollar amounts for the 2 unrecorded checks had not been credited to the borrower’s account as of February 1998. As a result, the borrower’s outstanding balance was about $19,000 more than it should have been.

Also, the system did not always ensure that subsidized loans correctly retained their subsidy after consolidation. Consolidation loans can contain a subsidized and an unsubsidized portion. If a borrower were to return to school or obtain certain other deferments, interest would accrue on the unsubsidized portion but not on the subsidized portion. The borrower described earlier with a net overpayment of about $114,000 because of two data entry errors had refunds properly posted to his account. His account showed that he owed about $65,000, the correct amount. However, his entire balance was shown as unsubsidized, even though about $40,000 of the $65,000 should have been subsidized. The two EDS data entry errors, and the manner in which refunds were applied, resulted in his subsidized loan being overwritten.
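A periodic reconciliation between the refunds EDS received and the credits posted in the FDLP servicing system would surface stranded refunds of this kind. The Python sketch below is a simplified illustration under our own assumptions: real matching would presumably key on check numbers and dates, which our examples do not include.

    def unposted_refunds(refunds_received, credits_posted, tolerance=0.01):
        """Return refund amounts received by EDS with no matching servicing credit."""
        unmatched = list(credits_posted)
        missing = []
        for amount in refunds_received:
            for i, credit in enumerate(unmatched):
                if abs(credit - amount) <= tolerance:
                    del unmatched[i]
                    break
            else:  # no posted credit matched this refund
                missing.append(amount)
        return missing

    # The $58,000 refund EDS received in May 1997 that still inflated the
    # borrower's balance in February 1998 would be reported here:
    print(unposted_refunds(refunds_received=[58_000.00], credits_posted=[]))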
Some lenders’ representatives also expressed frustration at what they believed was a lack of communication from Education and EDS. Lenders attributed this to EDS’ newness to the student loan arena and its apparent lack of familiarity with lenders. Lenders also said that while they had complained to Education staff about various problems with the consolidation process, no resolution had yet been reached. One lender’s representative said that lenders were provided no advance notice from either Education or EDS of the August 1997 shutdown of the FDLP consolidation program. After the shutdown, lenders were not provided any information about when consolidation operations might resume. The lender’s representative also said that toward the end of the shutdown, with no advance warning, the lender suddenly received an avalanche of loan payoff checks—far more than the usual volume—which it found difficult to process.

Lenders also said that EDS’ customer service to lenders had been uneven. One lender’s representative said that after her contact at EDS left the company, she had difficulty for more than 2 months finding another contact to help resolve problems. Another lender’s representative said that EDS did not provide follow-up to her calls and that she usually had to initiate follow-up calls. A third lender’s representative said that he had difficulty finding someone at EDS to assume responsibility for resolving his problems. His calls to EDS were transferred from person to person, ultimately leaving his problem unresolved. Lenders’ representatives said that while they had discussed problems with Education officials during EDS’ first year of direct loan consolidations, they had not heard back about any resolution. Education officials acknowledged that during EDS’ first year of direct loan consolidation, it did not do a good job of working with lenders on the consolidation process. These officials suggested that consolidation did not receive needed staff time to ensure the process was running smoothly because of other priorities within Education.
Pursuant to a congressional request, GAO reviewed problems lenders had with William D. Ford Federal Direct Loan Program (FDLP) consolidation loans, focusing on: (1) the nature and source of problems Federal Family Education Loan Program (FFELP) lenders have encountered in the direct loan consolidation program; (2) whether these problems affected lenders; and (3) steps the Department of Education and its contractor, Electronic Data Systems (EDS), are taking to correct these problems. GAO noted that: (1) lenders said their problems came primarily at two stages in the consolidation process--verifying loan data EDS provided and receiving payments for the loans being consolidated; (2) these problems occurred, in part, because borrowers provided poor information or EDS used inaccurate Education-provided data to identify lenders' addresses for loan verification requests; (3) regarding the payments lenders received, in some examples EDS sent inaccurate payments to lenders for loans being consolidated; (4) some lenders received overpayments because EDS paid for the same loan more than once; (5) other examples GAO analyzed had more serious problems, such as several instances in which EDS charged one borrower for a second borrower's loans; (6) however, lenders also received underpayments on occasion, which occurred because not all loans a borrower owed and wanted to consolidate were paid off; (7) in addition to the two problems lenders raised about the process, GAO found a flaw in the transfer of data from EDS to the FDLP servicing system; (8) GAO found that refunds that lenders made for overpayments were not always credited to a borrower's new consolidation loan account; (9) lenders' representatives said that problems associated with FDLP consolidations adversely affected their operations; (10) lenders said that their staffs had to repeatedly complete verification requests or call EDS to explain that a completed certificate had previously been returned; (11) lenders' officials also said that it took time for their staffs to resolve inaccurate payments; (12) in general, however, lenders could not quantify their costs of resolving FDLP consolidation problems; (13) both Education and EDS recognized that the consolidation process had problems prior to a 3-month shutdown during which new applications were not accepted; (14) officials from both Education and EDS said that they have taken new steps to improve FDLP consolidation processing; (15) some of the changes were made during the shutdown, others went into effect as GAO was conducting its study, when EDS again began accepting new applications, and others are still being implemented; (16) EDS has devoted more resources and made system changes to improve data quality throughout the process, it has started a pilot program for electronic loan data exchange with lenders, and it has begun a review of the first 1,000 post-startup applications with the goal of detecting remaining problems; and (17) lenders' representatives GAO talked with had mixed opinions about the effectiveness of these changes and said it was too early to evaluate them.
EPA used over two-thirds of its fiscal year 2003 budget on grants and contracts to carry out its environmental programs and obtain services. Out of a $7.6 billion fiscal year 2003 budget, EPA awarded $4.2 billion in grants and $934 million in contracts, as shown in figure 1. In fiscal year 2002, EPA made over 8,000 grant awards and amendments, covering 72 separate grant programs, to 4,100 grant recipients. EPA offers two types of grant programs—nondiscretionary and discretionary:

For nondiscretionary grants, Congress directs awards to prospective recipients who meet specific eligibility criteria, with awards often based on formulas prescribed by law or agency regulation. For example, nondiscretionary grants support water infrastructure projects, such as the drinking water and clean water state revolving fund programs, and continuing environmental programs, such as the Clean Air Program for monitoring and enforcing Clean Air Act regulations. In fiscal year 2003, EPA awarded about $3.6 billion in nondiscretionary grants, primarily to states and other governmental entities.

For discretionary grants, EPA has the legislative authority to independently determine the recipients and funding levels. These grants fund a variety of activities, such as environmental research and training. In fiscal year 2003, EPA awarded $656 million in discretionary grants, primarily to state and local government entities, nonprofit organizations, universities, and Native American tribes. In that year, EPA awarded about 40 percent of the discretionary grant dollars through program offices at EPA headquarters, while its 10 regional offices awarded the remaining 60 percent.

Additionally, at its own discretion, EPA took 6,745 contract actions totaling $934 million in fiscal year 2003. EPA’s contracting activities range from long-term cleanup and remediation support contracts under the agency’s Superfund program to contracts for research support at EPA laboratories, management consulting services, and janitorial services and building maintenance. With discretionary funding, EPA needs to choose the appropriate award instrument—a procurement contract, a grant, or a cooperative agreement. The Federal Grant and Cooperative Agreement Act of 1977 established governmentwide criteria that agencies must use in selecting the most appropriate award instrument.
Specifically: Procurement contracts are to be used when “the principal purpose of the instrument is to acquire (by purchase, lease, or barter) property or services for the direct benefit or use of the United States Government,” or when “the agency decides in a specific instance that the use of a procurement contract is appropriate.” Grant agreements are to be used when “the principal purpose of the relationship is to transfer a thing of value to the [recipient] to carry out a public purpose of support or stimulation authorized by [federal law],” and when “substantial involvement is not expected between the executive agency and the [recipient] when carrying out the activity contemplated in the agreement.” Cooperative agreements are to be used when “the principal purpose of the relationship is to transfer a thing of value to the [recipient] to carry out a public purpose of support or stimulation authorized by [federal law],” and when “substantial involvement is expected between the executive agency and the [recipient] when carrying out the activity contemplated in the agreement.”

Under the act, grants and cooperative agreements are closely related to one another; the essential distinction between them is the degree of federal involvement. EPA Order 5700.1 is the agency’s policy implementing the 1977 Act and guides EPA in its selection of the appropriate award instrument. The order’s purpose is “to clarify the criteria for and to achieve consistency in the selection and use of contracts, cooperative agreements and grants by all EPA offices and laboratories.” According to the order, the decision to use a contract or an assistance agreement (a grant or a cooperative agreement) must be based solely on the principal purpose of the relationship, and EPA offices and laboratories must determine whether the government is the direct beneficiary or user of the activity. The order identifies activities that must be funded through a contract, such as activities that produce specific information that EPA will directly incorporate into technical, policy, or regulatory decisions, and activities that may be funded through an assistance agreement, such as state and local government cleanup of hazardous waste sites. The order also gives examples to clarify areas of ambiguity, such as which instrument to select to fund a conference that EPA may be attending, and what qualifies as substantial involvement in the selection of a cooperative agreement. The order also specifies the roles and responsibilities of both EPA program and grants management offices, including the responsibilities of those personnel who handle funding and “technical, legal, and administrative evaluations.”

Additional project officer training guidance specifies that project officers at EPA headquarters and in regional program offices receive grant proposals resulting from agency advertisements and solicitations, or through a grantee’s unsolicited proposal. Project officers are responsible for ensuring that grants meet technical and programmatic requirements. EPA’s Office of Grants and Debarment develops agency grant policies and guidance and, through its grants management offices at headquarters and in the regions, is responsible for the administration and management of individual grants. Grants management offices work with project officers to evaluate whether individual grant proposals should be approved as a grant or a cooperative agreement, or referred to EPA’s Office of Acquisition Management, which oversees agency contracting.
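The act’s selection logic reduces to a small decision tree, which the following Python sketch illustrates. This is a simplification for exposition, not EPA’s actual procedure: the real determination also weighs direct benefit or use, support or stimulation, and statutory authority, as the decision memorandum requirements below reflect.

    def select_award_instrument(principal_purpose_is_federal_acquisition,
                                substantial_federal_involvement):
        """Simplified instrument selection under the 1977 Act."""
        if principal_purpose_is_federal_acquisition:
            # The government is the direct beneficiary or user of the activity.
            return "procurement contract"
        if substantial_federal_involvement:
            # Public purpose, with the agency substantially involved in the work.
            return "cooperative agreement"
        # Public purpose of support or stimulation, agency at arm's length.
        return "grant"

    # Information EPA will directly incorporate into a regulatory decision:
    print(select_award_instrument(True, False))   # procurement contract
    # State cleanup of a hazardous waste site with close EPA involvement:
    print(select_award_instrument(False, True))   # cooperative agreement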
Figure 2 describes the process that the EPA offices follow in choosing a grant or a contract. To document compliance with the 1977 Act, EPA Order 5700.1 requires that a designated approval official sign a decision memorandum, prepared by the responsible project officer, verifying the selection of the appropriate award instrument. Additional Office of Grants and Debarment guidance in its project officer training manual requires that the decision memorandum include, among other items, the objectives of the project or program, the total amount of the award, and a brief justification of why the award should be made as a grant or a cooperative agreement. Internal management reviews conducted by the Office of Grants and Debarment note that the justification should address the criteria identified in the order: principal purpose of the relationship, direct benefit or use, support or stimulation, and legislative authority to enter into a grant relationship. In addition, if the award is to be a cooperative agreement, the memorandum must include a description of the substantial federal involvement. For proposals to fund conferences or Web sites, the Office of Grants and Debarment has developed separate, specific guidance for project officers to use in determining whether EPA is the direct beneficiary of the conference or Web-site proposal.

EPA’s funding for discretionary grants and contracts followed similar trends from fiscal years 1993 through 2003, suggesting there has been limited migration between discretionary grant and contract funds in EPA’s budget over this period. However, the data EPA provided to us had little information on goods and services obtained and cannot be compared with each other to determine whether activities once funded under contracts are now being funded under discretionary grants. On the basis of our survey responses from recipients of discretionary grants that closed in fiscal years 2001 and 2002 and had project start dates after October 1, 1997, we estimate that the majority of discretionary grants’ goods and services fell into three categories: research and development; training, workshops, and education; and journals, publications, and reports. A large number of grants were also used to fund conferences and smaller presentations and meetings. Although fewer in number, discretionary grants used for cleanup and monitoring activities, such as support for state leaking underground storage tank programs, made up one of the largest dollar categories of discretionary grant funding among the spending categories we identified.

For fiscal years 1993 through 2003, both discretionary grant and contract spending show similar trends, as figure 3 shows. Both the overall and annual trends suggest there has been limited migration between discretionary grant and contract funds in EPA’s budget over this period. In total, EPA funded $6.4 billion in discretionary grants and $11.3 billion in contracts over the 11-year period. For discretionary grants, annual funding decreased by $18 million over the period, from $674 million to $656 million; annual funding for contracts decreased by $130 million, from $1.06 billion to $934 million—decreases of 3 and 12 percent, respectively. See table 7 in appendix II for annual funding levels for EPA discretionary grants and contracts for fiscal years 1993 through 2003. Table 8 in appendix II shows annual funding levels for EPA discretionary grants at EPA headquarters and regional offices for fiscal years 1993 through 2003.
EPA’s databases do not provide sufficient information to identify and track specific goods and services obtained with grants and contracts. EPA currently uses two databases for grant management purposes—the Grants Information and Control System (GICS) and the Integrated Grants Management System (IGMS). Both databases are useful for retrieving information about specific grants, but neither is useful in analyzing the kinds of goods and services funded by discretionary grants. For our grant analysis, EPA was able to query these databases by Catalog of Federal Domestic Assistance (the catalog) program codes. As shown in table 1, the catalog codes provide little information on goods and services obtained through discretionary grants because single codes can encompass broad miscellaneous groupings of goods and services, and several codes were merged with other codes between fiscal years 1993 and 2003.

In 6 of the 11 fiscal years, EPA program offices awarded the most EPA discretionary grant funds under a miscellaneous catalog code, 66.606, called Surveys, Studies, Investigations, and Special Purpose Grants. This code also received the most funds overall during the 11-year period, or $1.4 billion. Because this code is not program-specific, it was of limited use in drawing conclusions about the goods and services obtained under it. In 2002, the EPA Inspector General found that EPA could have awarded many of its assistance agreements under program-specific catalog codes that would better link activities to measurable assistance agreement outcomes, rather than under the miscellaneous 66.606 code. EPA substantially reduced its use of this code in 2003. In addition, several grant programs were merged under new catalog codes. As shown in table 1, discretionary grant funding under the Consolidated Research Grants program code rose, but this increase occurred because the code subsumed the Air Pollution Control Research and Water Protection Consolidated Research program codes. The Consolidated Research Grants program code is also a generic, miscellaneous catalog code and provides little information on the specific goods and services obtained under it. In fiscal year 2003, EPA awarded $128 million, or 20 percent of its discretionary grant dollars, under these two miscellaneous catalog codes. See table 9 in appendix II for EPA discretionary grant funding by catalog code from fiscal years 1993 through 2003.

Regarding contracts, we could not analyze trends in the goods and services EPA obtained through contracts for fiscal years 1993 through 2003. EPA’s contract data come from the Federal Procurement Data System, which changed its industrial coding categories in 1997; EPA adopted these changes in 2001. The original coding categories were the Small Business Administration’s Standard Industrial Classification (SIC) codes. The Federal Procurement Data System then switched to the North American Industrial Classification System (NAICS) codes. However, SIC codes categorize goods and services differently than NAICS codes do, and therefore we could not compare goods and services purchased across the 11-year period. Moreover, because EPA’s database could provide data only for major SIC and NAICS codes, we could not determine, except in a general way, the goods and services EPA obtained through contracts under these codes. For the SIC codes, we found that four codes accounted for 93 percent of all contract spending for fiscal years 1993 through 2000. Table 2 shows selected-year data for these codes.
As the table shows, Engineering, Accounting, Research, Management, and Related Services was consistently the highest category of contract spending, accounting for 58 percent of total contract spending for the period. Similarly, our analysis of NAICS codes shows that four codes accounted for 90 percent of the contract spending for fiscal years 2001 through 2003. Table 3 shows the five highest dollar contract spending codes for these fiscal years. See tables 10 and 11 in appendix II for EPA contract funding by SIC code (fiscal years 1993 through 2000) and NAICS code (fiscal years 2001 through 2003).

On the basis of our survey responses, we estimate that of all the goods and services indicated by grant recipients, 59 percent were in three categories: (1) research and development; (2) training, workshops, and education; and (3) journals, publications, and reports. These three categories accounted for the majority of grant funds, but we identified a total of eight categories from the survey responses, as shown in table 4. Although these results provide more information than catalog codes on goods and services, they apply only to discretionary grants closed out in fiscal years 2001 and 2002 that had project start dates after October 1, 1997. Discretionary grants used for cleanup and monitoring activities, such as support for state leaking underground storage tank programs, make up one of the largest dollar categories of discretionary grant funding among the spending categories we identified. We estimate that 15 percent of grants fall into this category, accounting for $56 million of the estimated $209 million spent on grants closed in fiscal years 2001 and 2002 that had project start dates after October 1, 1997. Table 12 in appendix II provides a more detailed description of the goods and services under the categories listed in table 4.

Although we were able to identify and categorize goods and services from survey responses, we could not link these to environmental results. According to EPA’s Grants Management Plan, released in April 2003, the agency plans to link grant performance to achievement of the agency’s five performance goals: clean air, clean and safe water, preserve and restore the land, healthy communities and ecosystems, and compliance and environmental stewardship. To implement this initiative, the agency planned to issue policy guidance in 2003 to ensure that all grant work plans, decision memorandums, and/or terms and conditions include environmental outcomes and how to measure them. On January 14, 2004, EPA’s Office of Grants and Debarment issued an interim policy order requiring program offices to include, in funding packages submitted to the grants management offices on or after February 9, 2004, a discussion of how a proposed project or program supports the goals of EPA’s Strategic Plan. Office of Grants and Debarment officials told us that they expected the final policy order to be issued in October 2004.

EPA has procedures to guide decisions on choosing a grant or a contract but often has not followed one of its most important procedures—documenting in its award decision memorandums the reasons for choosing a grant instead of a contract. We found that EPA’s procedures are generally more specific than those of other federal agencies that award substantial grant funds.
Although EPA’s procedures are more specific, in our detailed review of 67 EPA grant and cooperative agreement awards, we found that EPA often did not follow its requirements for documenting why it chose to award a grant instead of a contract. It is unclear whether this documentation shortcoming obscured inappropriate decisions to use grants instead of contracts. On the one hand, on the basis of our survey results, we estimate that 8 percent of EPA’s grantees would identify EPA as the grant’s primary and direct beneficiary. This estimate could suggest that the principal purpose of the award was to acquire property or services for EPA’s direct benefit, and that EPA should therefore have awarded some grants as contracts. However, for those grant recipients we surveyed who identified EPA as the grant’s primary and direct beneficiary, we could not determine from our file reviews and grantee interviews that the principal purpose of the award was to benefit EPA directly and that a contract should have been used instead. We found that both EPA and the public benefited, as in the case of a grantee who used EPA funds to develop waste management standards that the private sector, state and local governments, and EPA and other federal agencies could use. Because the principal purpose of an award is not always clear, it is important for EPA to carefully document its reasons for choosing a grant or a contract.

EPA’s policy and procedures for selecting the appropriate award instrument are generally more specific than those of other federal agencies that award substantial grant funds. See table 14 in appendix IV for more detailed information. As shown in table 5, our analysis of the award policies and guidance of the top 10 federal grant-making agencies shows that EPA’s policy on the selection of a funding instrument met all nine of the features we used to compare agencies’ policies for determining whether to award a grant, a cooperative agreement, or a contract. EPA Order 5700.1 includes the following features: the use of an internal control mechanism (the decision memorandum) to document the appropriate selection of a grant or a contract award instrument; the roles and responsibilities of grants management and program personnel in selecting and approving the appropriate award instrument; the statement that the type of recipient does not determine the award instrument; specific guidance and examples on handling awards for conferences, subgrantees, and “in-kind” assistance; the use of examples for awarding a grant, a cooperative agreement, or a contract; and the use of case-study material to supplement the examples and provide additional guidance. Although not included in the order, additional EPA Office of Grants and Debarment guidance requires that the decision memorandum include a description of the substantial involvement when a cooperative agreement is selected. Each of the agencies’ policy features is discussed in greater detail in table 14 of appendix IV.

EPA’s award policy requires a decision memorandum to document the selection of an award instrument, but our review of 67 decision memorandums showed that EPA often did not follow the award documentation requirements, as identified in Order 5700.1 and the project officer training manual and further expanded upon in internal management reviews conducted by the Office of Grants and Debarment.
Table 6 summarizes the problems we found with EPA award documentation from grant and cooperative agreement awards made at EPA headquarters and six EPA regional offices. Sixty-four percent, or 43 of the 67 decision memorandums reviewed, lacked adequate justification for selecting a grant instead of a contract at EPA headquarters and EPA regional offices. The decision memorandums did not fully address the criteria identified by EPA. These criteria include the principal purpose of the relationship, direct benefit or use, support or stimulation, and the legislative authority to enter into a grant relationship. Frequently, the justification used boilerplate language from EPA’s award policy citing that EPA was not the direct beneficiary of the award and that the grant was meant for a public purpose. Additionally, 84 percent, or 26 of the 33 decision memorandums reviewed for cooperative agreements, did not include justification for the award of a cooperative agreement instead of a grant, as required by EPA’s guidance, or the justification was not specific. These justifications were usually missing from the decision memorandums completely or simply stated that EPA would be substantially involved in carrying out the award without detailing the type or degree of involvement. Of the four grants we reviewed from one EPA region, none of the decision memorandums associated with the grant awards identified the statutory authority for making the award. For two awards, the appropriate signature was missing on the decision memorandums.

Internal reviews conducted by the Office of Grants and Debarment’s Grants Administration Division identified similar problems with documentation in award decision memorandums. According to a July 2003 internal management review of EPA Region 9, many of the memorandums did not fully explain the link between the statutory authority being selected and the specific grantee activities. Furthermore, almost all of the decision memorandums used a formula statement for the justification, rather than the criteria referenced in section 6 of Order 5700.1. The management review stated that the justification should address the criteria referenced in section 6 of the order, which it identified as the principal purpose of the relationship, direct benefit or use, support or stimulation, and the legislative authority to enter into a grant relationship. Internal management reviews of Regions 5 and 4, in July and August 2003, respectively, found similar problems with the regions’ award decision memorandums and stated that the decision memorandums should address the criteria referenced in section 6 of the order. The Grants Administration Division recommended that the Assistant Regional Administrator for Management in these regions strengthen the justifications for “contracts versus grants” and “statutory authority” with respect to discretionary grants. Internal review staff in the Office of Grants and Debarment with whom we spoke noted that although their reviews found documentation problems, they found no evidence that any grants should have been awarded as contracts.
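The documentation requirements these reviews test against amount to a short checklist, which could be sketched as follows. The field names and memo structure here are hypothetical, invented for illustration; the required elements themselves come from Order 5700.1 and the Office of Grants and Debarment guidance described above.

    REQUIRED_JUSTIFICATION_ELEMENTS = (
        "principal purpose of the relationship",
        "direct benefit or use",
        "support or stimulation",
        "statutory authority",
    )

    def review_decision_memo(memo):
        """Return documentation gaps of the kind the internal reviews found."""
        gaps = [element for element in REQUIRED_JUSTIFICATION_ELEMENTS
                if element not in memo.get("justification_elements", ())]
        if (memo.get("instrument") == "cooperative agreement"
                and not memo.get("substantial_involvement_description")):
            gaps.append("description of substantial federal involvement")
        if not memo.get("approval_signature"):
            gaps.append("approval official's signature")
        return gaps

    # A memo with only boilerplate justification would be flagged on four counts:
    memo = {"instrument": "cooperative agreement",
            "justification_elements": ("direct benefit or use",),
            "approval_signature": "J. Smith"}
    print(review_decision_memo(memo))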
Although EPA’s Office of Grants and Debarment has raised concerns about the lack of documentation in decision memorandums and made recommendations to strengthen them, officials in the Office of Grants and Debarment told us that the decision memorandum does not always reflect the full level of consideration given to the grant-or-contract issue because the decision memorandum is written by the project officer prior to the review by the grants management office. For example, after reviewing the decision memorandum, grants specialists in the Office of Grants and Debarment may request that the project officer provide clarifying information. Office of Grants and Debarment staff told us that evidence of the grant-or-contract discussions is often found elsewhere in the project file, such as in revisions to the recipient’s work plan or in clarifying e-mails sent back and forth between the project officer and the grants specialist during the review process. They told us that they consider these clarifying e-mails and other documentation an addendum to the decision memorandum, and that the decision memorandum is not always rewritten to reflect this process or any additional information that is developed. Finally, according to Office of Grants and Debarment officials, beginning in April 2004, EPA regions will be required to attach or enter documentation electronically justifying their decision to award a grant or a cooperative agreement instead of a contract, and headquarters offices are scheduled to begin this practice by the end of 2006.

We estimate that 88 percent of EPA’s grant recipients would identify the general public or entities other than EPA as the grant’s primary beneficiary, and that 8 percent would identify EPA. Specifically, we estimate that grant recipients would identify the primary beneficiary of their grants as the following: 13 percent—schools and universities; 6 percent—research and academic communities; 6 percent—business or private sector; 1 percent—nonprofit organizations; and 4 percent—don’t know or no response.

To determine whether EPA was the direct and primary beneficiary as the respondents had indicated, we reviewed EPA’s grant and project officer files for 20 grants and conducted interviews with those grant recipients. These recipients noted that while EPA benefited from the grant, other entities benefited as well. Our file reviews confirmed this statement. As EPA’s policy notes, there may be some cases in which EPA expects to derive some incidental use or benefit from funded activities. Such incidental use or benefit does not preclude a grant award when the principal purpose is public support or stimulation. Although some of these grants could arguably be described as having a principal purpose of acquiring property or services for the direct benefit or use of EPA, in which case a contract would have been the award instrument, we could not make this determination from our review of the files or grant recipient interviews. For instance, in 1998, EPA awarded a noncompetitive cooperative agreement to an international nonprofit organization that develops voluntary waste management standards that are used worldwide by industry, regulatory bodies, and individuals.
According to award documents and the award recipient, the main purpose of the award was to assist EPA in developing waste management standards that could be incorporated into EPA’s Resource Conservation and Recovery Act Program, and which could also be used by federal, state, and local regulatory bodies; industry; and individuals. During EPA’s internal review of the justification contained in the decision memorandum for the award, questions arose as to whether funding the proposed project as a cooperative agreement rather than a contract would violate provisions of the 1977 Act. Subsequent review by EPA legal counsel found that since the proposal focused on developing standards that could be used by both the public and private sectors, a case could be made that federal use would be incidental to the principal public purpose of support and stimulation, and awarding the project as a cooperative agreement could be justified. However, EPA legal counsel also pointed out that to fund the project as a cooperative agreement, reference to EPA direct use and benefit would have to be removed from the decision memorandum because “Technical materials that EPA uses to set guidelines or to prepare EPA guidance documents or manual must be obtained under a contract. EPA Order 5700.1, p. 10.” As such, EPA would have to delete portions of the memorandum that stated “These standards will form a nucleus from which OSWER will develop updated sampling guidance for inclusion in Chapter Nine of ‘Test Methods for Evaluating Solid Waste, Physical/Chemical Methods’ (SW-846), the RCRA test methods manual.” EPA staff modified the decision memorandum accordingly and awarded the project as a cooperative agreement. The EPA project officer acknowledged that EPA could have funded the project as a contract, but the agency instead took the steps necessary to award the project as a cooperative agreement because it was “a faster process.”

In another instance, in 1999, EPA awarded a cooperative agreement to a state Department of Environmental Quality (DEQ) to develop a wet-weather monitoring program for a watershed within that state. According to the award recipient, the state and the public benefited from this project, as well as EPA. However, EPA was under court order to develop Total Maximum Daily Loads (water quality indicators) for the watershed involved and opted to award a cooperative agreement to the state to perform this work on its behalf. The state DEQ did not actually perform the work involved but subcontracted the project to a state university laboratory. Our review of project files confirmed that EPA was not the only beneficiary of this project, but EPA might otherwise have chosen to contract directly with the university laboratory for the performance of this work.

According to officials we interviewed at the Office of Management and Budget (OMB), OMB does not review agencies’ policies implementing the 1977 Act. These officials stated that agencies have latitude to interpret the act. Although EPA has specific guidance to implement the Federal Grant and Cooperative Agreement Act, our review showed that EPA often did not follow its own requirements for adequately documenting in its decision memorandums the reasons for choosing a grant or a cooperative agreement instead of a contract.
Because an award may have multiple beneficiaries and the direct beneficiary of an award is not always easily discernible, it is important for EPA to fully document in its decision memorandums its reasons for choosing a grant or a contract. We recommend that the Administrator of EPA consider ways to improve project officers’ compliance with EPA’s requirement to properly document in award decision memorandums the justification for using a grant or a cooperative agreement instead of a contract.

We provided a draft of this report to EPA for comment. In response, we received oral comments from EPA officials, including the Director of the Office of Grants and Debarment. EPA officials agreed with the recommendation in our draft report and stated that they have already begun to take steps to implement it. Furthermore, EPA officials commented that they were pleased that we did not find any instances where a contract should have been awarded instead of a grant. EPA officials also commented that our review of decision memorandum documentation did not reflect the full level of consideration given by EPA when deciding whether to use a grant instead of a contract. While we included language in this report to reflect EPA’s comment regarding the full level of consideration given to the decision, both EPA and we have found, and agree, that the decision memorandum documentation justifying the use of a grant instead of a contract needs strengthening. Finally, EPA provided some clarifying comments that we incorporated into this report, as appropriate.

As agreed with your office, unless you release its contents earlier, we plan no further distribution of this report until 30 days from its issuance date. At that time, we will send copies of this report to the appropriate congressional committees; interested Members of Congress; the Administrator, Environmental Protection Agency; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff need further information, please call me at (202) 512-3841. Key contributors to this report are listed in appendix V.

Our objectives were to determine (1) the trends over the last 11 years in the Environmental Protection Agency’s (EPA) expenditures on grants and contracts and the types of goods and services obtained by each and (2) the extent to which EPA has and follows procedures for deciding when to use grants or contracts. Initially, we conducted a literature search to identify reports, studies, legislation, and other documents relevant to EPA’s management of grants versus contracts. Our work was closely coordinated with EPA’s Office of Inspector General (OIG) to prevent duplication with ongoing OIG efforts. To achieve our first objective, we interviewed and obtained documents from officials in EPA’s Office of Grants and Debarment and Office of Acquisition Management (OAM). These offices provided grant and contract award data to us for fiscal years 1993 through 2003. The grant trend data were drawn from EPA’s Grants Information Control System (GICS) and its Integrated Grants Management System (IGMS). The contract trend data were drawn from the Federal Procurement Data System.
We assessed the reliability of these databases by reviewing existing information and documentation about the data and the systems that produced them, and by interviewing Office of Grants and Debarment and OAM officials who were knowledgeable about the data and the checks and procedures used internally to verify data reliability, particularly with regard to financial information. Based on this information, we determined that these data were sufficiently reliable for the purposes of our report. EPA’s data were used to develop overall financial trends for grant and contract awards for fiscal years 1993 through 2003. Grant financial trends by Catalog of Federal Domestic Assistance (CFDA) codes were also developed for the 11-year period. However, contract financial trends could be developed only for fiscal years 1993 through 2000. For these years, EPA classified contract awards by Standard Industrial Classification (SIC) codes. These four-digit codes included 1,004 industries and classified businesses by the products or services they made available. Beginning in fiscal year 2001, EPA adopted the new North American Industrial Classification System (NAICS) codes. These six-digit codes included 1,170 industries and classified businesses based on the production processes used. Consequently, contract data for fiscal years 2001 through 2003 were not comparable with those of previous years. It was also not possible to develop trends regarding the specific goods and services obtained under EPA grants for the 11-year period because EPA’s automated databases do not track awards in this manner.

To determine the extent to which EPA has and follows procedures for deciding when to use either a grant or a contract, we reviewed the congressional hearing report covering the introduction and passage of the Federal Grant and Cooperative Agreement Act of 1977, provisions in the act itself, associated EPA implementation guidance such as EPA Order 5700.1, and the EPA Project Officer Training Manual. We also contacted EPA’s OIG and obtained and discussed past OIG reports regarding EPA grant versus contract management issues. To compare EPA’s grant versus contract award policies and procedures with those of other federal agencies, we compared the award policies and guidance of the 10 federal agencies that obligated the highest dollar value of assistance awards governmentwide in fiscal year 2002 with requirements spelled out in the 1977 Act. EPA ranked seventh in fiscal year 2002, with $4.2 billion in grant obligations. We then compared provisions of each agency’s policies with one another. The results of this comparison are summarized in table 5. We also sent a standard set of questions to grant managers and Inspectors General at each of these 10 agencies asking them to identify all implementation issues identified in their agencies’ award policies.

To help determine EPA compliance with provisions of the 1977 Act, as well as with its own implementation guidance, in selecting discretionary grants versus contracts, we drew a stratified random probability sample of 237 discretionary grants from a population of 2,163 discretionary grants for which the grant amount, according to EPA, was a positive dollar amount, and which was thought to represent all discretionary grants that had project start dates after October 1, 1997, and closed in fiscal years 2001 and 2002. Grants were stratified by whether they were issued by EPA headquarters or by an EPA regional office, and then by dollar amount.
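The weighting and interval arithmetic behind estimates like the 8 percent figure can be illustrated with a short calculation. The strata below are invented for illustration (two strata whose population and sample counts happen to total the 2,163-grant study population and the 174 analyzed responses); the actual design used different strata and allocations.

    import math

    # (population size, sample size, respondents identifying EPA as the
    # primary beneficiary) for each hypothetical stratum.
    strata = [(1_500, 120, 8), (663, 54, 6)]

    N = sum(pop for pop, n, hits in strata)

    # Weighted estimate: each stratum's sample proportion is weighted by
    # the share of the study population that stratum represents.
    p_hat = sum(pop * hits / n for pop, n, hits in strata) / N

    # Stratified variance with a finite population correction, then the
    # 95 percent confidence interval's half-width.
    var = sum((pop / N) ** 2 * (1 - n / pop)
              * (hits / n) * (1 - hits / n) / (n - 1)
              for pop, n, hits in strata)
    print(f"{p_hat:.1%} +/- {1.96 * math.sqrt(var):.1%}")

The report’s stated sampling errors of up to +/- 10 percentage points come from the same arithmetic applied to the actual strata.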
After the sample was selected, we found that some of the grants in the sample were earmark grants and were out of scope for this study. Also, we were unable to obtain information for a small portion of the remaining discretionary grants. We were ultimately able to analyze data from 174 discretionary grants. With this statistically valid probability sample, each discretionary grant in the study population had a nonzero probability of being included, and that probability could be computed for every grant. Each sampled discretionary grant was subsequently weighted in the analysis to account statistically for all discretionary grants in the study population, including those that were not selected. The study population data were drawn from EPA’s IGMS. We assessed the reliability of the IGMS data by (1) reviewing existing information and documentation about these data and the system that produced them, (2) interviewing EPA officials who were knowledgeable about these data, (3) performing electronic testing of the required elements, and (4) comparing grant recipients’ responses about the type of grant they received with the information in the database. Based on this information, we determined that these data were sufficiently reliable for the purposes of our report.

Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as 95 percent confidence intervals (e.g., +/- x percentage points). These are intervals that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will contain the true values in the study population. All percentage estimates from the file review have sampling errors (widths of 95 percent confidence intervals) of +/- 10 percentage points, unless otherwise noted.

To obtain grantee information regarding the grant sample selections, we developed a Web-based survey. We met with EPA headquarters officials and spoke with an EPA OIG official in developing the survey, and we pretested the survey with six grantees. These grantees were judgmentally selected to ensure coverage of large and small awards, headquarters- and field-awarded grants, and types of grant recipients. We asked these recipients to complete the survey over the Internet while we monitored their responses and checked their understanding of each question. After completion of the pretest, we interviewed the respondents to ensure that (1) the questions were clear and unambiguous, (2) the terms that we used were precise, (3) the survey did not place an undue burden on the recipients completing it, and (4) the survey was independent and unbiased. Technical corrections and adjustments were made to the survey based on the feedback we received. Grantees selected by our random sample were then contacted by telephone and e-mail. Information about accessing the survey was provided in a second e-mail, the survey was activated, and recipients were informed of its availability on July 18, 2003. The survey remained available until December 31, 2003. To ensure security and data integrity, each recipient was provided with a unique user name and password. No one else could access that survey or edit its data.
We also provided recipients with a pledge of confidentiality to ensure their candor in completing the survey. Of the 237 grantees surveyed, 213 were eligible sample cases. We used the results of 174, or 82 percent, of those responses to make population estimates. The results of our survey are summarized in appendix III. EPA project officer and grant specialist files were obtained and reviewed for 67 cases in which grantee responses indicated the possible existence of a contract versus a grant award relationship. To verify and clarify these responses, follow-up telephone interviews were conducted with 20 respondents who had identified EPA as the primary beneficiary of the award and/or indicated that EPA had directed purchases under the award, directed work outside the scope of the original grant work plan, and/or had passed on more than 75 percent of the awarded funds to subcontractors.

As described above, to help determine EPA compliance with provisions of the Federal Grant and Cooperative Agreement Act, as well as with its own implementation guidance in selecting discretionary grants versus contracts, we drew a stratified random probability sample of 237 discretionary grants from a study population of 2,163 discretionary grants that had project start dates after October 1, 1997, and closed in fiscal years 2001 and 2002, and for which the grant amount (according to EPA) was a positive dollar amount. Of the 237 grantees surveyed, 213 were eligible sample cases, and we used the results of 174, or 82 percent, of those responses to make population estimates. The population estimates for key questions from our survey are summarized below.

Q2. Was this financial assistance award made in the form of a grant or a cooperative agreement?
Q12. Were the project outputs/deliverables designed to directly benefit EPA?
Q15. Please indicate the primary beneficiary of the project outputs/deliverables.
Comparison of Q12 and Q15: respondents who indicated that the project outputs/deliverables were designed to directly benefit EPA (16.58 percent of respondents), compared with the primary beneficiary they indicated in Q15.
Q18. During the postaward phase, did EPA direct you to conduct activities outside the original scope of work?
Q21. Did EPA direct that you contract with a specific entity (organization or individual) under this grant/cooperative agreement?
Q22. Did EPA direct that you issue a subgrant to a specific entity under this grant/cooperative agreement?
Q23. Did EPA direct that contracts be awarded by you on a sole-source or noncompetitive basis under this grant/cooperative agreement?
Q25. Survey completion status by grant award dollars.

In addition to those individuals named above, Scott Heacock, Andria Key, Mike Rahl, Sid Schwartz, Rebecca Shea, Carol Herrnstadt Shulman, and Amy Webbink made key contributions to this report.
Grants and contracts constitute over two-thirds of the Environmental Protection Agency's (EPA) budget. In fiscal year 2003, EPA awarded $3.6 billion in grants directed by Congress, $656 million in grants awarded at its own discretion, and $934 million in contracts. Under the Federal Grant and Cooperative Agreement Act of 1977, whether EPA should award a grant or a contract depends upon the principal purpose of the award. In this context, GAO was asked to determine (1) the trends over the last 11 years in EPA's expenditures on discretionary grants and contracts and the types of goods and services obtained with each and (2) the extent to which EPA has and follows procedures for deciding when to use grants or contracts. EPA's funding for discretionary grants and contracts followed similar trends from fiscal years 1993 through 2003, suggesting limited migration between these funds in EPA's budget over this period. Although EPA grants data provide little information on the goods and services obtained with discretionary grants, GAO estimates, based on its survey of grantees whose grants had project start dates after October 1, 1997, and closed in fiscal years 2001 and 2002, that the majority of goods and services fell into three categories: (1) research and development; (2) training, workshops, and education; and (3) journals, publications, and reports. EPA has specific procedures to guide decisions on choosing grants or contracts but often has not followed a very important one: documenting in its award decision memorandums the reasons for choosing a grant instead of a contract. EPA procedures define staff roles and responsibilities, provide examples of when to use a grant or a contract, and require documentation in the award decision memorandum to justify the use of a grant or a contract. However, in 64 percent (43 of 67) of the memorandums GAO reviewed, EPA did not fully justify its reasons for choosing a grant instead of a contract. It is unclear whether this shortcoming obscured inappropriate decisions to use grants instead of contracts. On the one hand, GAO's survey results showed that an estimated 8 percent of EPA's discretionary grantees identified EPA as the primary and direct beneficiary. This estimate could suggest that the principal purpose of these awards was acquiring property or services for EPA's direct benefit and that EPA should have awarded some grants as contracts. However, for those grantees who identified EPA as the grant's primary and direct beneficiary, GAO's review of grant files and follow-up interviews indicated that some of these grants benefited both the federal government and the public and therefore could arguably have been awarded as either a grant or a contract.
Wildland fire triggered by lightning is a natural, inevitable, and necessary ecological process. Such fires periodically consume excess vegetation and renew the productivity of our nation’s ecosystems. However, in ecosystems that are adapted to frequent small, low-intensity fires, uncharacteristically large and intense wildland fires increasingly threaten catastrophic damage. Large, intense fires in these and other ecosystems also increasingly threaten human lives, health, property, and infrastructure in the wildland-urban interface. Uncharacteristically large, intense fires often are fueled by abnormally dense accumulations of vegetation in many forest and rangeland ecosystems. This excess vegetation is the result of several human land use and management practices, including decades of effective fire suppression activities that have reduced the normal frequency of the wildland fires that nature had periodically used to clear undergrowth and small trees. This vegetation, in turn, provides abnormally large amounts of fuel for fires, causing some to spread more rapidly, burn larger areas, and burn more intensely than normal. Such uncharacteristic fires are more common in warmer, drier climates, such as the interior western United States, and during periods of drought. Federal researchers estimate that these vegetative conditions exist on approximately 190 million acres (or more than 40 percent) of federal lands in the contiguous United States, although the estimate could range from 90 million to 200 million acres, and that these conditions also exist on many nonfederal lands. The acreage burned by wildland fire—after having declined nationally throughout most of the 20th century due to land management practices, including fire suppression—increased in the latter decades of the century. This increase was the result of more large fires, most of which occurred in the inland western United States, where many of the forests historically had frequent, smaller, and less intense fires. The trend toward increased acreage burned by wildland fire has continued into the 21st century, as illustrated in figure 1. For 2000 through 2003, the average number of acres burned annually on all lands nationally was 56 percent greater than the average burned annually during the 1990s. Our reviews over the last 5 years identified several weaknesses in the federal government’s management response to wildland fire. Specifically, we found that the land management agencies lacked an effective national strategy to respond to wildland fire, had shortcomings in addressing wildland fire issues at the local level, and had an ineffective system for accounting for wildland fire management efforts and monitoring results. We noted in a 1999 report that the federal government lacked a national strategy for reducing excessive national forest fuel levels and the associated catastrophic wildland fires. The agencies needed such a strategy to address numerous policy, programmatic, and budgetary factors that presented significant barriers to accomplishing fuel reduction goals. Among these barriers were program incentives that tended to focus on areas that may not present the greatest wildland fire hazards and very high costs for removing hazardous fuels. We also reported in 2003 that the Forest Service and Interior had issued national guidance on fuel reduction, but it was not specific enough for prioritizing fuels reduction projects.
Lacking such guidance, agencies could not ensure that local land management units were implementing the highest-priority fuels reduction projects nationwide. Our reviews also found shortcomings in the federal government’s implementation at the local level of various wildland fire management activities, such as preparedness, suppression, and rehabilitation. Over half of all local federal land management units had no fire management plans that met the requirements of the 1995 Federal Wildland Fire Management Policy. This national policy, jointly adopted by Agriculture and Interior and updated in 2001, established a goal to restore fire’s natural role in ecosystems consistent with human health and safety. The fire management plans are intended to help ensure the effective integration of local wildland fire management activities with planned uses of agencies’ lands so that unwanted wildland fire does not impair accomplishment of desired future conditions on these lands. The Forest Service and Interior also lacked basic data, such as the amount and location of lands needing fuel reduction, and research on the effectiveness of different fuel reduction methods on which to base their fire management plans and specific project decisions. Furthermore, coordination among federal agencies and collaboration of these agencies with nonfederal entities were ineffective. Such coordination and collaboration are needed because wildland fire is a shared problem that transcends land ownership and administrative boundaries, requiring cooperation among all parties. Finally, we found that better accountability in federal wildland fire management efforts was needed. Although the agencies had begun developing results-oriented performance measures to assess the effectiveness of treatments in reducing the risk of catastrophic wildland fires, they had no baseline from which to assess program performance. They also could not establish any meaningful performance measure and goal for reducing fuels because they lacked sufficient data on the location of lands at high risk of catastrophic fires as well as data on the cost-effectiveness of fuel reduction methods and their effects on other ecosystem resources. In particular, the agencies needed to develop performance measures that would focus their actions on reducing priority hazards and to better monitor the results of those actions. The federal government has made important progress over the last 5 years in improving its management of wildland fire. Nationally, it has worked to formulate a comprehensive strategy, established a priority to protect communities in the wildland-urban interface, and increased funding for wildland fire management activities, including fuels reduction and suppression. At the local level, it enhanced its data and research on wildland fire problems, made significant progress in developing local fire management plans, and improved coordination among federal agencies and collaboration with nonfederal partners. In addition, it strengthened its overall accountability for investments in wildland fire activities by establishing more meaningful goals and performance measures. Over the last 5 years, the federal government has been formulating a strategy known as the National Fire Plan, clarifying its priorities and increasing funding for wildland fire management activities. The National Fire Plan is not a single document. Rather, it is composed of several strategic documents that set forth a priority to reduce wildland fire risks to communities. 
To address this priority, the agencies, working with the states, identified a list of communities nationwide that are considered most at risk of wildland fire damage. While the recently enacted Healthy Forests Restoration Act of 2003 addresses risks to both communities and ecosystems, it emphasizes a priority for protecting wildland-urban interface communities by directing that at least 50 percent of funding for fuel reduction projects authorized under the act be allocated to wildland-urban interface areas. Although we have raised concerns about how the agencies have defined these interface areas, the accuracy and process they used in designating these communities and wildland-urban interface areas, and the specificity of their prioritization guidance, the act’s clarification of the priority for protecting communities provides a starting point for identifying and prioritizing funding needs. Forest Service and Interior appropriations for fuel reductions, as well as for other wildland fire management activities such as preparedness and suppression, have increased substantially over the past 5 years. In 1999, the Forest Service had not requested increased funding to meet the growing fuel reduction needs it had identified. As shown in table 1, overall appropriations for wildland fire management activities for both the Forest Service and Interior have nearly tripled in the past 5 years, from about $1 billion in fiscal year 1999 to over $2.7 billion in fiscal year 2004. While these increases include significant amounts for unanticipated suppression costs and preparedness funding, fuel reduction funding has quadrupled since 1999. Additionally, through the Healthy Forests Restoration Act of 2003, the Congress authorized $760 million per year to be appropriated for hazardous fuels reduction activities, including projects for reducing fuels on up to 20 million acres of land. The federal government also has improved the implementation of its wildland fire management activities at the local level. In particular, significant improvements in federal data and research on wildland fires have been made during the past 5 years. In 1999, the federal government lacked adequate data on the location and extent of hazardous fuels to use in selecting and designing fuel reduction projects. Since then, the agencies have jointly completed a mapping of fuels nationwide that classifies lands by differing fuel hazard levels. Although this mapping is not done at a small enough geographic scale to support decisions on the location and design of individual fuel reduction projects, it nevertheless represents a significant improvement over the information that was available in the past. In 2003, Agriculture and Interior approved funding for development of a geospatial data and modeling system, called LANDFIRE, to identify wildland fire hazards with more precision and uniformity than the existing hazardous fuels mapping and to enable comparisons of conditions between different field locations nationwide. When operational, LANDFIRE data and enhanced models of likely fire behavior thus will help identify the nature and magnitude of the wildland fire risks confronting numerous community and ecosystem resources, such as residential and commercial structures, species habitat, air and water quality, and soils. The agencies plan to use this information to better support their strategic decisions on preparedness, suppression, the location and design of fuel reduction projects, and other land management activities. 
Initial results from LANDFIRE have been promising. For example, a Forest Service official, who had used LANDFIRE to choose an approach for suppressing a fire in an area of Montana where the prototype system was developed, said he found it much better at identifying suppression options and their consequences than any other currently available data. LANDFIRE—estimated to cost $40 million—is scheduled for nationwide implementation in 2009. Local fire management planning also has been strengthened. As we reported in 2002, over half of the agencies’ land management units had not completed local fire management plans in accordance with the 1995 federal wildland fire management policy. The agencies subsequently adopted an expedited schedule to complete all of these plans in 2004, and agency officials told us that they believed they would meet this schedule. The agencies also adopted a common interagency template for preparing these plans to ensure greater consistency in their contents. Other critical improvements have been made in coordination among the federal agencies responsible for wildland fire management and in collaboration with nonfederal partners. In 2001, as a result of congressional direction to the agencies to involve the states as full partners in their efforts, Agriculture and Interior jointly adopted a 10-Year Comprehensive Strategy with the Western Governors’ Association. This strategy, and an implementation plan adopted in 2002, detail the goals, time lines, and responsibilities of the different parties for various actions related to a wide range of activities, including collaboration at the local level to identify fuel reduction priorities in different areas. Also, in 2002, the agencies established an interagency organizational body, the Wildland Fire Leadership Council, to improve coordination of their activities with each other and with nonfederal parties. The council is composed of senior Agriculture and Interior officials and nonfederal representatives. The council meets regularly to provide policy direction on a wide range of issues and decisions to foster necessary coordination and consistency among federal approaches, activities, and funding of various efforts. The federal government also made progress in accounting for the results it achieves from its investments in wildland fire management activities. In 1999, the Forest Service’s performance measure for fuel reductions, which measured only the total acres of fuel reductions accomplished, created an incentive to treat less costly acres rather than the acres that presented the greatest hazards. To rectify this shortcoming, the agencies adopted a performance measure that identifies the number of acres moved from high-hazard to low-hazard fuel conditions. This measure will allow them to better determine the extent to which their fuel reduction efforts accomplish the key goal of reducing risks to communities and ecosystems. The agencies also made progress in developing a system to monitor the effects of wildland fires. Without such information, they cannot determine the nature of threats or the likely effectiveness of different actions taken to address threats. In May 2004, the Wildland Fire Leadership Council approved a nationwide monitoring framework for wildland fire data, including data on fire severity, that may help address this problem.
While we also have said that an implementation plan for this monitoring framework is needed, the adoption of the framework nonetheless represents a critical step toward enhancing wildland fire management accountability for results. While federal land management agencies have made important progress over the past 5 years in addressing wildland fire management issues, they continue to face a number of challenges that must be met if they are to complete a cohesive strategy, one that explicitly identifies the available long-term options and the funding needed to reduce fuels on national forests and rangelands and to respond to the nation’s wildland fire threats. The nation’s wildland fire problems have been decades in the making and will take decades more to resolve. Without a cohesive strategy and better data, the agencies will have difficulty determining the extent and severity of the wildland fire problem, targeting and coordinating their efforts and resources, and resolving the problem in a timely and cost-effective manner. Moreover, without such a strategy and better data, the Congress will not have reliable information on when, how, and at what cost wildland fire problems can be brought under control. The federal government’s strategy documents adopted thus far, such as those associated with the National Fire Plan, establish a good framework for addressing our nation’s wildland fire problems, but these documents still need to identify the long-term options and funding needed to reduce and maintain fuels at acceptable levels. A clear understanding of the options and funding needs is essential to both the agencies and the Congress for determining the most effective and affordable approach. However, the agencies are not currently in a position to develop these options and identify related funding needs with any precision or reliability because they need to complete several steps, each with its own challenges. These steps include (1) completing and implementing the LANDFIRE data and modeling system so that the extent and location of wildland fire threats are more precisely known, (2) updating local fire management plans with more precise LANDFIRE information and the latest research so that the most promising wildland fire management practices are included to effectively address wildland fire threats, and (3) based on these plans, identifying the various national options and related funding needed to reduce fuels and respond to wildland fire threats. Recently, the agencies began an assessment of wildland fire threats that may provide a useful framework for completing a long-needed cohesive wildland fire management strategy. LANDFIRE is critical to identifying and addressing wildland fire threats to communities and ecosystems, but the agencies face several challenges in completing and implementing it. The agencies need LANDFIRE to more precisely identify the extent and location of wildland fire threats and to better target fuel reduction efforts. LANDFIRE is also needed to better reconcile the effects of fuel reduction activities with the agencies’ other stewardship responsibilities for protecting ecosystem resources, such as air, water, soils, and species habitat. Fuel reduction activities, such as controlled burning or mechanical treatments (using chainsaws and heavy equipment), can adversely affect these ecosystem resources if not done at the proper time and place.
For example, mechanically removing fuels with heavy equipment can adversely affect wildlife habitat and water quality in many areas, and controlled burning can cause air quality problems. The agencies also need LANDFIRE to help them better measure and assess their performance. For example, such data will enable the agencies to better identify the relative importance of reducing fuels on the highest-hazard lands versus maintaining conditions on low-hazard lands. As we have noted, a separate performance measure for maintaining conditions on these low-hazard lands is important so that their conditions do not deteriorate to more hazardous conditions while funding is focused on lands with high-hazard conditions. The agencies, however, face several challenges in implementing LANDFIRE. As we recently reported, the agencies lack a consistent approach to assessing the risks of wildland fires to ecosystem resources and an integrated, strategic, and unified approach to managing and using information systems and data, including systems such as LANDFIRE, in wildland fire decision making. Currently, software, data standards, equipment, and training vary among the agencies and field units in ways that hamper needed sharing and consistent application of the data. Although the Wildland Fire Leadership Council has recently chartered a National Wildfire Enterprise Architecture Steering Group to implement an action plan for more effectively sharing and using these data, these system and implementation problems are not yet resolved. Moreover, the agencies may have to re-examine the LANDFIRE data and models before implementing them. Recent research suggests that climate change might affect the nature, extent, and geographical distribution of the hazards identified in LANDFIRE, as well as the costs of addressing them, more adversely than previously understood. In August 2004, a panel—appointed by the Wildland Fire Leadership Council to investigate escalating suppression costs—reported that recent agency research suggested that climate change could have significant implications for the occurrence of wildland fire and the costs required to contain it. The research suggests that part of the recent increase in wildland fire has been caused by a shift in climate patterns and that this new pattern is likely to continue for decades, resulting in further increases in the amount of accumulated vegetation consumed nationally by wildland fire. Incorporating LANDFIRE data and recent research on addressing wildland fire threats into local fire management plans will be central to completing a cohesive long-term fuels reduction strategy. The fire management plans are important for identifying the fuel reduction, preparedness, suppression, and rehabilitation actions needed at the local level to more effectively address wildland fire threats. While these plans now are all scheduled for completion in December 2004, they will be based on outdated data once LANDFIRE is available. To improve the accuracy and usefulness of these plans, the agencies will need to update them when more detailed, nationally consistent LANDFIRE data become available within 5 years. The Forest Service indicated that this updating could occur during the agency’s annual review of fire management plans to determine whether any changes to the plans may be needed.
The agencies also will need to update their local fire management plans with recent agency research on the best approaches for more effectively addressing wildland fire threats. For example, a 2002 interagency analysis found that protecting wildland-urban interface communities more effectively—as well as more cost-effectively—might require locating a higher proportion of fuel reduction projects outside of the wildland-urban interface than currently envisioned, so that fires originating in the wildlands do not become too large to suppress by the time they arrive at the interface. Additionally, other agency research being field-tested in California and elsewhere suggests that placing fuel reduction treatments in specific geometric patterns can more effectively reduce the spread rate and intensity of wildland fires. As a result, agency officials believe the approach could provide more protection across the landscape than other approaches to locating and designing treatments, such as placing fuel breaks around communities and ecosystem resources. Moreover, these geometric fuel reduction patterns, because they are more efficient, reportedly may protect up to three times as many community and ecosystem resources as other approaches do for the same cost. As LANDFIRE is developed and fire management plans are updated, the agencies should become better positioned to formulate and communicate to the Congress a cohesive, long-term federal strategy that identifies the various options and related funding needed to reduce fuels and respond to our nation’s wildland fire problems. The agencies have several efforts under way that should help them identify these options and funding needs. In 2002, a team of Forest Service and Interior experts produced an estimate of the funds needed to implement eight different fuel reduction options for protecting communities and ecosystems across the nation over the next century. Their analysis also considered the impacts of fuel reduction activities on likely future costs for other principal wildland fire management activities, such as preparedness, suppression, and rehabilitation, if fuels were not reduced. The team concluded that reducing the risks to communities and ecosystems across the nation could require an approximate tripling of current fuel reduction funding to about $1.4 billion for an initial period of a few years. These initially higher costs would decline after fuels had been reduced enough to use less expensive controlled burning methods in many areas and more fires could be suppressed at lower cost, with total wildland fire management costs, as well as risks, being reduced after 15 years. Alternatively, the team said that not making a substantial short-term investment using a landscape focus could increase costs, as well as risks to communities and ecosystems, in the long term. More recently, however, Interior has said that the costs and time required to reverse current increasing risks may be less when other vegetation management activities are considered that were not included in the interagency team’s original assessment but that also can influence wildland fire. The interagency experts said their estimates of long-term costs could only be considered an approximation because the data used for their national-level analysis were not sufficiently detailed.
They said a more accurate estimate of the long-term federal costs and consequences of different options nationwide would require applying this national analysis framework in smaller geographic areas using more detailed data, such as that produced by LANDFIRE, and then aggregating these smaller-scale results. Agency officials told us that another management system under development—Fire Program Analysis (FPA)—also could be used to help identify long-term fuel reduction options and related funding needs. FPA, which is being developed in response to a congressional committee direction to improve budget allocation tools, is designed to identify the most cost-effective allocations of annual preparedness funding for implementing agency field units’ local fire management plans. Eventually, FPA will use LANDFIRE data and provide a smaller geographical scale for analyses of fuel reduction options. Thus, like LANDFIRE, FPA will be critical for updating fire management plans. Officials said that the FPA preparedness budget allocation system, when integrated with an additional component that is now being considered for allocating annual fuel reduction funding, could be instrumental in identifying the most cost-effective long-term levels, mixes, and scheduling of these two wildland fire management activities. The agencies began training employees in October 2004 for initial implementation of the preparedness budget component in February 2005. However, completely developing FPA, including the fuel reduction funding component, is expected to cost about $40 million and take until at least 2007 and perhaps as long as 2009. Finally, in May 2004, Agriculture and Interior began the initial phase of a wildland fire strategic planning effort that also might contribute to identifying long-term options and needed funding for reducing fuels and responding to the nation’s wildland fire problems. This effort, the Quadrennial Fire and Fuels Review, is intended to result in an overall federal interagency strategic planning document for wildland fire management and risk reduction and to provide a blueprint for developing affordable and integrated fire preparedness, fuels reduction, and fire suppression programs. Because this effort considers affordability, it may provide a useful framework for developing a cohesive strategy that includes identifying long-term options and related funding needs. The preliminary planning and analysis phases of this effort are scheduled to be completed in December 2004, followed by an initial report expected in March 2005. In our initial reporting on the wildland fire problem 5 years ago, we concluded that it would take many years for the federal government to successfully address all of the complex management challenges that wildland fire presents. Accordingly, as expected, much important work remains to be done. Nevertheless, over the last 5 years, federal agencies have laid a sound foundation for success, including initial data development and planning and establishing a constructive, collaborative dialogue with the states and others. This foundation will be important for meeting the key challenges the agencies face in completing a cohesive strategy for addressing the nation’s wildland fire problems. If the agencies’ progress to date toward developing a cohesive strategy is to be of enduring value, the agencies will need to complete ongoing efforts such as LANDFIRE, research, and local fire management plans.
The agencies need the results of these ongoing efforts so that they can develop a sufficiently detailed blueprint of the available and realistic long-term options and the related funding needed to address our nation’s wildland fire problems. Without such a blueprint, wildland fire will likely pose increasing risks not only to the nation’s communities and ecosystems but also to the tens of billions of dollars of federal budgetary resources that will be spent to respond to wildland fire over the coming decades. If these budgetary resources are not cost-effectively applied, then the risks to communities and ecosystems will not be reduced as much as intended or in the ways that are needed and desired. Critical to determining cost-effectiveness will be understanding the optimal timing of appropriation investments over the long term. Thus, a focus on long-term options and their costs provides necessary realism about the available choices for protecting communities and ecosystems and required cohesiveness among the actions needed to implement them. Conversely, without such a long-term focus, the agencies cannot ensure that the numerous collaborative efforts they undertake locally each year will add up to a cost-effective, affordable, long-term national solution. To date, the agencies have taken no clear actions and made no commitment to explicitly identify and communicate to the Congress long-term options and the funding needed to pursue them. In order for the Congress to make informed decisions about effective and affordable long-term approaches for addressing our nation’s wildland fire problems, it should have, as soon as possible, a broad range of long-term options and the related funding needed to reduce and maintain wildland fuels at acceptable levels and respond to wildland fire threats. We recommend that the Secretaries of Agriculture and the Interior provide the Congress, in time for its consideration of the agencies’ fiscal year 2006 wildland fire management budgets, with a joint tactical plan outlining the critical steps the agencies will take, together with related time frames, to complete a cohesive strategy that identifies long-term options and needed funding for reducing and maintaining fuels at acceptable levels and responding to the nation’s wildland fire problems. We received written comments on a draft of this report from the Forest Service on behalf of Agriculture and from Interior. Both departments generally concurred with our findings and recommendation but expressed concern about the time frame within which we recommended they provide the Congress with a joint tactical plan for completing a cohesive strategy to respond to wildland fire problems. We did not change our recommendation because we believe that the departments misunderstood this time frame and what we recommended that they provide within this period. The departments also provided technical comments that we have incorporated into the report, as appropriate. The Forest Service’s and Interior’s letters are included in appendixes III and IV, respectively, together with our evaluation of them. As arranged with your office, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to other interested congressional committees. We also will send copies to the Secretaries of Agriculture and the Interior and the Chief of the Forest Service. We will make copies available to others upon request.
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841 or at [email protected] or David Bixler at (202) 512-7201 or [email protected]. Key contributors to this report are listed in appendix V. Wildland Fires: Forest Service and BLM Need Better Information and a Systematic Approach for Assessing the Risks of Environmental Effects. GAO-04-705. Washington, D.C.: June 24, 2004. Federal Land Management: Additional Guidance on Community Involvement Could Enhance Effectiveness of Stewardship Contracting. GAO-04-652. Washington, D.C.: June 14, 2004. Wildfire Suppression: Funding Transfers Cause Project Cancellations and Delays, Strained Relationships, and Management Disruptions. GAO-04-612. Washington, D.C.: June 2, 2004. Biscuit Fire: Analysis of Fire Response, Resource Availability, and Personnel Certification Standards. GAO-04-426. Washington, D.C.: April 12, 2004. Forest Service: Information on Appeals and Litigation Involving Fuel Reduction Activities. GAO-04-52. Washington, D.C.: October 24, 2003. Geospatial Information: Technologies Hold Promise for Wildland Fire Management, but Challenges Remain. GAO-03-1047. Washington, D.C.: September 23, 2003. Wildland Fire Management: Additional Actions Required to Better Identify and Prioritize Lands Needing Fuels Reduction. GAO-03-805. Washington, D.C.: August 15, 2003. Wildland Fires: Forest Service’s Removal of Timber Burned by Wildland Fires. GAO-03-808R. Washington, D.C.: July 10, 2003. Wildland Fires: Better Information Needed on Effectiveness of Emergency Stabilization and Rehabilitation Treatments. GAO-03-430. Washington, D.C.: April 4, 2003. Results-Oriented Management: Agency Crosscutting Actions and Plans in Border Control, Flood Mitigation and Insurance, Wetlands, and Wildland Fire Management. GAO-03-321. Washington, D.C.: December 20, 2002. Wildland Fire Management: Reducing the Threat of Wildland Fires Requires Sustained and Coordinated Effort. GAO-02-843T. Washington, D.C.: June 13, 2002. Wildland Fire Management: Improved Planning Will Help Agencies Better Identify Fire-Fighting Preparedness Needs. GAO-02-158. Washington, D.C.: March 29, 2002. Severe Wildland Fires: Leadership and Accountability Needed to Reduce Risks to Communities and Resources. GAO-02-259. Washington, D.C.: January 31, 2002. The National Fire Plan: Federal Agencies Are Not Organized to Effectively and Efficiently Implement the Plan. GAO-01-1022T. Washington, D.C.: July 31, 2001. Forest Service Roadless Areas: Potential Impact of Proposed Regulations on Ecological Sustainability. GAO-01-47. Washington, D.C.: November 8, 2000. Reducing Wildfire Threats: Funds Should Be Targeted to the Highest Risk Areas. GAO/T-RCED-00-296. Washington, D.C.: September 13, 2000. Fire Management: Lessons Learned from the Cerro Grande (Los Alamos) Fire and Actions Needed to Reduce Fire Risks. GAO/T-RCED-00-273. Washington, D.C.: August 14, 2000. Fire Management: Lessons Learned from the Cerro Grande (Los Alamos) Fire. GAO/T-RCED-00-257. Washington, D.C.: August 14, 2000. Forest Service: Actions Needed for the Agency to Become More Accountable for Its Performance. GAO/T-RCED-00-236. Washington, D.C.: June 29, 2000. Park Service: Agency Is Not Meeting Its Structural Fire Safety Responsibilities. GAO/RCED-00-154. Washington, D.C.: May 22, 2000. Forest Service: A Framework for Improving Accountability. GAO/RCED/AIMD-00-2. Washington, D.C.: October 13, 1999.
Federal Wildfire Activities: Issues Needing Future Attention. GAO/T-RCED-99-282. Washington, D.C.: September 14, 1999. Federal Wildfire Activities: Current Strategy and Issues Needing Attention. GAO/RCED-99-233. Washington, D.C.: August 13, 1999. Western National Forests: Status of Forest Service’s Efforts to Reduce Catastrophic Wildfire Threats. GAO/T-RCED-99-241. Washington, D.C.: June 29, 1999. Forest Service Priorities: Evolving Mission Favors Resource Protection over Production. GAO/RCED-99-166. Washington, D.C.: June 17, 1999. Western National Forests: A Cohesive Strategy Is Needed to Address Catastrophic Wildfire Threats. GAO/RCED-99-65. Washington, D.C.: April 2, 1999. To identify the progress that federal land management agencies have made in addressing the threat posed by wildland fires over the past 5 years and the challenges that remain over the next 5 years, we reviewed past GAO, Congressional Research Service, and National Academy of Public Administration reports on wildland fires. We interviewed officials from the Forest Service and the Department of the Interior agencies that are responsible for wildland fire management and obtained data on acres burned from the National Interagency Fire Center in Boise, Idaho. We also interviewed and obtained data from Forest Service and Interior officials responsible for developing long-term fuel treatment options and costs, LANDFIRE, the Fire Program Analysis system, climate change estimates, fire management plans, performance measures, and the Quadrennial Fire and Fuels Review. In addition, we interviewed officials and obtained data from the National Academy of Public Administration and the Brookings Institution. We conducted our work between May 2004 and November 2004 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the USDA Forest Service’s letter dated December 10, 2004. 1. We did not change our characterization of the period over which progress was made because efforts made earlier than 4 years ago provided an important basis for subsequent progress, including (1) the September 8, 2000, report to the President from the Secretaries of Agriculture and the Interior that was used to inform the 2001 appropriation request and (2) the Forest Service’s formulation of its own fuel reduction strategy that was initiated in 1999. 2. We clarified the language of our report to make clearer our meaning that, although national guidance was issued, this guidance—as we have previously reported—was not specific enough for prioritizing fuels reduction projects. 3. We clarified the language in our report to make clearer our meaning that, by identifying landscape fuel hazards, LANDFIRE will help identify the risks to those resources. 4. We have included this observation in our report. However, we note that the agencies will need to ensure this is done because of (1) the likely impacts that the LANDFIRE and FPA systems will have on the fire management plans, (2) the importance of the plans for identifying aggregate national fuel reduction options and costs, and (3) the agencies’ past failures to keep these plans up-to-date, as our report notes. 5. We did not recommend that the long-term options and associated costs be identified in the joint tactical plan. Rather, we said that this joint tactical plan should specify the steps and related time frames that the agencies will take in completing a cohesive strategy containing options and costs.
In addition, we did not recommend that the joint tactical plan be provided concurrently with the agencies’ fiscal year 2006 budget submissions, but only that it be provided in time for the Congress’s deliberation of the agencies’ appropriations for fiscal year 2006. Should the agencies subsequently identify adjustments that need to be made to the tactical plan because of evolving LANDFIRE and FPA processes, they can so inform the Congress of those adjustments and the reasons for them. Because this is a long-term effort in which each year’s progress can have significant long-term fiscal, resource, and human safety consequences, we believe it is important from this point forward that the agencies more transparently identify for the Congress the specific steps they will undertake, and their associated time frames, for identifying long-term options and costs. Accordingly, we made no change to our recommendation. The following are GAO’s comments on the Department of the Interior’s letter dated December 10, 2004. 1. We did not recommend that the long-term options and associated costs be identified in the joint tactical plan. Rather, we said that this joint tactical plan should specify the steps and related time frames that the agencies will take in completing a cohesive strategy containing options and costs. In addition, we did not recommend that the joint tactical plan be provided concurrently with the agencies’ fiscal year 2006 budget submissions, but only that it be provided in time for the Congress’s deliberation of the agencies’ appropriations for fiscal year 2006. Should the agencies subsequently identify adjustments that need to be made to the tactical plan because of evolving LANDFIRE and FPA processes, they can so inform the Congress of those adjustments and the reasons for them. Because this is a long-term effort in which each year’s progress can have significant long-term fiscal, resource, and human safety consequences, we believe it is important from this point forward that the agencies more transparently identify for the Congress the specific steps they will undertake, and their associated time frames, for identifying long-term options and costs. Accordingly, we made no change to our recommendation. 2. We clarified the language of our report to make clearer our meaning that, although national guidance was issued, as we have previously reported, this guidance was not specific enough for prioritizing fuels reduction projects. 3. In reporting on the progress that has been made in clarifying priorities, we are merely noting that the act provided a good starting point for undertaking analysis to identify and prioritize funding needs. We are neither criticizing the emphasis that the agencies previously placed on protecting wildland-urban interface areas nor making an assessment of the act’s priorities, since our report notes that further analysis is needed to determine the most cost-effective allocation among priorities. 4. We clarified the language in our report to make clearer our meaning that, by identifying landscape fuel hazards, LANDFIRE will help identify the risks to those resources. 5. We agree these factors should be among those raised by climate change research that our report says should be considered in identifying long-term options and associated costs. 6. We have modified our draft to include the observation that Interior believes inclusion of this additional acreage would have substantially changed the outcome the team reported.
Our report already noted the interagency team’s view that the accuracy of the assessment’s outcomes will be improved by use of more detailed data, such as from LANDFIRE. However, we are encouraged by the departments’ commitment, expressed in both of their comments on our draft report, to use this type of analysis to identify and communicate to the Congress long-term fuel reduction options and costs, reversing a June 2002 decision by the Wildland Fire Leadership Council not to do so. We believe that the fulfillment of this commitment is needed to provide the Congress with a sufficiently informed understanding of the long-term consequences of different appropriation choices that it will need to make over the coming years and decades to adequately and cost-effectively address wildland fire management issues. 7. We did not change our characterization of the period over which progress was made because efforts made earlier than 4 years ago provided an important basis for subsequent progress, including (1) the September 8, 2000, report to the President from the Secretaries of Agriculture and the Interior that was used to inform the 2001 appropriation request and (2) the Forest Service’s formulation of its own fuel reduction strategy that was initiated in 1999. In addition to those named above, Jonathan Altshul, Barry T. Hill, Richard Johnson, Chester Joy, and Jonathan McMurray made key contributions to this report.
Over the past two decades, the number of acres burned by wildland fires has surged, often threatening human lives, property, and ecosystems. Past management practices, including a concerted federal policy in the 20th century of suppressing fires to protect communities and ecosystem resources, unintentionally resulted in a steady accumulation of dense vegetation that fuels large, intense wildland fires. While such fires are normal in some ecosystems, in others they can cause catastrophic damage to resources as well as to communities near wildlands, an area known as the wildland-urban interface. In 1999, GAO recommended that the Forest Service develop a cohesive strategy for responding to wildland fire threats. As a follow-up 5 years later, GAO was asked to identify (1) the progress the federal government has made in responding to wildland fire threats and (2) the challenges it will need to address within the next 5 years. Over the last 5 years, the Forest Service in the Department of Agriculture and land management agencies in the Department of the Interior, working with the Congress, have made important progress in responding to wildland fires. The agencies have adopted various national strategy documents addressing the need to reduce wildland fire risks; established a priority for protecting communities in the wildland-urban interface; and increased efforts and amounts of funding committed to addressing wildland fire problems, including preparedness, suppression, and fuel reduction on federal lands. In addition, the agencies have begun improving their data and research on wildland fire problems, made progress in developing long-needed fire management plans that identify actions for effectively addressing wildland fire threats at the local level, and improved federal interagency coordination and collaboration with nonfederal partners. The agencies also have strengthened overall accountability for their investments in wildland fire activities by establishing improved performance measures and a framework for monitoring results. While the agencies have adopted various strategy documents to address the nation's wildland fire problems, none of these documents constitutes a cohesive strategy that explicitly identifies the long-term options and related funding needed to reduce fuels in national forests and rangelands and to respond to wildland fire threats. Both the agencies and the Congress need a comprehensive assessment of the fuel reduction options and related funding needs to determine the most effective and affordable long-term approach for addressing wildland fire problems. Completing a cohesive strategy that identifies long-term options and needed funding will require finishing several efforts now under way, each with its own challenges. The agencies will need to finish planned improvements in a key data and modeling system, LANDFIRE, to more precisely identify the extent and location of wildland fire threats and to better target fuel reduction efforts. In implementing LANDFIRE, the agencies will need more consistent approaches to assessing wildland fire risks, more integrated information systems, and a better understanding of the role of climate in wildland fire. In addition, local fire management plans will need to be updated with data from LANDFIRE and from emerging agency research on more cost-effective approaches to reducing fuels.
Completing a new system designed to identify the most cost-effective means for allocating fire management budget resources, Fire Program Analysis, may help to better identify long-term options and related funding needs. Without completing these tasks, the agencies will have difficulty determining the extent and location of wildland fire threats, targeting and coordinating their efforts and resources, and resolving wildland fire problems in the most timely and cost-effective manner over the long term.
Periodic reexamination and reevaluation of federal agencies’ activities have never been more important than they are today. The federal government must address and adapt to major trends in our country and around the world. At the same time, our nation faces a serious, long-term fiscal challenge. Increased pressure also comes from world events: both from the recognition that we cannot consider ourselves “safe” between two oceans—which has increased demands for spending on homeland security—and from the U.S. role in combating terrorism and an increasingly interdependent world. Our country’s transition into the 21st century is characterized by a number of key trends, including the national and global response to terrorism and other threats to our personal and national security; the increasing interdependence of enterprises, economies, markets, civil societies, and national governments, commonly referred to as globalization; the shift to market-oriented, knowledge-based economies; an aging and more diverse U.S. population; rapid advances in science and technology and the opportunities and challenges created by these changes; challenges and opportunities to maintain and improve the quality of life for the nation, communities, families, and individuals; and the changing and increasingly diverse nature of governance structures and tools. As the nation and government policymakers grapple with the challenges presented by these evolving trends, they do so in the context of rapidly building fiscal pressures. GAO’s long-range budget simulations show that this nation faces a large and growing structural deficit due primarily to known demographic trends and rising health care costs. The fiscal pressures created by the retirement of the baby boom generation and rising health costs threaten to overwhelm the nation’s fiscal future. As figure 1 shows, by 2040, absent reform or other major tax or spending policy changes, projected federal revenues will likely be insufficient to pay more than interest on publicly held debt. Further, our recent shift from surpluses to deficits means the nation is moving into the future in a weaker fiscal position. The United States has had a long-range budget deficit problem for a number of years, even during recent years in which we had significant annual budget surpluses. Unfortunately, the days of surpluses are gone, and our current and projected budget situation has worsened significantly. The bottom line is that our projected budget deficits are not manageable without significant changes in “status quo” programs, policies, processes, and operations. Doing nothing is simply not an option, nor will marginal efforts be enough. Tough, difficult choices will have to be made. Clearly, the federal government must start to exercise more fiscal discipline on both the spending side and the tax side. While many spending increases and tax cuts may be popular, they may not all be prudent. However, there is no single solution to the problems we face; rather, a number of solutions are needed. It will take the combined efforts of many parties over an extended period to address these fiscal challenges successfully. One needed improvement is streamlining and simplifying the federal government’s organizational structure to make it more economical, efficient, effective, flexible, responsive, and accountable. This includes addressing both fragmentation of effort and duplicative, overlapping, and conflicting government programs, policies, and operations.
We need governmental organizations that embrace modern management practices of the 21st century, including a strategic human capital management approach. Streamlining the federal government to eliminate unnecessary redundancy and inefficient operations will help address our growing fiscal problems. It will not by itself solve the problem, but it certainly will help. It is important to reexamine periodically whether current programs and activities remain relevant, appropriate, and effective in delivering the government that Americans want, need, and can afford. This includes assessing the sustainability of the programs over time as well as the effectiveness of a range of tools—such as grants, loan guarantees, tax incentives, regulation, and enforcement—that are used to achieve results. Many federal programs—their goals, organizations, processes, and infrastructures—were designed years ago to meet the demands as determined at that time and within the technological capabilities of earlier eras. We currently have 15 departments and numerous independent agencies. The recent report of the Volcker Commission found that “fifty years have passed since the last comprehensive reorganization of the government” and that “the relationship of the federal government to the citizens it serves became vastly broader and deeper with each passing decade.” The commission recommended a fundamental reorganization of the federal government into a limited number of mission-related executive departments to improve its capacity to design and implement public policy. I believe that GAO’s past and present work supports the validity of this finding. As a result, we should begin to take the steps necessary to make this recommendation a reality. This hearing is one step toward doing so. I believe that a number of events over the last few years, combined with a greater understanding of broad trends, have fostered growing recognition that fundamental change is necessary. This presents the Congress and the executive branch with an opportunity to create highly effective, performance-based organizations that can strengthen the nation’s ability to meet the challenges of the 21st century and reach beyond our current level of achievement. Many departments and agencies were created in a different time and in response to problems and priorities very different from today’s challenges. Some have achieved their one-time missions, yet they are still in operation. Many have accumulated responsibilities beyond their original purposes. Others have not been able to demonstrate how they are making a difference in real and concrete terms. Still others have overlapping or conflicting roles and responsibilities. Redundant, unfocused, and uncoordinated programs waste scarce resources, confuse and frustrate program customers, and limit overall program effectiveness. Fundamental reexamination of federal agencies’ roles, functions, and structure is never easy. Reorganizing government can be an immensely complex and politically charged activity. Those who would reorganize government must make their rationale clear and build a consensus for change if proposed reorganizations are to succeed. All key players must be involved in the process—the Congress, the President, affected executive branch agencies, their employees and unions, and other interested parties, including the public. In recent years, events have driven us to reassess several major components of government.
In response to the events of September 11, 2001, the Department of Homeland Security was established. Seeing a pressing need, the government moved expeditiously to form this new department and thus consolidate many disparate homeland security functions under a single agency. However, the formation of the Department of Homeland Security is still a work in progress. In January of this year, we designated the implementation and transformation of the Department of Homeland Security as high risk. The size and complexity of the effort and the challenges the department inherited will require sustained attention over time for the department to reach its full potential. Driven in part by the events of September 11, 2001, the Federal Bureau of Investigation (FBI) is also undergoing a major transformation, including a multiphase reorganization, first announced in December 2001. The first phase is designed to strengthen the FBI’s management structure, enhance accountability, reduce executive span of control, and establish two new divisions for Records Management and Security. The second phase is designed to build, among other things, a national terrorism response capability that is larger and more mobile, agile, and flexible by shifting resources from other areas within the FBI. In June of this year, 18 months into the effort, we reported progress in several areas but noted that major challenges remain. These challenges included the continued need for a comprehensive transformation plan, an updated strategic plan, and a human capital strategic plan. The tragedy of Columbia has turned a spotlight on the weaknesses in the National Aeronautics and Space Administration’s (NASA) organization and culture. The recent report of the Columbia Accident Investigation Board made a number of very specific recommendations related to NASA’s organization. NASA now must take a hard look at its organizational structure and culture. While NASA has undertaken numerous programs that have greatly advanced scientific and technological knowledge, the agency is at a critical juncture, and major management improvements are needed. Earlier this year, we outlined several major management challenges at NASA in human capital, contract, and financial management, some of which have existed for years. Improved performance has been a primary goal of several other restructuring efforts under way. For example, the Internal Revenue Service (IRS) is in the midst of a long-term modernization. In addition, the Department of Defense (DOD) is in the process of transforming its business operations, and the U.S. Postal Service faces the challenge of transforming its business model for the 21st century. These are some recent examples of building consensus and undertaking restructuring to meet new or changed missions and goals. To a great extent, these changes were driven by catastrophic events. Even with dramatic events demonstrating the need for change, these reorganizations and transformations will not be easy. It is likely to be even more difficult to build consensus for reorganization and change when there is not such an event driving it. However, current trends, poor performance, and growing fiscal pressures demand that we make the effort. We simply cannot afford unnecessary redundancy and inefficiency in the government, especially in light of impending fiscal challenges, and taxpayers deserve better. GAO’s work has documented the widespread fragmentation and overlap in both federal missions and individual federal programs.
As new needs are identified, the common response has been to add new responsibilities and roles within federal departments and agencies, perhaps targeted to a newly identified clientele or involving a new program delivery approach. In the worst-case scenario, new programs are layered onto existing programs that have failed or performed poorly. Though our work also suggests that some issues, such as security, may warrant the involvement of multiple agencies or more than one approach, fragmentation and overlap often adversely affect the economy, efficiency, and effectiveness of the federal government. Last month, we issued a report, Opportunities for Oversight and Improved Use of Taxpayer Funds: Examples from Selected GAO Work. In this report, we highlight opportunities for and specific examples of legislative and administrative change that might yield budgetary savings. Several examples clearly illustrate the need to take a hard look at our organizational structures. The responsibilities of the four major land management agencies—the National Park Service, the Bureau of Land Management (BLM), the Fish and Wildlife Service within the Department of the Interior, and the Forest Service within the Department of Agriculture (USDA)—have grown more similar over time. Most notably, the Forest Service and BLM now provide more noncommodity uses, including recreation and protection for fish and wildlife, on their lands. In addition, managing federal lands has become more complex. Managers have to reconcile differences among a number of laws and regulations, and the authority for these laws is dispersed among several federal agencies as well as state and local agencies. These changes have coincided with two other developments—the federal government’s increased focus on downsizing and budget constraints and scientists’ increased understanding of the importance and functioning of natural systems, the boundaries of which may not be consistent with existing jurisdictional and administrative boundaries. Together, these changes and developments suggest a basis for reexamining the processes and structures under which the federal land management agencies operate. Two basic strategies have been proposed to improve federal land management: (1) streamlining the existing structure by coordinating and integrating functions, systems, activities, programs, and field locations and (2) reorganizing the structure by combining agencies. The two strategies are not mutually exclusive. Some small steps have been taken. For example, the Forest Service and BLM have colocated some offices or shared space with other federal agencies. However, more needs to be done. In 1987, the Congress passed the Stewart B. McKinney Act (Pub. L. No. 100-77) to address the multiple needs of homeless people. The act encompasses both existing and new programs. Over the years, some of the original McKinney programs have been consolidated or eliminated, and some new programs have been added. Today, homeless people receive assistance through these programs as well as other federal programs that are not authorized under the McKinney Act but are nevertheless specifically targeted to serve the homeless population. In February 1999, we reported that seven federal agencies administer 16 programs that serve the homeless population, with the Department of Housing and Urban Development (HUD) responsible for most of the funds. 
Consolidating all of the homeless assistance programs under HUD could increase administrative and operational efficiencies at the federal level as well as reduce administrative and coordination burdens for state and local governments, which also face fiscal challenges. Each of the three military departments (Army, Navy, and Air Force) operates its own health care system, providing medical care to active duty personnel, their dependents, retirees, and survivors of military personnel. To a large extent, these separate, costly systems perform many of the same administrative, management, and operational functions. Since 1949, numerous studies, the most recent completed in 2001, have reviewed whether a central entity should be created within DOD to manage and administer the three health care systems. Most of these studies encouraged some form of organizational consolidation. A DOD health agency would consolidate the three military medical systems into one centrally managed system, eliminating duplicative administrative, management, and operational functions. Similarly, there are potential benefits to be achieved by greater coordination with the veterans health care system. In an effort to save federal health care dollars, the Department of Veterans Affairs (VA) and DOD have sought ways to work together to gain efficiencies. For example, some local VA and DOD facilities have entered into joint venture agreements, pooling resources to build a joint facility or capitalizing on an existing facility. To maximize the use of federal health care dollars, this area needs continued attention. A multitude of agencies oversee food safety, with two agencies accounting for most federal spending on, and regulatory responsibilities for, food safety. The Food Safety and Inspection Service, under USDA, is responsible for the safety of meat, poultry, eggs, and some egg products, while the Food and Drug Administration, under the Department of Health and Human Services, is responsible for the safety of most other foods. The current food safety system emerged from a patchwork of often archaic laws and grew into a structure that actually hampers efforts to address existing and emerging food safety risks. Moreover, the current regulatory framework concentrates on only a segment—primarily food processing—of the continuum of activities that bring food from farm to table. The threat of deliberate contamination of the food supply and scientific and technical advances in the production of food, such as the development of genetically modified foods, have further complicated the responsibilities of the existing federal food safety structure. The food safety system suffers from overlapping and duplicative inspections, poor coordination, and inefficient allocation of resources. Consolidation of the federal food safety agencies under a single, independent agency or under a single department could improve both the efficiency and effectiveness of the system. These examples illustrate a few of the opportunities that exist to improve effectiveness and efficiency by reexamining the government’s organizational structure. As part of this reexamination, it is important to ask the fundamental question of whether an existing program, policy, or activity “fits” the work we face today and will face in the future. It is important not to accept all existing activities as givens while subjecting new proposals to greater scrutiny than existing ones undergo. However, such a fundamental reexamination is not easy.
Success will depend on establishing clear goals, having all the key players actively involved, and using a process that can help build consensus. Throughout the 20th century, efforts to structure the federal government to address the economic and political concerns of the time met with varying degrees of success. The first Hoover Commission, which lasted from 1947 through 1949, is considered by many to have been the most successful of government restructuring efforts. The membership of the commission was bipartisan, including members from the administration and both houses of the Congress. Half of the members were from outside government. The commission had a clear vision, making reorganization proposals that promoted what it referred to as “greater rationality” in the organization and operation of government agencies, and enhanced the President’s role as the manager of the government—principles that were understood and accepted by both the White House and the Congress. Former President Hoover himself guided the creation of a citizens’ committee to build public support for the commission’s work. More than 70 percent of the first Hoover Commission’s recommendations were implemented, including 26 reorganization plans. According to the Congressional Research Service, “the ease with which most of the reorganization plans became effective reflected two factors: the existence of a consensus that the President ought to be given deference and assistance by Congress in meeting his managerial responsibilities, and the fact that most of the reorganization plans were pretty straightforward proposals of an organizational character.” History teaches lessons that are applicable today. Those who would reorganize government must make their rationale clear and must build a consensus for change before submitting specific proposals to the Congress if these efforts are to succeed. To achieve substantive changes, it is important that all players, particularly the Congress and the President, agree on restructuring goals and establish processes that provide needed transparency in achieving their objectives. The processes used may vary depending on the significance of the changes sought. However, the risk of failure is high if key players are not involved and no processes for reaching consensus on specific reorganization proposals submitted to the Congress for consideration are in place. Having both the right processes and the right players is critical to success. Restructuring existing programs is part of the solution to meeting the challenges faced by our government. However, those decisions are not the end of the story. Restructuring is not easy and takes time to fully implement, even once consensus exists on specific proposals. This is why we have designated the implementation and transformation of the Department of Homeland Security as high risk. In addition to the implementation actions taken within the executive branch, congressional oversight throughout the implementation will be crucial to ultimate success. Regardless of the number and nature of federal entities, the government’s goal should be to create high-performing organizations. We need to look not only at what business we are in, but at how we do business. Practices that were good 50 years ago may not make sense today. Old, outdated practices and systems result in inefficiency and waste of resources that we cannot afford. Our work has identified opportunities to change how the government does business.
The following three examples illustrate opportunities to improve business practices and to make them more efficient and effective. USDA’s meat and poultry inspection system is hampered by inflexible legal requirements and relies on outdated inspection methods. Current law requires mandatory inspections that do not factor in risk. Inspectors continue to rely largely on their senses of sight, smell, and touch in making judgments about disease conditions, contamination, and sanitation. Microbial testing for such things as salmonella, listeria, and generic E. coli has increased but is still not sufficient. Legislative revisions could allow USDA to emphasize risk-based inspections. Much of the funding used to fulfill current, mandatory meat and poultry inspection activities could be redirected to support more effective food safety initiatives, such as increasing the frequency of inspections at high-risk food plants. Recently, GAO identified at least 21 different grant programs that can be used by the nation’s first responders to address homeland security needs. Multiple, fragmented grant programs can create a confusing and administratively burdensome process for state and local officials seeking to use federal resources to meet pressing homeland security needs. In addressing the fragmentation prompted by the current homeland security grant system, the Congress has taken the initial step of bringing many of these programs under the Department of Homeland Security. Additional administrative and legislative steps, such as block grants, waivers, and performance partnerships, might be considered. These approaches could provide state and local governments with increased flexibility while potentially improving intergovernmental efficiency and homeland security program outcomes. Better integration, including consolidation, of programs could yield administrative efficiencies that result in savings or improved performance. In taking any additional steps, it will be important to ensure accountability for both performance and funding. The U.S. presence at more than 260 overseas posts consists of more than 90,000 people (including dependents of federal workers). The workforce has been estimated at as many as 60,000 employees, representing over 30 agencies. The Department of State employs about a third of the U.S. workforce overseas, and its embassies and consulates have become bases for the operations of agencies involved in hundreds of activities. The costs of overseas operations and related security requirements are directly linked to the size of the overseas workforce. By reducing the number of employees at posts where U.S. interests are a lower priority, consolidating functions, establishing regional centers, or relocating personnel to the United States, the costs of overseas operations could be significantly reduced. In August 2001, the President’s Management Agenda noted that the U.S. overseas presence is costly, increasingly complex, and of growing security concern. It concluded that the cost and security considerations demand that the overseas staffing process be improved. Creating high-performing organizations will require a cultural transformation in government agencies and new ways of doing business. Hierarchical management approaches will need to yield to partnerial approaches. Process-oriented ways of doing business will need to yield to results-oriented ones.
“Siloed” organizations will need to become more horizontal and integrated to make the most of the knowledge, skills, and abilities of their people. Internally focused agencies will need to focus externally to meet the needs and expectations of their ultimate clients—the American people. Major programs and operations need urgent attention and transformation to ensure that the government functions as economically, efficiently, and effectively as possible. Management reform will be vitally important for agencies to transform their cultures to address the changing role of the government in the 21st century. The key to effective public management in the 21st century is to ensure that organizations have the characteristics and capabilities needed to effectively influence and leverage partners, people, processes, and technology to achieve results. As part of a continuing series of forums, GAO will convene a forum in November that will focus specifically on the implications of the public management environment in the 21st century for federal agencies as they strive to become high-performing organizations. This forum is intended to help identify key characteristics and capabilities of high-performing organizations in this environment, challenges facing federal agencies in transitioning into high-performing organizations, and ways in which the Congress and the executive branch can foster these transformation efforts. Strategic human capital management should be a centerpiece of any serious change management initiative or any effort to transform the cultures of government agencies. It is a vital element to the success of any government restructuring efforts, whether within an existing agency or across current agency boundaries. People are an agency’s most important organizational asset. An organization’s people define its character, affect its capacity to perform, and represent the knowledge base of the organization. Human capital issues have been a focus of this Congress and certainly this Subcommittee. They will require continuing attention. Since 2001, we have designated human capital as a governmentwide high-risk area. The Congress and the executive branch have taken a number of steps to address the federal government’s human capital shortfalls. However, serious human capital challenges continue to erode the ability of many agencies, and threaten the ability of others, to perform their missions economically, efficiently, and effectively. A consistent strategic approach to maximize government performance and ensure its accountability is vital to the success of any reorganization efforts as well as to transforming existing agencies. In a high-performing organization, human capital approaches are aligned with accomplishing missions and goals. Strategies are designed, implemented, and assessed based on their ability to achieve results and contribute to an organization’s mission. Leaders and managers stay alert to emerging demands and human capital challenges. They reevaluate their human capital approaches through the use of valid, reliable, and current data, including inventories of employee skills and competencies. Recruiting, hiring, professional development, and retention strategies focus on ensuring that an agency has the needed talent to meet organizational goals. Individual performance is clearly linked with organizational performance.
Effective performance management systems provide a “line of sight” showing how unit, team, and individual performance can contribute to overall organizational goals. The first step in meeting the government’s human capital challenges is for agency leaders to identify and make use of all the appropriate administrative authorities available to them to manage their people both effectively and equitably. The second step is for policymakers to pursue incremental legislative reforms. Most recently, the Congress has been considering legislative proposals for DOD. As we have previously testified, agency-specific human capital reforms should be enacted to the extent that the problems being addressed and the solutions offered are specific to a particular agency (e.g., military personnel reforms for DOD). In addition, targeted reforms should be considered in situations in which additional testing or piloting is needed for fundamental governmentwide reform. Moving forward, we believe it would be preferable to employ a governmentwide approach to address human capital issues and the need for certain flexibilities that have broad-based application and serious potential implications for the civil service system, in general, and for the Office of Personnel Management (OPM), in particular. Some examples that have been pursued include broadbanding, pay for performance, and reemployment and pension offset waivers. As federal agencies compete for resources, it is important to maintain a level playing field among agencies. However, whether through a governmentwide authority or agency-specific legislation, in our view, such additional authorities should be put in operation only when an agency has the institutional infrastructure in place to use the new authorities effectively. This institutional infrastructure includes, at a minimum, a human capital planning process that integrates the agency’s human capital policies, strategies, and programs with its program goals, mission, and desired outcomes; the capabilities to develop and implement a new human capital system effectively; and a modern, effective, and credible performance management system that includes adequate safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure the fair, effective, and nondiscriminatory implementation of the system. Transforming an organization is not an easy endeavor. It requires a comprehensive, strategic approach that takes leadership, time, and commitment. Because GAO is the agency that reviews others, we strive to lead by example. To create a model federal agency and world-class professional services organization, we have undertaken a comprehensive transformation effort over the past few years. Our strategic plan, which is developed in consultation with the Congress, is forward-looking and built on the key trends emerging at the beginning of the 21st century that were discussed earlier and relate to the United States and its position in the world community. We also have restructured our organization to align with our goals, resulting in significant consolidation—going from 35 to 13 teams, eliminating an extra organizational layer, and reducing the number of field offices from 16 to 11. We have become more strategic, results-oriented, partnerial, integrated, and externally focused. Our scope of activities includes a range of oversight-, insight-, and foresight-related engagements. We have expanded and revised our product lines to better meet client needs.
We also continue to provide certain legal and adjudicatory services, as specified in our authorizing legislation. In addition, we have redefined success in results-oriented terms and linked our institutional and individual performance measures. We have strengthened our client relations and employed a “constructive engagement approach” with the entities we review. The impact on our results has been dramatic. Client feedback reports show significant improvement, and results for several of our key performance indicators have almost doubled in only 4 years. There are four lessons to be learned from our experiences. First, one should not minimize how challenging it is for an organization to undertake a comprehensive transformation. Second, transformation is multifaceted and takes time. Our transformation began in 2000 and continues to be a work in progress. Third, transformation must be based on the best, most up-to-date management practices to reach its full potential. Fourth, transformation requires continual management commitment, monitoring, and oversight. Because of the 15-year terms for comptrollers general, GAO has the advantage of stable, long-term leadership that many other agencies do not have. However, our approach—based on best management practices—can serve as a guide to others. We employed a strategic, not an incremental, approach to transforming GAO. Our approach is based on a regularly updated 6-year strategic plan for serving the Congress. GAO’s strategic plan, which is currently being updated, established clear goals and objectives. Three goals are aimed at providing the Congress timely, quality service: (1) addressing challenges to the well-being and financial security of the American people, (2) responding to changing security threats and the challenges of global interdependence, and (3) transforming the federal government’s role and how it does business. Our fourth goal is to be a model federal agency and a world-class professional organization. Our strategic plan provides a firm foundation from which to identify priorities and opportunities for eliminating redundancies and improving operations. It is the basis for our workforce planning. It also sets the stage for maximizing our effectiveness and efficiency. Our strategic planning process provides for updates with each new Congress, ongoing analysis of emerging conditions and trends, extensive consultations with congressional clients and outside experts, and assessments of our internal capabilities and needs. Our strategic plan formed the basis for a major organizational realignment. This realignment focused the organization on our goals and resulted in significant streamlining. The process employed to accomplish the realignment required time, energy, and commitment from GAO’s senior leadership. Input was sought from GAO executives and employees at all levels throughout the process. Extensive communications with GAO staff and key congressional stakeholders were maintained on an ongoing basis. The result has been a more agile, effective, responsive, and accountable organization that has been able to respond effectively to the many new challenges presented to it. People are an organization’s most important asset. Modern, effective, and credible human capital policies are critical to the successful functioning of any enterprise. This has been the case at GAO. In 2000, we sought and received certain narrowly tailored human capital authorities, including early-out and buyout authorities.
We have used these authorities responsibly to strategically reshape GAO. In addition, we have implemented a comprehensive recruiting program, instituted a competency-based performance management system, made significant investments in training and staff development, and continued to refine our staffing process to maximize resource utilization. We continually seek to refine and improve our human capital practices. Recently, I have sought additional flexibilities for GAO to ensure quality service to the Congress; continue leading by example in government transformation; and continue to attract, retain, motivate, and reward a quality and high-performing workforce. I appreciated the support from you, Chairwoman Davis, and the Subcommittee on this request. Continual communication with GAO staff is a critical feature of our human capital strategy. Among other things, we periodically survey staff on a wide range of human capital and organizational issues. I am pleased to report that our latest comprehensive survey, completed last month, continued to demonstrate remarkably positive results. Finally, we are continually evaluating, reengineering, and refining our work processes to reflect the best management practices to ensure the most effective and efficient service delivery. For example, we have employed two new management strategies within the organization—risk management and matrix management. GAO’s risk management approach allows management to identify and involve internal stakeholders with needed subject matter expertise throughout an engagement to transcend traditional organizational boundaries, maximize institutional value, and minimize related risks. GAO’s matrix management approach maximizes our value to the Congress by leveraging the knowledge, skills, and experience of all employees to ensure the highest quality products and services and to help the Congress address the challenging, complex, changing, and multidimensional problems facing the nation. As part of this effort, we continually strive to provide GAO’s people with necessary tools, technology, and training, and a world-class working environment. GAO’s transformation can provide lessons about what can be accomplished. To measure ourselves, we use a balanced scorecard, measuring client service, results, and employees. On all three dimensions, we are reporting very positive results. To illustrate, in fiscal year 2002, GAO’s efforts helped the Congress and government leaders achieve $37.7 billion in financial benefits—an $88 return on every dollar invested in GAO, up from $19.7 billion and a $58 return in fiscal year 1998. The return on the public’s investment in GAO extends beyond dollar savings to improvements in how the government serves its citizens. The results in 2002 are attributable in part to the work we have done to transform GAO using a strategic, comprehensive approach. Similar benefits can be achieved in other governmental organizations. Building on GAO’s experience, a comprehensive approach grounded in a sound strategic plan and appropriate organizational alignment, and based on the best management practices, including human capital management, can yield optimal results in terms of effectiveness and efficiency. Successful transformation is not easy. It will take strong, committed, and persistent leadership, and it will take time. We are still working on it, but we are ahead of schedule and are pleased with our progress. The challenges facing our nation are many and difficult.
Clearly, there is a need to reexamine how the federal government is organized both in the executive and legislative branches. We need to reassess how the federal government does business. Fundamental questions need to be asked about what the federal government should be doing and who should be doing it, given past changes and 21st century challenges. Any major organizational change is both complex and controversial. In considering government restructuring and changes in business practices, it is important to focus not just on the present but on future trends and challenges. Identifying goals for addressing these trends and challenges can provide a framework for achieving the needed consensus. In fact, the effects of any changes will be felt more in the future than they are today. Because the world is not static and never will be, it is vital to take the long view, positioning the government to meet challenges throughout the 21st century. There is no easy answer to the challenges federal departments and agencies face in transforming themselves. Multiple actions are required. This is illustrated by the examples I have provided today. As the Congress moves forward, it will be important to keep three things in focus: goals, players, and processes. Clear goals are essential. Defining clear goals forces decision makers to reach a shared understanding of what really needs to be fixed in government, what the federal role ought to be, how to balance differing objectives, and what steps need to be taken to create not just short-term progress but long-term success. All key players must be engaged if viable solutions are to be achieved—this means the Congress and the President, as well as other parties with vested interests. Excluding key players increases the risk of failure. Finally, the process used must be tailored to the task at hand. Straightforward changes, such as the consolidation of agency payment operations, may call for agency-centered processes, requiring minimal involvement by the Congress or others. Other changes, such as revamping the U.S. food safety system, will require a process that involves key congressional stakeholders and administration officials as well as others, ranging from food processors to consumers. Even more ambitious changes, such as reorganizing the executive branch or rationalizing the existing federal infrastructure, will likely require commission approaches similar to the Hoover Commission that I discussed previously. On September 24, 2002, GAO convened a forum to identify and discuss useful practices and lessons learned from major private and public sector organizational mergers, acquisitions, and transformations that federal agencies could implement to transform their cultures successfully. While there is no one right way to manage a successful merger, acquisition, or transformation, the experiences of both successful and unsuccessful efforts suggest that there are practices that are key to their success. These key practices should be considered as federal agencies seek to transform their cultures in response to governance challenges. These practices include the following:
- Ensure that top leadership drives the transformation.
- Establish a coherent mission and integrated strategic goals to guide the transformation.
- Focus on a key set of principles and priorities at the outset of the transformation.
- Set implementation goals and a timeline to build momentum and show progress from day one.
- Dedicate an implementation team to manage the transformation process.
- Use the performance management system to define responsibility and ensure accountability for change.
- Establish a communication strategy to create shared expectations and report related progress.
- Involve employees to obtain their ideas and gain their sense of ownership of the transformation.
- Build a world-class organization.
Eliminating redundancy and improving federal operations are critical to meeting the challenges we are facing at the beginning of the 21st century. Chairwoman Davis has introduced the Government Accountability and Streamlining Act of 2003. This bill is aimed at stopping the creation of any additional unnecessary redundancy. As it considers this proposal, the Congress may also want to consider other options, such as reinstituting some form of budget controls, granting the President executive reorganization authority, establishing special commissions, and enhancing oversight. The Congress may want to consider giving federal departments and agencies additional tools to assist in the transformations that they undertake, including creating chief operating officer positions in selected departments and agencies and enacting human capital reforms. As I have emphasized, multiple approaches are needed to address not only future but also existing redundancy and inefficiency in federal operations. Each of the following seven tools has merit depending on the situation. Government Accountability and Streamlining Act of 2003. This proposal would require GAO to prepare statements for bills and resolutions reported by congressional committees and subcommittees on whether the responsibilities of any proposed new federal entities, programs, or functions are redundant. While I appreciate the respect for our work shown by this proposal, I also think it is important that we be practical in designing such a mandate. This kind of evaluation is very resource-intensive, and there are currently no agreed-upon criteria for determining whether an activity is actually duplicative or redundant. Each year, there are hundreds of bills proposed by committees alone. Though not all bills would have potential redundancy implications, the number might be significant and could affect our other work for the Congress. An alternative might be to provide the Chair of the House Committee on Government Reform and its Senate counterpart with the authority to request such an evaluation for any bill before it goes to the floor. At a minimum, some way to limit the number of bills analyzed would be necessary. Reinstitution of budget controls. The appropriations caps and “pay-go” requirements—which expired in 2002—limited the expansion and creation of new government programs and activities. Such controls could be beneficial given our current and future fiscal challenges. In addition, the reconciliation process could be used more to force trade-offs as well as a reexamination of existing programs. Executive reorganization authority. Earlier this year, the House Committee on Government Reform held hearings on reinstating the President’s executive reorganization authority. Though a bill has not yet been introduced, this authority could provide a useful tool in reexamining the federal government’s organizational structure. Essentially, it would reinstate the authority of the President to submit government restructuring plans to the Congress and obtain expedited review.
Such authority can better enable the President to propose government organization designs that would be more efficient and effective in meeting existing and emerging challenges. But it is important to achieve consensus on identified problems, needs, and solutions. The Congress has a vital role in this process. As I testified at the April 2003 hearing, some expedited congressional consideration may be appropriate for specific issues. However, the Congress may want to consider different tracks for proposals that encompass significant policy changes versus those that focus more narrowly on specific government operations. Special commissions. In the past, there have been special commissions chartered to examine and make recommendations on difficult structural issues. The most successful had both executive and bipartisan legislative branch support. For example, the first Hoover Commission had more than 70 percent of its recommendations implemented, including 26 of 35 reorganization plans. More recently, the Base Realignment and Closure process was used successfully to reduce unneeded defense assets. Provided there is a clear statement of goals and the process to be used, such commissions can provide an effective means of examining issues in depth and formulating recommendations for the consideration of the Congress. Enhanced oversight. A management and oversight process that is narrowly focused or one that considers only incremental changes, while beneficial, will not allow the government to reach its full performance potential. The government is composed of organizations, programs, and functions that are overlapping, fragmented, and interdependent. Structuring management and oversight only according to preexisting boundaries, whether they be executive departments or congressional committee structures, limits the full potential of any review. The importance of seeing the overall picture cannot be overestimated. It is important to ask the right questions. The traditional oversight that the Congress provides to individual organizations, programs, and activities has an important role in eliminating redundancy and inefficiencies. There are important benefits to be achieved through focused oversight if the right questions are asked about program design and management. Five key questions for program oversight are as follows:
- Does the program duplicate or even work at cross-purposes with related programs and tools?
- Is the program targeted properly?
- Is the program financially sustainable, and are there opportunities for instituting appropriate cost-sharing and recovery mechanisms?
- Can the program be made more efficient through reengineering or streamlining processes or restructuring organizational roles and responsibilities?
- Are there clear goals, measures, and data with which to track progress built into its planning and reporting systems?
Chief operating officer (COO). Transformation of a large organization is a difficult undertaking, especially in government. Success depends on committed, top-level leadership and sustained attention to management issues. A COO could provide the sustained management attention essential for addressing key infrastructure and stewardship issues and could facilitate the transformation process. Establishing a COO in selected federal agencies could provide a number of benefits.
A COO would be the focal point for elevating attention on management issues and transformational change, integrating various key management and transformation efforts, and instituting accountability for addressing management issues and leading transformational change. A COO would provide a single organizational position for key management functions, such as human capital, financial management, information technology, acquisition management, and performance management as well as for transformational change initiatives. To be successful, in many cases, a COO will need to be among an agency’s top leadership (e.g., deputy secretary or under secretary). However, consistent with the desire to integrate responsibilities, the creation of a senior management position needs to be considered with careful regard to existing positions and responsibilities so that it does not result in unnecessary “layering” at an agency. Consideration also should be given to providing a term appointment, such as a 5- to 7-year term. A term appointment would provide sustained leadership. No matter how the positions are structured, it is critical that the people appointed to these positions have proven track records in similar positions and be vested with sufficient authority to achieve results. To further clarify expectations and responsibilities, the COO should be subject to a clearly defined, results-oriented performance contract with appropriate incentives, rewards, and accountability mechanisms. For selected agencies, a COO should be subject to Senate confirmation. In creating such a position, the Congress might consider making certain subordinate positions, such as the chief financial officer, not subject to Senate confirmation. Governmentwide human capital reforms. There are a number of reforms that might be considered. As I have previously testified, the Congress should consider providing governmentwide authority to implement broadbanding, pay for performance, and other flexibilities, whereby whole agencies are allowed to use these additional authorities after OPM has certified that they have the institutional infrastructures in place to use them effectively and fairly. In addition to requiring a human capital strategic plan from each agency, the Congress should establish statutory principles for standards that an agency must have in place before OPM can grant additional pay flexibilities. Additional efforts should be taken to move the Senior Executive Service to an approach wherein pay and rewards are more closely tied to performance. Further, the Congress might consider establishing a governmentwide fund where agencies, based on a sound business case, could apply to OPM for funds to be used to modernize their performance management systems and ensure that those systems have adequate safeguards to prevent abuse. The governmentwide fund would provide for targeted investments needed to prepare agencies to use their performance management systems as strategic tools to achieve organizational results and drive organizational change. Government leaders are responsible and accountable for making needed changes to position the federal government to meet current and future challenges and to take advantage of emerging opportunities. In meeting this responsibility, leaders must take advantage of every tool that is available to them. Each of the seven tools that I have discussed has unique characteristics and benefits that can be highly effective depending on the goals to be achieved.
In view of the trends and fiscal challenges facing the nation, there is a need to consider the proper role of the federal government, how the government should be structured, how the government should do business, and in some instances who should do the government’s business. We cannot afford unnecessary redundancy and inefficient operations, and taxpayers deserve better. The federal government’s large and growing fiscal gap means that doing nothing is simply not an option. Tough choices will have to be made by elected officials. The Congress and the administration will need to use every tool at their disposal to address these challenges. In addressing these challenges, it will be important to set clear goals, involve all key players, and establish viable processes that will lead to positive and sustainable results. We in GAO take our responsibility to assist the Congress in these crucial efforts very seriously.
GAO has sought to assist the Congress and the executive branch in considering the actions needed to support the transition to a more high-performing, results-oriented, and accountable federal government. GAO provided perspectives on the federal government's overall structure and the need for reorganization to improve performance. Through normal evolution and inertia over the years, the United States now has a government that is weighed down by organizations with significant performance and management problems as well as duplicative and overlapping missions and functions. This situation is exacerbated by ways of doing business that, in some cases, are better suited for the beginning of the 20th century than the 21st century. Given the changed circumstances and stark fiscal realities, the nation simply cannot afford unnecessary, redundant, or inefficient organizations, programs, or operations.
To achieve directed force structure reductions, the Air Force has been reducing the number of F-15 and F-16 aircraft in its inventory. Between fiscal years 1991 and 1997, the Air Force plans to reduce its F-15 aircraft from 342 to 252. Over this same period, the Air Force plans to reduce its F-16 aircraft from 570 to 444. In 1991, F-15 and F-16 aircraft were configured in 42 squadrons. By fiscal year 1997, these aircraft will be configured in 37 squadrons. Until 1992, the Air Force predominantly organized its active fighter aircraft in wings of three squadrons, with 24 combat aircraft in each squadron. However, in 1992, the Air Force Chief of Staff directed that the squadrons be reduced to 18 aircraft. By 1997, most fighter squadrons will have been reduced to this smaller size, leaving only 54 aircraft in most wings. The Secretary of Defense has encouraged the services to consolidate forces wherever possible to reduce infrastructure and operating costs. However, the Air Force acknowledged in 1995 that while the force structure has been reduced by 30 percent, the supporting infrastructure has been reduced by only about 15 percent. The Air Force cited increased deployment flexibility and reduced span of control as the primary benefits of having smaller fighter squadrons. However, the Air Force has not demonstrated that these benefits are compelling. Moreover, the Air Force has neither documented instances of problems with deployment flexibility and span of control nor conducted studies that support its decision to use smaller squadrons. Air Force officials said that the primary benefit of using smaller-sized squadrons is increased operational deployment flexibility. With fewer fighters in the Air Force inventory, reducing squadrons to 18 aircraft increases the number of squadrons above the number there would have been had the aircraft been organized in traditional 24-aircraft squadrons. Air Force officials stated that these additional squadrons are needed to respond to conflicts that reflect the new security environment. This new security environment is characterized by multiple contingency operations and the possibility of two nearly simultaneous military regional conflicts. On the basis of our analysis of Air Force fighter assistance in recent contingency operations, it appears that the Air Force would have considerable deployment flexibility even if the aircraft remained in the former 24-aircraft configuration. We examined the three contingency operations that were ongoing during June 1995 that required Air Force F-15 and F-16 assistance. For two of the operations, the Commander in Chief (CINC) for each theater required less than one squadron’s worth of aircraft. For these operations, the Air Force rotated 18 squadrons of F-15s and F-16s (7 active and 11 reserve) to provide year-long coverage. We were told that for the third operation, the CINC’s requirement, which equated to one 18-aircraft squadron each of F-15s and F-16s, was met by rotating 6 F-15 and 6 F-16 continental United States (CONUS) based 18-aircraft fighter squadrons. We were advised that this number of squadrons was used because Air Combat Command (ACC) desired, for quality-of-life reasons, to maintain an 18-month interval between rotations for each squadron’s 3- to 4-month deployment overseas.
However, using ACC’s stated goal of 8 to 9 months between overseas deployments, the CINC’s requirements for this latter operation could have been met with only three to four fighter squadrons. If the Air Force deployed squadrons in accordance with ACC’s stated goal, a larger number of squadrons would not be needed, particularly since reserve squadrons are available to augment the active force. We also question whether DOD’s current military strategy requires the larger number of squadrons afforded by the 18-aircraft squadron design. The Bottom-Up Review specified that 10 fighter wing equivalents (72 aircraft each) would be needed for each of two anticipated major regional conflicts. The term “fighter wing equivalent,” however, underscores that fighter requirements are not stated in terms of squadrons but rather in terms of the number of aircraft. The Secretary of Defense’s fiscal year 1996-2001 Defense Planning Guidance states Air Force requirements in terms of total aircraft, not squadrons. Further, Air Force officials at ACC and the 9th Air Force headquarters (the U.S. Central Command’s air staff) said that requirements for CINC missions are computed by the number of aircraft needed to successfully execute the mission, not by the number of squadrons. Moreover, officials at the 9th Air Force headquarters stated that the primary use of squadron organizations in a regional conflict operation is to manage the daily flight shifts and that squadron structures become almost invisible because all aircraft are controlled by the theater’s air component commander. Thus, from the CINC’s perspective, the number of squadrons in which aircraft are organized is largely immaterial. Air Force officials told us that another benefit of smaller squadrons was “span of control”—the ability to manage personnel and the collective tasks for which they are responsible. Until recently, flight line maintenance and associated personnel were controlled by the wing. When this function was shifted to the squadron in 1991-92, a typical 24-aircraft squadron would have increased from about 85 to over 300 people. This fourfold growth, according to Air Force officials, would have weakened the commander’s ability to effectively manage people and missions. These officials believed that the reduced number of squadron aircraft helps to offset this effect because a smaller squadron reduces the number of squadron personnel. However, we found that reducing the squadron to 18 aircraft reduced personnel by only about 10 percent (about 30 people). The Air Force’s standard for span of control for maintenance squadron commanders is 700 people, about twice the number of personnel being supervised by flight squadron commanders. Although span of control may have been a perceived problem early in the Air Force’s wing reorganization, ACC officials are not aware of any instance where it has been raised as an issue. Discussions with a number of wing and squadron officials also indicated that the squadron commander’s span of control had not increased enough to be a problem. The Air Force’s reduction in squadron size was neither evaluated in a systematic manner nor supported by documented studies. For example, no assessment of benefits versus drawbacks of the appropriate squadron size was conducted, and there were no studies to support scenarios where more squadrons would be needed.
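The rotation arithmetic behind the three-to-four-squadron figure cited above can be made explicit. The following is a minimal illustrative sketch, not an Air Force model; the deployment and dwell lengths are assumed midpoints of the 3- to 4-month deployment and 8- to 9-month (or 18-month) interval ranges cited in this report.

```python
import math

def squadrons_needed(deployment_months: float, dwell_months: float) -> int:
    """Minimum squadrons required to keep one squadron continuously
    deployed, given a deployment length and a desired at-home (dwell)
    interval between each squadron's deployments."""
    cycle = deployment_months + dwell_months      # one squadron's full rotation cycle
    return math.ceil(cycle / deployment_months)   # squadrons needed to fill every slot in the cycle

# ACC's stated goal of 8 to 9 months between 3- to 4-month deployments
# (midpoints assumed) supports the "three to four squadrons" figure:
print(squadrons_needed(3.5, 8.5))    # -> 4
# The roughly 18-month interval actually used for the third operation
# implies the six-squadron rotation described above:
print(squadrons_needed(3.5, 17.5))   # -> 6
```

On this arithmetic, rotating at the stated dwell goal rather than the 18-month interval would meet the same continuous requirement with roughly half as many squadrons, which is the basis for the conclusion above.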
Some Air Force officials said that the basic rationale for moving to smaller squadrons was to minimize the reduction in wing and squadron commands as the number of aircraft in the force declined. We were told that the Air Force considered it inappropriate to identify command reductions during a period when the base realignment and closure (BRAC) process was ongoing because it would constitute an action that would prevent the BRAC process from proceeding as designed. According to Air Force officials, identifying changes that significantly reduce base facilities was against Air Force policy and the laws governing the BRAC proceedings. Although it is true that Department of Defense (DOD) entities were constrained from reducing force structure and closing bases beyond specified limits outside the BRAC process, the Air Force was not precluded from making recommendations on these matters during the BRAC process. In our view, such identifications would have facilitated the development of recommendations for base closures. Organizing the fighter force into 24-aircraft squadrons reduces the total number of squadrons and results in more economical operations than squadrons of 18 aircraft. For example, annual operating costs for 72 F-15s are about $12 million less if they are organized into squadrons of 24 aircraft instead of squadrons of 18. We calculated the savings from staffing standards and cost estimates provided by Air Force officials, using an Air Force cost estimation model (a more detailed description of our methodology is in app. III). The annual savings are primarily due to reduced military personnel requirements in such areas as command, staff, administrative, and maintenance. The salary costs associated with reduced military personnel requirements account for about 70 percent of the total savings, of which over 90 percent is enlisted pay. Also, larger squadrons allow maintenance specialty shops to be used more efficiently, requiring little or no change in staffing. Other savings occur due to reduced training, medical services, supplies, and base operating support. The Air Force could modify its current configuration of fighter aircraft in a more cost-effective manner to increase the number of squadrons with 24 aircraft. This modification would entail consolidating some existing F-15 and F-16 squadrons with other squadrons to maximize base utilization. Our four illustrative options (which are presented in detail in app. I) would yield annual savings ranging from $25 million to $115 million. ACC officials we contacted stated that bases that previously had 24 aircraft per squadron and 72 aircraft per wing should be able to return to that level. Our review of Air Force base closure capacity analysis data indicated that most fighter wings on CONUS bases could increase squadron size to previous levels with little or no additional cost. For example, a capacity analysis prepared by Moody Air Force Base (AFB) officials stated that Moody will retain the capacity to support 2 additional fighter squadrons and increase 2 of its 18-aircraft F-16 fighter squadrons to 24 aircraft. Similarly, wing personnel at Shaw AFB and Langley AFB indicated that their installations could absorb 6 more aircraft per squadron or 18 per wing with no additional costs. These officials stated that because their bases previously had 24 aircraft per squadron and facilities were sized for 24 aircraft, returning to 24 would pose little to no problem.
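The per-squadron economics behind the $12 million figure can be illustrated with a simple model. The staffing and pay figures below are assumptions calibrated to reproduce the report's result (about 70 percent of savings in military pay); they are not the Air Force staffing standards or the cost estimation model actually used in this analysis.

```python
def annual_overhead_cost(squadrons: int,
                         overhead_staff_per_sqdn: int = 224,    # assumed command/staff/admin/shop positions
                         avg_military_pay: float = 37_500.0,    # assumed average annual pay, mid-1990s dollars
                         nonpay_per_sqdn: float = 3_600_000.0) -> float:
    """Cost that varies with the number of squadrons: per-squadron overhead
    staff plus non-pay overhead (training, medical services, supplies, base
    operating support). Costs that scale with the number of aircraft are the
    same in both configurations and cancel out of the comparison."""
    return squadrons * (overhead_staff_per_sqdn * avg_military_pay + nonpay_per_sqdn)

# 72 F-15s can be organized as 4 squadrons of 18 or 3 squadrons of 24.
saving = annual_overhead_cost(4) - annual_overhead_cost(3)
pay_share = (224 * 37_500.0) / (224 * 37_500.0 + 3_600_000.0)
print(f"annual saving: ${saving/1e6:.1f} million, {pay_share:.0%} of it pay")
# -> annual saving: $12.0 million, 70% of it pay
```

On this accounting, consolidating 72 aircraft from four squadrons into three eliminates one squadron's fixed overhead, which is the mechanism the report describes.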
Moreover, maintenance personnel at these bases stated that much of the support equipment could handle six additional aircraft with little additional investment. Deployment personnel at the 20th Fighter Wing at Shaw AFB stated that the support equipment for 24 aircraft would take the same number of transport planes to move as that for a squadron of 18 aircraft.

Air Force officials at different levels of command cited several factors that should be considered when consolidating aircraft into fewer squadrons and wings. These factors include keeping aircraft with the same state of modernization and mission characteristics together. In addition, they stated that aircraft engines should be compatible at least within the squadron and preferably throughout the wing. Other factors officials said should be considered include the availability of training areas, the impact on the CONUS/overseas mix, and the capacity of the receiving base to accept the additional aircraft and related personnel and equipment.

Air Force officials noted that different modernization upgrades and specialized mission equipment can make F-16 aircraft very different from one another. For instance, newer F-16s have improved avionics that require different logistical support than earlier versions of the F-16. In addition, some aircraft have specialized equipment, such as the equipment needed to perform the night ground attack mission. Air Force officials stated that specialized training is required for pilots to perform this mission and believe that mixing aircraft that have this capability with aircraft that do not will reduce unit readiness. Air Force officials also stated that having either F-15 or F-16 aircraft with different engines in the same wing complicates maintenance. For instance, different engines, whether from the same or different manufacturers, can generate unique maintenance requirements. Because different support equipment and maintenance skills may be needed for various engines, maintaining different types of engines at the same wing can strain maintenance resources and ultimately reduce the availability of deployable aircraft. Additionally, Air Force officials said that any restructuring that affects aircraft outside the United States must consider agreements with foreign governments that govern the number of aircraft based in these countries. In general, the number of aircraft should not change materially.

Considering the factors that Air Force officials believe are most important when consolidating forces, we developed four alternatives for reorganizing the F-15 and F-16 fighter force. Our alternatives generally did not commingle aircraft with different engine types, modernization upgrades, or mission characteristics. We also kept the U.S./overseas basing mix and the number of aircraft in each theater relatively constant, and we varied the number of aircraft in the Air Force's composite wings. These options ranged from restructuring only fighter aircraft in the United States to restructuring all F-15s and F-16s worldwide.

The "CONUS Only" alternative we developed is projected to save the Air Force about $25 million annually in operating costs. This would be achieved by increasing 6 existing fighter squadrons to 24 aircraft and eliminating 2 squadrons. The alternative of consolidating fighter squadrons worldwide would consolidate the F-15 and F-16 aircraft into 7 fewer squadrons than the Air Force currently plans, increasing 17 squadrons to 24 aircraft and 2 squadrons to 30 aircraft. This alternative could save the Air Force a projected $115 million annually.
Our other two alternatives would yield savings between these amounts. Consolidating aircraft at fewer bases would also help the Air Force identify excess base infrastructure and candidate bases for closure. For example, three of the four alternatives would eliminate all fighter aircraft from at least one base, suggesting the potential for a base closure. If a base closure could be executed with savings similar to what DOD estimated for similar bases during the 1995 BRAC process, annual savings would average about $15 million for each of the first 6 years and about $50 million in each ensuing year.

Air Force officials at headquarters and ACC expressed concerns about implementing our alternatives without the support of DOD and Congress. They stated that past efforts to move aircraft from a base without an equal substitution for the losing base have not succeeded. In their opinion, if the Air Force leadership decided to implement options to increase squadron and wing size back to 24 and 72 aircraft, respectively, the Air Force would need the support of both DOD and Congress.

We recommend that the Secretary of Defense, in his efforts to reduce DOD's infrastructure costs, require the Secretary of the Air Force to develop an implementation plan to operate the Air Force's fighter force in larger, more cost-effective squadrons. If the Secretary of Defense believes that the plan could reduce costs, he should seek congressional support for it. DOD concurred with our findings and recommendation. DOD's comments are reproduced in appendix II. A detailed explanation of our scope and methodology appears in appendix III. We conducted this review from February 1995 to February 1996 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the Secretaries of Defense and Air Force and interested congressional committees. We will also make copies available to others upon request. Please contact me at (202) 512-3504 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix IV.

We developed and refined four alternatives that demonstrate that the Air Force could organize its fighter aircraft more cost-effectively. Underpinning our analysis were principles that the Air Force cited as important. These factors included keeping the continental United States (CONUS)/overseas basing mix relatively constant; avoiding mixing aircraft with different modernization upgrades (blocks), mission characteristics, and engines; balancing capability throughout theaters; and assessing receiving base capacity. While these principles are plausible, our options vary the extent to which the principles were applied in order to gain greater economies. Moreover, the Air Force has not rigidly adhered to these principles. For example, different engines are contained in the F-15 wing at Eglin Air Force Base. The Air Force also plans to mix F-16s with different blocks.

The following tables compare the Air Force's planned fiscal year 1997 mix of 18- and 24-aircraft squadrons at each base with the mix of squadrons that would be achieved with each of our four alternatives. Preceding each table, we describe the specific factors we considered in developing each alternative.

This alternative consolidates squadrons that are located in CONUS only. Under this alternative, fighter aircraft would remain at the same number of bases as the Air Force currently plans. The number of aircraft in one composite wing would change.
Bases would be restricted to having the same aircraft that were in the Air Force's plan. This alternative would result in annual operating cost savings of $25 million. Table I.1 compares the Air Force's planned basing with alternative one.

This alternative consolidates squadrons and uses one fewer base than currently planned by the Air Force. To execute this alternative, less than one full squadron of aircraft would have to be shifted from CONUS to overseas bases. Two different aircraft blocks would be mixed, which is comparable to the Air Force's plan. The number of aircraft at two composite wings would change. Also, aircraft other than F-15s and F-16s would have to be relocated to fully execute this alternative. This alternative would result in annual operating cost savings of $59 million. Table I.2 compares the Air Force's planned basing with alternative two.

This alternative consolidates fighters at one fewer base than currently planned by the Air Force. The number of aircraft in three composite wings would change. One squadron at base 4 would have 30 aircraft. One squadron substitution between the Air Force's active and reserve components would be necessary. Some aircraft would be exchanged between theaters. Two different aircraft blocks would be mixed at one wing, which is comparable to the Air Force's plan. This alternative would result in annual operating cost savings of $101 million. Table I.3 compares the Air Force's planned basing with alternative three.

This alternative consolidates fighters at one fewer base than currently planned by the Air Force. The number of aircraft at two composite wings would change. One squadron at base 4 and one squadron at base 6 would have 30 aircraft each. One squadron substitution would be required between the Air Force's active and reserve components. Also, aircraft would be exchanged between theaters. Two different aircraft blocks would be mixed at one wing, which is comparable to the Air Force's plan. This alternative would result in annual operating cost savings of $115 million. Table I.4 compares the Air Force's planned basing with alternative four.

The objective of this review was to evaluate the cost-effectiveness of operating the fighter forces in smaller squadrons and the implications this might have for the Secretary of Defense's efforts to reduce defense infrastructure. Our review focused on the Air Force's active component fighter aircraft, primarily the C and D models of the F-15 and F-16. To evaluate the benefits resulting from reduced squadron sizes, we interviewed officials in various Air Force headquarters offices, such as the Force Programming Division; the Combat Forces Division of the Directorate of Forces; the Combat Forces of the Directorate of Programs and Evaluation; and the Air Operations Group. We also interviewed Air Combat Command (ACC) officials, including officials from various staff functions and from the 33rd Fighter Wing, the 1st Fighter Wing, and the 20th Fighter Wing. Additionally, we interviewed officials from the U.S. Central Command Air Forces headquarters. We examined a variety of Air Force documents, including peacekeeping and Gulf War deployment records, staffing requirements and historical staffing levels, and various studies and analyses. We also reviewed the Secretary of Defense's Defense Planning Guidance and Joint Strategic Capabilities Plan and the Air Force's War Implementation and Mobilization Plan.
To calculate the cost implications of operating smaller squadrons, we obtained estimated annual operating costs for F-15 and F-16 fighters from Air Force headquarters cost-modeling officials. Separate estimates were provided for squadrons of 18 and 24 aircraft in the U.S., Pacific, and European theaters. These estimates are based on staffing estimates that we developed using planning factors provided by the Air Force. The planning factors included the number of officer and enlisted personnel in squadron overhead, flight crew, and maintenance positions for independent and dependent squadrons.

To provide these data, the Air Force used its Systematic Approach to Better Long Range Estimating (SABLE) model, an automated model that uses various cost and planning factors to estimate the peacetime operating and support costs of flying units. Operating costs include cost elements in the operation and maintenance, military personnel, and other procurement appropriations. Within these appropriations, the major cost categories include military and civilian pay, aviation fuel, depot-level repairables, and consumable supplies. These costs are estimated for each type and model of aircraft within each major command. The SABLE model addresses only variable costs, not fixed costs. Similarly, it captures direct costs but few indirect costs, such as the costs of maintaining the base and runway. The SABLE model produces general cost estimates to evaluate force structure options. The estimated savings do not include any military construction, base closure, or other costs that may be associated with transferring aircraft from one specific location to another.

Since 70 percent of the estimated cost savings resulted from reduced military personnel, our reliability assessment consisted of an analysis of the reasonableness of the military personnel planning factors provided by the Air Force. In conducting this assessment, we interviewed ACC manpower officials who developed the personnel factors used for squadrons located at U.S. bases. Since maintenance positions accounted for over 80 percent of the military personnel savings, we also reviewed the Logistics Composite Model (LCOM) that ACC officials used in developing their maintenance personnel factors. We also interviewed fighter wing and squadron command and maintenance officials at Langley, Eglin, and Shaw Air Force Bases and toured wing and squadron maintenance and flight line areas. We also reviewed historical staffing data covering the period when the wings at these bases previously had squadrons of 24 aircraft.

To develop and evaluate alternatives for consolidating active F-15 and F-16 squadrons, we analyzed the force structure organization at all bases that had combat F-15 and F-16 squadrons from 1991 to the present, as well as the Air Force's plans through 2001. We also reviewed and analyzed the base capacity assessment completed by each fighter base as part of the 1995 base realignment and closure (BRAC) process. Additionally, we met with various officials from Air Force headquarters and ACC to identify and understand factors that would constrain the consolidation of these fighter aircraft. We also discussed squadron consolidation and constraining factors with fighter wing officials such as the wing commander, squadron commanders, maintenance officers, and facility and air space managers. The baseline for our alternatives was the Air Force's planned fighter force structure for fiscal year 1997.
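Because SABLE is an internal Air Force tool, we cannot reproduce it here. The sketch below is a deliberately simplified, hypothetical model of the same general form, building variable, direct peacetime costs up from per-squadron planning factors; every factor name and value is an illustrative assumption rather than a SABLE input. It shows why fixed per-squadron overhead makes three squadrons of 24 aircraft cheaper to operate than four squadrons of 18.

```python
# A minimal sketch of a SABLE-style variable-cost estimate. All planning
# factors below are hypothetical placeholders, not actual Air Force data.
def squadron_annual_cost(aircraft, overhead_personnel, maint_per_aircraft,
                         crew_per_aircraft, avg_pay, flying_cost_per_aircraft):
    """Annual variable operating cost of one squadron: personnel pay plus
    per-aircraft flying costs (fuel, depot-level repairables, consumable
    supplies). Fixed and most indirect costs, such as base and runway
    upkeep, are excluded, as in SABLE."""
    personnel = overhead_personnel + aircraft * (maint_per_aircraft + crew_per_aircraft)
    return personnel * avg_pay + aircraft * flying_cost_per_aircraft

# The same 72 aircraft organized two ways. Squadron overhead (command,
# staff, administration) is roughly fixed per squadron, so fewer, larger
# squadrons carry less total overhead.
cost_three_of_24 = 3 * squadron_annual_cost(24, 60, 9, 2, 45_000, 600_000)
cost_four_of_18 = 4 * squadron_annual_cost(18, 60, 9, 2, 45_000, 600_000)
print(f"Illustrative savings from 24-aircraft squadrons: ${cost_four_of_18 - cost_three_of_24:,.0f}")
```

In the Air Force's actual planning factors, maintenance staffing also varies with squadron size (maintenance positions accounted for over 80 percent of the military personnel savings), so this fixed-overhead sketch understates the mechanisms at work.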
Our alternatives ranged from restructuring only fighter aircraft in the United States to restructuring all F-15s and F-16s worldwide. These options were discussed in open critiques with Air Force officials from both Air Force headquarters and ACC. Our alternatives did not attempt to address political or international policies affecting basing decisions.

Fred Harrison, Evaluator-in-Charge
Dan Omahen, Senior Evaluator
GAO reviewed the cost-effectiveness of the Air Force's reconfiguration of F-15 and F-16 fighters into smaller squadrons, focusing on the consequences this might have for the Secretary of Defense's efforts to reduce defense infrastructure costs. GAO found that: (1) while smaller 18-aircraft squadrons provide more deployment flexibility than 24-aircraft squadrons, the larger configuration provides enough deployment flexibility to meet the Air Force's needs; (2) the ability of squadron commanders to manage the personnel and tasks of 24-aircraft squadrons has not proved to be a problem; (3) the Air Force's decision to reduce squadron size from 24 to 18 aircraft was not based on systematic analysis or documented studies; (4) using 24-aircraft squadrons instead of 18-aircraft squadrons could reduce costs; (5) by consolidating some existing F-15 and F-16 squadrons with other squadrons to make better use of base capacity, the Air Force could cost-effectively increase the number of 24-aircraft squadrons; (6) all 18-aircraft squadrons could return to their original size of 24 aircraft with little or no effort and expense; (7) if the Air Force consolidates its squadrons, it should keep aircraft with similar modernization, mission characteristics, and engine types together; and (8) at least four alternatives exist to consolidate the Air Force's squadrons that could save between $25 million and $115 million annually.
The Coast Guard is a multi-mission, maritime military service within DHS. The Coast Guard's range of responsibilities includes maintaining the United States' maritime borders, facilitating the global movement of commerce, safeguarding marine resources, and protecting those at sea. To meet its statutory missions, the Coast Guard operates a number of vessels, aircraft, and information technology systems. Many of the assets that the Coast Guard operates were delivered between 1960 and 1992 and are approaching the end of, or have exceeded, the period for which they were expected to perform—known as the assets' service lives. The Coast Guard began a recapitalization effort in the late 1990s to modernize a significant portion of its entire surface and aviation fleet by rebuilding or replacing assets.

The Coast Guard awarded a contract to Integrated Coast Guard Systems (ICGS) in June 2002 to be the systems integrator for the portfolio. The Coast Guard generally provided ICGS with broad, overall performance specifications—such as the ability to interdict illegal immigrants—and ICGS determined the assets needed and their specifications. A central aspect of this effort was to use information technology to connect the Coast Guard's major assets through a single command and control architecture—C4ISR—to improve the accuracy and speed of conducting Coast Guard missions. This system of systems approach was the effort formerly known as Deepwater. In 2002, the Coast Guard conducted an analysis that determined the fleet, as designed by ICGS, would have significant capability gaps in meeting mission requirements that emerged after the September 11, 2001, terrorist attacks. The Coast Guard decided, due to fiscal constraints, not to make significant changes to the ICGS planned fleet, but did approve changes to several assets' capabilities. In 2012, we reported on the Coast Guard's progress in achieving these capabilities, such as adding chemical, biological, and other decontamination capability.

In 2006, the Coast Guard acknowledged that it had relied too heavily on contractors and, citing cost increases, took over the role of lead systems integrator. DHS approved a new baseline in May 2007 that established the total acquisition cost of the Deepwater program at $24.2 billion and projected the Coast Guard would complete the acquisition in 2027. The Coast Guard also reconsidered the planned fleet mix, required to meet the established mission needs, through a series of analyses. We reviewed these analyses in May 2012 and found that the Coast Guard did not consider any assets with less capability than the Deepwater assets and that the Coast Guard used optimistic cost constraints to conclude that it could afford the portfolio. (See GAO, Observations on the Coast Guard's and the Department of Homeland Security's Fleet Studies, GAO-12-751R (Washington, D.C.: May 31, 2012).) In its budget documents, DHS and the Coast Guard no longer use the term "Deepwater" for the program aimed at recapitalizing the Coast Guard's surface, air, and information technology capacity. This effort is now called Coast Guard recapitalization, and it includes many of the assets that made up the portfolio formerly known as Deepwater as well as other major acquisitions. Appendix III shows the estimated cost of programs in the current portfolio.

The Coast Guard has continued to strengthen its acquisition management capabilities when purchasing individual assets.
For example, in response to one of our prior recommendations, the Coast Guard released a January 2013 update to its Major Systems Acquisition Manual to, among other things, better reflect cost and schedule estimation best practices and to clarify the roles and responsibilities of the Executive Oversight Council. The Executive Oversight Council is composed of admirals and senior executives who regularly conduct oversight meetings to govern the acquisition process. As part of the budget process, the Executive Oversight Council provides recommendations to the Investment Board, which are presented to the Investment Review Board—a higher-level group—and ultimately to the Commandant for final investment decisions.

In addition, the Coast Guard has sought to maximize competition in its acquisitions and to buy commercial products when available. For example, the Coast Guard purchased a "reprocurement data licensing package" from Bollinger Shipyards, Inc., that contains the technical specifications and licenses necessary to build the Fast Response Cutter. The Coast Guard is planning to use this information to conduct a full and open competition for the remaining vessels. Our previous work has shown that when the government owns technical data rights, it does not need to rely on a single contractor to meet requirements. Further, the Coast Guard has developed a warranty provision under its contract with Bollinger Shipyards that has held the contractor responsible for production deficiencies. Although the Coast Guard does not always have insight into how much it costs the contractor to fix these issues, officials noted that, after multiple deficiencies interrupted production, they are confident the Coast Guard has received value from this warranty. The Coast Guard plans to use these strategies when purchasing the Offshore Patrol Cutter.

DHS and Coast Guard acquisition guidelines provide that representative units of major acquisition assets should be operationally tested by an independent test agency before they are approved for full-rate production. The Coast Guard uses the Navy's Commander, Operational Test and Evaluation Force (COTF) to conduct operational tests and other evaluations for its major acquisition assets. COTF serves as an independent evaluator of an asset's capabilities and has experience testing Navy assets. Operational testing characterizes the performance of an asset during a discrete period of time, but testers may also use actual mission performance data when available. In conducting operational testing, COTF evaluates an asset's operational effectiveness and suitability: for operational effectiveness, testers determine whether or not an asset can meet its missions; for operational suitability, testers determine whether or not the agency can logistically support the asset to an acceptable standard, such as having the asset available for operations 80 percent of the year. According to DHS and Coast Guard acquisition guidance, results of operational tests are used to evaluate the degree to which the capability or system being acquired meets its requirements and is able to operate in its intended environment, both before and often after full-rate production commences. In addition to verifying that an asset is operationally effective and suitable, operational testing also examines key performance parameters, which are the capabilities considered essential for mission success.
For example, a key performance parameter for the Fast Response Cutter is the ability to reach a top speed of at least 28 knots. According to DHS and Coast Guard acquisition guidance, when programs fail to meet key performance parameters, program managers are required to file breach memorandums stating that the program failed to demonstrate the required performance threshold. Program managers are also required to formally notify Coast Guard leadership, DHS, and certain congressional committees and to file a remediation plan within 30 days that proposes corrective actions to mitigate the issues that resulted in the breach. Within 90 days of filing the breach notification, the program should have accomplished one of the following three actions: (1) re-validated the original baseline parameters, (2) had a new baseline approved that revises the parameters that were breached, or (3) conducted a program review that evaluates the proposed baseline revisions and makes recommendations to the acquisition decision authority.

The Coast Guard's new asset classes that we reviewed—the National Security Cutter, Fast Response Cutter, HC-144, and the C4ISR information technology system—are generally demonstrating improved mission performance over the assets they are replacing, according to Coast Guard officials who operate these assets. For example, these new assets have greater fuel and food capacity, automation, and handling/sea-keeping, all of which increase endurance and effectiveness. However, the Coast Guard has not been able to prove through operational testing that these assets meet key requirements. Of these four newly fielded asset classes, the Fast Response Cutter and the HC-144 completed initial operational testing but did not successfully demonstrate many key requirements during these tests. For example, the Fast Response Cutter did not meet its operational availability requirement because a key engine part failed during testing. DHS and the Coast Guard approved both assets for full-rate production, noting planned improvements, but DHS and Coast Guard acquisition guidance is not clear as to when a program needs to meet minimum performance standards. For example, the guidance does not specify whether the performance standards must be met before entering full-rate production. The National Security Cutter and C4ISR programs have not completed operational testing. The Coast Guard recently conducted testing on the National Security Cutter, although seven of the eight vessels are completed or currently under construction. Based on early assessments and the mission performance of the first three National Security Cutters, the Coast Guard has determined that design changes costing at least $140 million are necessary to meet requirements. Lastly, due to performance, maintenance, and obsolescence issues, the Coast Guard is replacing its initial C4ISR software, which cost about $413 million to develop and field, on the National Security Cutter, HC-144, and HC-130J.

Coast Guard operators and commanding officers in several locations told us that the National Security Cutter, Fast Response Cutter, and HC-144 are performing well during missions and are an improvement over the vessels and aircraft they are replacing. Operators primarily attribute the performance improvements to better endurance and communication capabilities, which help to position and keep these assets in high-threat areas.
Specifically, these new assets have greater fuel capacity and efficiency, engine room and boat launch automation, handling/sea-keeping, and food capacity, all of which increase endurance and effectiveness. Operators stated that these new assets, using information technology systems, can also share pictures and locations of vessels, and communicate more frequently and accurately with shore-based operational commanders than the legacy vessels being replaced. For example, operators said they now use chat rooms on secure networks in addition to radios. These chat rooms improve communication because multiple parties can communicate at the same time and messages remain available on the screen for reference. Figure 1 below compares endurance-related capabilities of the National Security Cutter, Fast Response Cutter, and HC-144 with the assets they are replacing.

According to operators of the National Security Cutter and the Fast Response Cutter, other new capabilities are also increasing operational effectiveness. For example, the Fast Response Cutter has a stern launch and recovery ramp—a space at the end of the vessel that stores and deploys the cutter's small boat and is open to the water. Using this ramp, according to operators, they launch the cutter's small boat in 10 to 15 seconds while the ship is actively pursuing a target. By comparison, the legacy 110' patrol boat requires a significant number of personnel to launch the cutter's small boat using a crane attached to the center of the vessel—a complex process that takes significantly longer and has potential safety risks. The National Security Cutter also has a stern launch ramp, which, in addition to launching and recovering small boats, was used by the ship's crew to hold a seized boat while they dismantled it to find drugs hidden in hard-to-reach compartments. In addition, operators told us that the larger flight deck on the National Security Cutter allows the Coast Guard to more safely operate the helicopter in rougher seas than the legacy vessel and, based upon early demonstrations, conduct unmanned aircraft system operations in conjunction with the helicopter. These and other capability improvements allow Coast Guard operators to more effectively accomplish their missions.

To date, the improved capabilities of the four newly fielded assets have led to mission-related successes, according to Coast Guard asset commanders. For example, officials from Air Station Miami reported that since they began regularly operating the HC-144 in fiscal year 2011, the aircraft has had a significant role in improving the effectiveness of the Coast Guard's counterdrug and alien migrant interdiction operations in this area. In addition, one National Security Cutter completed a 160-day deployment in fiscal year 2013 during which it performed 6 drug interdictions totaling 570 kilograms of cocaine. Cutter officers stated that the ship's intelligence capabilities and the small unmanned aircraft system, which are both new capabilities that are not on the 378' High Endurance Cutter, were crucial to these drug interdictions. In addition, Coast Guard operators stated that the ability to interoperate with foreign navies during joint exercises was greatly enhanced by the communication features on the National Security Cutter.

DHS approved the Fast Response Cutter and HC-144 for full-rate production in September 2013 and October 2012, respectively. However, neither asset met all key requirements during initial operational testing.
The Fast Response Cutter partially met one of six key requirements, while the HC-144 met or partially met four of seven. The Fast Response Cutter was found to be operationally effective (with the exception of its cutter boat) though not operationally suitable, and the HC-144 was found to be operationally effective and suitable. As we have previously found for Department of Defense (DOD) programs, continuing with full-rate production before ensuring that assets meet key requirements risks replicating problems in each new asset until such problems are corrected. DHS officials stated that they approved both assets for full-rate production because the programs had plans in place to address most major issues identified during testing, such as supplying the Fast Response Cutter with a small boat developed for the National Security Cutter. However, DHS and Coast Guard acquisition guidance are not clear regarding when the minimum performance standards should be met, such as prior to entering full-rate production. For example, DHS and Coast Guard guidance provide that the Coast Guard should determine if the capability meets the established minimum performance standards but do not specify when this determination should be made. By comparison, DOD acquisition guidance requires that specific minimum performance standards, which are defined at the time assets are approved for system development, be met prior to entering full-rate production.

In addition, DHS and Coast Guard acquisition guidance do not clearly specify how agency officials determine when a breach occurs and what triggers the need for a program manager to submit a performance breach memo. According to DHS and Coast Guard acquisition guidance, when programs fail to meet key performance parameters, program managers are required to file breach memorandums stating that the program did not demonstrate the required capability. Even though threshold key performance parameters on the HC-144 and Fast Response Cutter were not met during operational testing, the Coast Guard did not report that a breach had occurred. Acquisition guidance is unclear as to whether or not failing to meet key requirements during operational testing constitutes a breach. According to Coast Guard officials, if the Coast Guard plans to re-test an asset or redesign it to correct a deficiency in order to meet the threshold value, then a breach has not yet occurred. For example, the Fast Response Cutter small boat did not meet the threshold seakeeping requirement, but a new cutter small boat has since been tested on its own and fielded to all Fast Response Cutters. The Coast Guard plans to test this new cutter small boat with the Fast Response Cutter during follow-on testing. Program officials are confident that the cutter's new small boat meets this requirement and that, therefore, a breach has not occurred. DHS acquisition guidance specifies the performance criteria used to determine whether or not a breach has occurred but does not identify a triggering event for determining when a breach occurs. DHS's Program Accountability and Risk Management officials stated that a program breach is not necessarily related to its performance during initial operational testing, which they state is a snapshot of a single asset's performance during a defined test period. Without clear acquisition guidance, it is difficult to determine when or by what measure an asset has breached the threshold values of its key performance parameters and, therefore, when to notify DHS and certain congressional committees.
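To make the ambiguity concrete, the sketch below walks through the mechanical part of a key performance parameter check. The 28-knot speed threshold is cited in this report; the availability threshold and the test values are hypothetical stand-ins, and the comments flag, rather than resolve, the open question in the guidance.

```python
# Hypothetical key performance parameter (KPP) check, loosely patterned on
# the Fast Response Cutter discussion above. Values are illustrative only.
kpp_thresholds = {
    "top_speed_knots": 28.0,           # cited in this report
    "operational_availability": 0.80,  # hypothetical threshold
}
test_results = {
    "top_speed_knots": None,           # not tested (fuel oil leak)
    "operational_availability": 0.62,  # hypothetical result reflecting the engine part failure
}

for kpp, threshold in kpp_thresholds.items():
    result = test_results.get(kpp)
    if result is None:
        print(f"{kpp}: not demonstrated (untested)")
    elif result >= threshold:
        print(f"{kpp}: met ({result} vs. threshold {threshold})")
    else:
        # The unsettled question in DHS and Coast Guard guidance: does a
        # below-threshold result here trigger a breach memorandum, or only
        # after planned re-tests and redesigns have also failed?
        print(f"{kpp}: below threshold ({result} vs. {threshold})")
```

The comparison itself is trivial; what the guidance leaves open is which event, the test result or the exhaustion of planned corrective actions, starts the notification clock.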
Specific information on testing outcomes for each asset follows. COTF determined in July 2013 that the Fast Response Cutter, without the cutter's small boat, is operationally effective—meaning that testers determined that the asset enables mission success. The cutter's small boat was determined to not be seaworthy in minimally acceptable sea conditions and, therefore, could not support the cutter's mission set. Further, COTF determined that the Fast Response Cutter is not operationally suitable because a key engine part failed, which lowered the amount of time the ship was available for missions to an unacceptable level. Despite the mixed test results, COTF and DHS testers as well as Coast Guard program officials all agree that the Fast Response Cutter is a capable vessel. Ultimately, COTF recommended that the Coast Guard proceed to field the vessel, but also recommended that the issues with the cutter's small boat be remedied expeditiously and that follow-on operational testing be conducted once corrective actions have been implemented. Since the test, the Coast Guard has delivered a new small boat that meets the Fast Response Cutter's needs and determined that the engine part failure was an isolated event. The Navy also examined the extent to which the Fast Response Cutter meets key requirements. The test demonstrated that it partially met only one of its six key requirements; the other five requirements did not meet minimum performance levels or were not tested. Table 2 displays each key performance parameter for the Fast Response Cutter, the test results, and a discussion of these results. The Coast Guard proactively sought to test the Fast Response Cutter early in the acquisition process, but early testing limited the ability to fully examine the vessel. For example, the Coast Guard did not test the top speed of the vessel due to a fuel oil leak. As noted above, DHS approved the Fast Response Cutter for full-rate production, but directed the program to develop corrections for the issues identified during operational testing and to verify those corrections through follow-on operational testing by the end of fiscal year 2015.

In July 2012, COTF determined the HC-144 to be operationally effective and operationally suitable and recommended that the Coast Guard continue to field the aircraft. Even though testers expressed confidence in the aircraft's ability to meet its missions, the test also showed that the HC-144 achieved, or partially achieved, four of seven key requirements. Table 3 contains each key performance parameter for the HC-144, the test results, and a discussion of these results. The Coast Guard did not test all key performance parameters, but is pursuing corrections following approval for production. The HC-144 did not meet the minimum performance level for detecting targets at sea with its radar and C4ISR mission system. While the mission system did not meet requirements, the aircraft was considered operationally effective because operators can supplement these systems by looking out of the windows of the aircraft. DHS approved the HC-144 for full-rate production, but directed the program to develop a plan to correct deficiencies. Coast Guard program officials told us that they are addressing the deficiencies discovered through the test as funding becomes available and through changes in operational tactics.
According to the officials, the HC-144 program will likely be truncated because the Coast Guard is receiving similar assets (C-27 aircraft) from the Air Force at no cost, which would render the HC-144's full-rate production decision inconsequential.

The Coast Guard has some knowledge about the performance of the National Security Cutter, gained through operational deployments and preliminary test events, and the field portion of operational testing was recently conducted. The Coast Guard has been operating the vessel since 2008, conducted a preliminary operational test in 2011, and has received certifications to fully operate and maintain helicopters as well as, according to officials, to use the cutter's information technology systems on protected networks. In addition, Coast Guard program officials stated that the National Security Cutter has demonstrated most of its key performance parameters through a myriad of non-operational tests and assessments, but a few key performance parameters, such as those relating to the endurance of the vessel and its self-defense systems, have yet to be assessed. Verification of an asset's ability prior to operational testing may be beneficial, but, as we have previously found, only operational testing can ensure that an asset is ready to meet its missions.

Prior to testing, the Coast Guard encountered several issues that require retrofits or design changes to meet mission needs, based upon operations, certifications, and non-operational testing. The total cost of these changes is not yet known, but changes identified to date have totaled approximately $140 million, about one-third of the production cost of a single National Security Cutter. The Coast Guard must pay for all of these and future changes due to the contract terms under which the first three ships were constructed and because the warranty on the remaining ships does not protect the Coast Guard against defects costing more than $1 million. Table 4 lists the retrofits and design changes costing more than $1 million. The table does not include all changes because the Coast Guard did not have data for some of the modifications. In addition to the $140 million in identified changes, the Coast Guard has established a program to supply the National Security Cutter with cutter small boats for an additional $52.1 million because the small boats originally planned to be delivered with the vessel did not meet requirements.

Additional changes may be needed because the Coast Guard has not fully validated the capabilities of the National Security Cutter, though seven vessels have been delivered or are in production. This situation could result in the Coast Guard having to spend even more money in the future, beyond the current changes, to ensure the National Security Cutter fleet meets requirements and is logistically supportable. For example, the cutter is experiencing problems operating in all intended environments. The National Security Cutter requirements document states that the cutter will conduct assigned missions in a full spectrum of climate and maritime weather conditions, to include tropical, dry, temperate, and arctic climates. This document adds that although the National Security Cutter will operate in regions in which ice is frequently encountered, it will not have an ice-breaking mission. However, Coast Guard engineering reports from December 2012 discuss problems operating in both warm and cold climates.
These reports discuss several warm weather problems, including cooling system failures, excessive condensation forming "considerable" puddles on the deck of the ship, and limited redundancy in the air conditioning system, which, among other things, prevents the use of information technology systems when the air conditioning system needs to be serviced or repaired. In addition, according to operational reports, during a recent deployment the Commanding Officer of a National Security Cutter had to impose speed restrictions on the vessel because of engine overheating when the seawater temperature was greater than 77 degrees. Cold climate issues include the National Security Cutter not having heaters to keep oil and other fluids warm during operations in cold climates, such as the arctic. Further, Coast Guard operators state that operating near ice must be done with extreme caution since the ice can move quickly and can "spell disaster" if the National Security Cutter comes in contact with it. Senior Coast Guard officials acknowledged that there are issues to address and stated that the Coast Guard has not yet determined what, if any, fixes are necessary because that determination depends on where the cutter ultimately operates.

The Coast Guard does not plan to operationally test the C4ISR system's key performance parameters. The Coast Guard initially planned to test the C4ISR system separately from the operational testing of its planes and vessels, such as the HC-144 and Fast Response Cutter. Coast Guard officials then decided to test the C4ISR system in conjunction with the planes and vessels to save money and avoid duplication. However, the C4ISR system was not specifically evaluated during the HC-144 and Fast Response Cutter tests because testing the effectiveness and suitability of the C4ISR system was not fully integrated into the assets' test plans. For example, the HC-144 was unable to meet its key requirement for detection, which uses the C4ISR software in conjunction with the HC-144's radar and other sensors. In addition, COTF found that the HC-144's process for detecting and sharing target data was cumbersome and time-consuming. These results were not evaluated against the C4ISR system's requirements. While testing the C4ISR system at the same time as the assets can work, this strategy is not consistent with Coast Guard acquisition guidance if the C4ISR system's key performance parameters are not tested. Acquisition guidance states that the Coast Guard should test the C4ISR system, as it does all major acquisitions, to ensure it is operationally effective, operationally suitable, and meets its basic requirements. By not testing the system, the Coast Guard has no assurance that it is purchasing a system that meets its operational needs. In responding to a draft of this report, the Coast Guard stated that it now plans to test the C4ISR system's key performance parameters during follow-on testing for the National Security Cutter.

The Coast Guard has also encountered several issues with the C4ISR system that have required significant and costly changes, including replacing the original system. The original C4ISR system, which cost $413 million to develop and field, was designed and built as a tightly integrated system bundling large commercial and government software programs with contractor-proprietary software, which made it difficult and costly to maintain—primarily due to its unique characteristics and large size.
For example, according to program officials, the Coast Guard relied on the contractor to conduct even basic system updates, which required new software code because of how the system was integrated. As a result, in 2010, the Coast Guard began replacing the C4ISR software in two steps. First, to address immediate issues, the Coast Guard separated the weapons and command and control/navigation portions of the software but maintained the ability to share data between these portions of the system. Second, the Coast Guard has developed and is now installing a new software package that shares data between proven systems, which makes the system easier to maintain. For example, the communication/navigation system is largely based upon the Navy's Global Command and Control System, a long-standing system maintained by DOD. In addition, the combat system is adapted from the Navy's Aegis system. While the previous version of the C4ISR system also contained this software, the Coast Guard's new configuration keeps these systems independent to improve performance and maintenance, while still allowing data to be passed back and forth between the software packages within the system. The Coast Guard has spent nearly $2 million to develop this new system, called Seawatch, which will have to be further developed for each asset on which it is fielded. For example, it will cost an additional $88.5 million in acquisition funds to purchase the software and hardware needed to field the system on the National Security Cutters. In addition, the Coast Guard is replacing the mission systems on the HC-144 and HC-130J airframes with a proven Navy system to address obsolescence, maintenance, and performance issues. Initial cost estimates are being developed for this project.

As acquisition program costs increase across the portfolio, consuming significant amounts of funding, the Coast Guard is farther from fielding its planned fleet today than it was in 2009, in terms of the money needed to finish these programs. In 2009, we found that the Coast Guard needed $18.2 billion to field its original baseline, but it now needs $20.7 billion to finish fielding these same assets. For example, the estimated funding needed to complete the National Security Cutter increased by $2.2 billion since original estimates. Given these cost increases and funding constraints, the Coast Guard and key stakeholders have acknowledged that the Coast Guard's acquisition portfolio is not affordable but, thus far, efforts to address this issue have not led to the significant trade-off decisions needed to improve its affordability. To balance its portfolio, Coast Guard budget officials stated that they use the 5-year Capital Investment Plan. However, this plan presents data in a manner that makes the portfolio appear more affordable than it really is. For example, in the Fiscal Years 2014 through 2018 Capital Investment Plan, the Coast Guard proposed purchasing two Fast Response Cutters per year, instead of four or six per year, but did not capture the up to $800 million in total cost increases associated with this reduced quantity. Figure 2 shows the total cost of and cost to complete the Coast Guard's original 2007 baseline in 2009 and 2014.
This is the result of $11.3 billion in cost increases realized since 2007 for these programs, according to the most recent program baselines. For example, the Coast Guard experienced a $2.2 billion cost increase to the National Security Cutter project since the 2007 estimate. In addition, the anticipated cost to complete the Offshore Patrol Cutter has increased by $4 billion since 2007 and, therefore, will also consume a significant portion of future funding. Since our last review, the Coast Guard, in conjunction with DHS, has updated many of its cost estimates. Senior Coast Guard acquisition officials told us that many of the cost increases are due to changes from preliminary initial estimates and that they expect to meet their current cost estimates. However, the Coast Guard has yet to construct the largest asset in the portfolio—the Offshore Patrol Cutter—and if the planned costs for this program increase, difficulties in executing the portfolio as planned will be further exacerbated.

Coast Guard, DHS, and OMB officials have acknowledged that the Coast Guard cannot afford to recapitalize and modernize its assets in accordance with the current plan at current funding levels. According to budget documents, Coast Guard acquisition funding levels have been about $1.5 billion for each of the past 5 years, and the President's budget requests $1.1 billion for fiscal year 2015. At the same time, DHS is struggling to match acquisition needs with available resources across all of its component agencies, including the Coast Guard. Coast Guard acquisitions comprise about 16 percent of the total DHS acquisition budget. In a December 2012 memo signed by the Chief Financial Officer, DHS estimated that funding requirements for all of its major acquisitions exceed available resources by 30 percent. OMB officials have also told us that they recognize that the Coast Guard's acquisition portfolio is not affordable at current funding levels given the fiscal constraints faced by all federal agencies.

Efforts are underway to address this issue, but, so far, these efforts have not led to the significant trade-off decisions needed to improve the affordability of the Coast Guard's portfolio. A senior Coast Guard official recently stated that external reviews of the Coast Guard's planned acquisitions have been conducted by DHS and White House organizations, such as the President's Policy Councils, and that, often, additional demand for Coast Guard missions is identified rather than reductions decided upon. OMB officials stated that these reviews are not conducted in conjunction with budget policy and do not incorporate capital investment strategies. Examples of the steps OMB, DHS, and the Coast Guard have taken to address the affordability of the Coast Guard's acquisition portfolio are described below:

OMB conducts annual performance and mission-based reviews of the Coast Guard, in conjunction with other White House staff, as part of the annual budget process. OMB officials told us that there has been little progress in efforts to identify the trade-offs that would make the recapitalization portfolio more affordable, such as adjusting the quantities or capabilities of assets needed to meet mission needs. The officials stated that reviews regarding the fiscal year 2015 budget process were focused heavily on the sequestration funding caps and, therefore, did not focus on long-term issues.
DHS has conducted two annual Coast Guard acquisition portfolio reviews, but according to DHS program reviewers, the most recent review—scheduled for September 2013—was cancelled as a result of the lapse in federal government appropriations. According to a DHS official who led the reviews, the earlier reviews provided updates to DHS leadership on the status of the Coast Guard’s acquisitions and efforts to address affordability, but no trade-off decisions were made to reduce planned quantity or capability. DHS officials told us that the Secretary recently directed a review of the Coast Guard’s acquisition portfolio over the next 20 years. We have previously reported that DHS has taken steps to address affordability issues at acquisition decision events, but it has rarely directed affordability trade-offs. In the case of the Fast Response Cutter, DHS approved the vessel for full-rate production in September 2013 even while acknowledging that the cutter faces affordability challenges and that the program did not meet DHS’s requirement to verify that sufficient funding is available. DHS has proposed two consecutive budgets, one before and one after the production decision, with a funding level for the Fast Response Cutter that supports purchasing two cutters per year rather than the four cutters per year that form the basis for the cost and schedule estimates in the asset’s acquisition program baseline. We have previously reported on the Coast Guard’s efforts to address affordability and recommended that the Coast Guard develop a plan to match needs and resources. In response to our recommendation in September 2012, DHS stated that the Coast Guard is developing a process to make trade-off decisions that will result in a portfolio that contains a balanced mix of assets that meets mission needs within affordability constraints. However, the Coast Guard has yet to document how this new process will work and it is not clear who in the Coast Guard has the authority to make trade-off decisions. Officials who support the Executive Oversight Council stated that the goal is to better inform Council members so that they understand the full consequences of annual budget decisions. These officials told us that they are striving to establish this process in time to inform the fiscal year 2016 budget. While the Coast Guard continues to concur with our previous recommendation that the Executive Oversight Council should be closely involved in making trade-off decisions to balance the portfolio, the Coast Guard could not provide documentation that this group has made any decisions to balance needs and funding as of May 2013. In addition, Coast Guard budget officials told us that the Executive Oversight Council does not have full authority to make these decisions, as final decisions are made by the Commandant, in conjunction with the Investment Review Board. The Coast Guard’s Fiscal Years 2014 through 2018 Capital Investment Plan complies with the law specifying its contents. Each year, the Coast Guard is required to submit a 5-year Capital Investment Plan to certain congressional committees when the President’s budget is submitted. This plan is required to include, among other things, the appropriations in the current budget, projected funding levels for each of the next five fiscal years, and estimated total cost and schedule in current program baselines. 
To date, the Coast Guard has not submitted the Fiscal Years 2015 through 2019 plan, which was due in conjunction with the President's Budget delivered in March 2014. The law does not require the Coast Guard to include the total cost of its projects at planned funding levels. In the Fiscal Years 2014 through 2018 Capital Investment Plan, cost and schedule totals did not match the funding levels presented for many programs. For example, the plan proposed lowering the Fast Response Cutter procurement to two per year but still showed the total cost and schedule estimates for purchasing three or six per year—suggesting that this reduced quantity would have no effect on the program's total cost and schedule. Given that decreasing the quantity purchased per year would increase the unit and total acquisition cost, the Coast Guard estimated that the decision to order fewer ships will likely add $600 million to $800 million in cost and 5 years to the cutter's final delivery date, but this information was absent from the plan. Coast Guard officials stated that they are required to report the assets' cost and schedule per the acquisition program baseline. However, these officials also acknowledged that this plan does not consistently reflect current cost and schedule estimates or the effects of the trade-offs that are made as part of the annual budget cycle. Reporting total cost and delivery dates that do not reflect funding levels could lead to incorrect conclusions about the effect of these decisions on the program's total cost and schedule. That is, Congress may conclude that the Coast Guard's acquisition portfolio is more affordable than it actually is.

The Coast Guard is repeatedly delaying and reducing its capability through its annual budget process, but does not know the extent to which its mission needs can be tailored and still achieve desired results. Thus, its ability to meet future needs is uncertain. For example, the Coast Guard has already experienced a gap in heavy icebreaking capability and is falling short of meeting current and future major cutter operational hours. These capability gaps may persist because funding replacement assets will remain difficult at current funding levels. A key indication of this situation is that several current and additional acquisitions will have to compete for a small percentage of the Coast Guard's acquisition funding between 2018 and 2032 while the Offshore Patrol Cutter is being built. This asset will likely absorb about two-thirds of the Coast Guard's acquisition funding during this timeframe. The Coast Guard does not have a long-term plan that demonstrates how it will maintain today's service level and meet identified needs. While making annual budget decisions, the Coast Guard is pursuing some cost-effective means of providing specific capabilities, though it has yet to fully realize potential savings. As the Coast Guard continues to make decisions through the budget process, it is experiencing capability gaps in the following areas:

Icebreakers—According to program officials, due to funding constraints, the Coast Guard chose not to invest in either of its heavy icebreakers as they approached the end of their service lives. Thus, both heavy icebreakers were out of service from 2010 to 2013, and the Coast Guard could not complete missions such as resupplying a science laboratory in Antarctica.
The Coast Guard has recently returned one of these heavy icebreakers to service, but still has one fewer heavy icebreaker than it has historically operated and several fewer than it needs, according to the Coast Guard’s June 2013 heavy icebreaker mission need statement. River Buoy Tenders—The Coast Guard is also facing a gap in its river buoy tender fleet, and the Coast Guard has yet to formalize an acquisition project to replace this fleet, which is estimated to cost over $1.5 billion. Drug Interdiction Performance—The Coast Guard and DHS Inspector General recently reported that the Coast Guard was not able to meet the target for its drug interdiction mission performance measure for four of the last five years because of potential factors including the advancing age and deteriorating condition of the Coast Guard’s cutter fleet. For more information, we will be issuing a report this spring that discusses the resources provided by the Coast Guard for drug interdiction operations in 2013. Major Cutter and Patrol Boat Hours—The Coast Guard is also currently experiencing a performance gap in its major cutter and patrol boat fleets. The Coast Guard’s major cutter fleet—composed of the National Security Cutter and the in-service high and medium endurance cutters—must operate 136,620 hours per year to meet its missions. In fiscal year 2013, partly due to sequestration, the Coast Guard’s major cutter fleet operated 99,342 hours—falling 27 percent short of its goal. The Coast Guard estimates that it would have been 6,078 hours short of its needs even if sequestration had not been in effect. The Coast Guard’s patrol boat fleet operated for 178,000 hours last year, falling short of its 247,000-hour goal. The Coast Guard would have also fallen short of this goal even if sequestration had not been in effect. In addition, there is little room in its budget to deal with unexpected developments in operations. For example, in 2012, the Commandant wrote about the emerging need for established forces in the Arctic, but the Coast Guard’s major cutters may need additional equipment to operate in these areas. The Coast Guard may fall even further below its operational hour goal for major cutters as the Offshore Patrol Cutter is being built. The Coast Guard has stated that delays in the delivery of the Offshore Patrol Cutter will lead to greater operational capacity shortfalls due to increased downtime for maintenance and other issues that reduce the current medium endurance cutters’ operational availability. For example, in 2013, three 210’ medium endurance cutters had to be put in a dry dock for emergency hull repairs. Coast Guard engineers stated that repairs like these are likely to become more frequent as these assets age. Even after the Coast Guard builds the Offshore Patrol Cutter, it may not achieve the 136,620-hour goal. To meet this goal, the Coast Guard needs the National Security Cutter and the Offshore Patrol Cutter to operate for a total of 4,140 hours each year. The National Security Cutter is currently operating 3,330 hours per year, and the Coast Guard has a plan to increase this to 3,830 hours per year by fiscal year 2017. However, Coast Guard operators have significant concerns about maintaining the vessel at this high tempo, primarily because of logistics and personnel issues. According to officials, the Coast Guard is still planning to operate the National Security Cutter and Offshore Patrol Cutter 4,140 hours per year by using a crew rotation concept. 
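The operational-hour figures above imply the shortfall percentages directly. The short Python calculation below simply restates the report’s arithmetic; every number in it is taken from the text.

```python
# Restatement of the operational-hour arithmetic above; all figures are
# taken directly from the text.
major_cutter_goal = 136_620     # required major cutter hours per year
major_cutter_actual = 99_342    # hours operated in fiscal year 2013
shortfall = major_cutter_goal - major_cutter_actual
print(f"major cutters: {shortfall:,} hours short "
      f"({shortfall / major_cutter_goal:.0%} of goal)")           # ~27%

patrol_goal, patrol_actual = 247_000, 178_000
patrol_shortfall = patrol_goal - patrol_actual
print(f"patrol boats: {patrol_shortfall:,} hours short "
      f"({patrol_shortfall / patrol_goal:.0%} of goal)")          # ~28%

# Per-vessel gap to the 4,140-hour-per-year employment target.
nsc_now, nsc_plan_2017, target = 3_330, 3_830, 4_140
print(f"NSC gap to target: {target - nsc_now} hours today, "
      f"{target - nsc_plan_2017} hours under the fiscal year 2017 plan")
```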
We are currently conducting a review of National Security Cutter operations, including the status of implementing rotational crewing. As the budget process takes the place of a knowledge-based acquisition process, the Coast Guard is repeatedly delaying and reducing its portfolio on an annual basis to address budget constraints, rather than pursuing an affordable set of long-term needs. This approach puts pressure on future budgets and delays fielding capability, which may reduce planned performance. Despite these delays, the Coast Guard continues to follow its current plan, but does not know the extent to which this plan can be tailored through the budget process and still achieve desired results. Thus, the Coast Guard does not know what capability it will be able to provide and whether or not this capability will meet mission needs. We have previously found that by continuing to pursue only a portion of planned capability without re-evaluating the portfolio as a whole, the Coast Guard further increases the risk that it may not accomplish its mission needs. According to best practices, agencies should implement a knowledge-based acquisition approach to pursue a long-term set of affordable needs. We have previously found that acquisitions that continue without this knowledge frequently experience poor outcomes. Without such an approach, the Coast Guard lacks reasonable assurance that its assets will meet established cost, schedule, and performance baselines and, in turn, lacks a sound basis for investment decisions. If funding levels remain constant, several current and additional acquisitions will have to compete for a small percentage of the Coast Guard’s acquisition funding between 2018 and 2032 while the Offshore Patrol Cutter is being built. According to current funding levels and cost and schedule estimates, the Offshore Patrol Cutter will absorb about two-thirds of the Coast Guard’s acquisition funding during this timeframe. Primarily due to a 14-year delay to the Offshore Patrol Cutter and a 10-year delay to the Fast Response Cutter realized since 2007, the Coast Guard is now in the position of having to continually rebuild its assets rather than rapidly modernize as was originally planned. Thus, the Coast Guard has a number of significant additional programs that will require funding while the Offshore Patrol Cutter, Fast Response Cutter, and other assets in the current portfolio are still being built. The Coast Guard is in the process of assessing its needs in many of these areas. These potential acquisitions fit into three categories: Surface Fleet Recapitalization—This project includes conducting a service life extension program for the thirteen 270’ medium endurance cutters, replacing or extending the Coast Guard’s 87’ coastal patrol boat fleet (73 cutters), and funding other sustainment projects for vessels that are in service, such as the Coast Guard’s large fleet of river buoy tenders. As discussed earlier, the Coast Guard is also looking into additional icebreaker investments beyond the current single heavy icebreaker program, as the medium icebreaker will also need to be replaced or extended during this period. Aircraft Recapitalization—The primary aircraft need will be replacing or extending the MH-60 and MH-65 helicopter fleets, which approach a life-limiting milestone between 2022 and 2026. Regardless of the future path, significant acquisition dollars will be required to maintain annual flight hours for the next 20 years, according to Coast Guard program officials. 
Another significant project, these officials added, will be replacing the C4ISR system on the Coast Guard’s aircraft—some of which need new systems, while other systems need to be replaced due to obsolescence. According to Coast Guard program officials, the prototypes are planned to be completed by the end of fiscal year 2016, at which point the new mission systems will need funding for production. Additional Costs for New Assets—As with other cutter classes, the Fast Response Cutter and the National Security Cutter will need to undergo planned repair and maintenance work when the fleets reach their service life midpoints, beginning in 2025 and 2028, respectively. The Coast Guard cannot skip these maintenance periods; they are needed to overhaul major components because older equipment is not supported over a cutter’s 30-year service life. In addition, an estimated $431 million will be needed to upgrade the intended home ports from which the Offshore Patrol Cutter will operate. The Coast Guard is not currently required to develop a long-term fleet modernization plan that considers its current service levels for the next 20 years in relation to its expected acquisition funding. Without such a plan, the Coast Guard does not have a mechanism to aid in matching its requirements and resources. For example, the Coast Guard does not know if it can meet its other acquisition needs while the Offshore Patrol Cutter is being built, which according to current plans will conclude in about 20 years. In addition, as we have previously found, the Coast Guard is deferring costs—such as purchasing unmanned systems or replacing its buoy tender fleet—that could lead to an impending spike in the requirement for additional funds. The Coast Guard has no method in place to capture the effects of deferring such costs on the future of the acquisition portfolio. The Coast Guard’s acquisition guidance supports using a long-range capital planning framework. According to OMB capital planning guidance referenced by the Coast Guard’s Major Systems Acquisition Manual, each agency is encouraged to have a plan that defines its long-term capital asset decisions. This plan should include, among other things, (1) an analysis of the portfolio of assets already owned by the agency and in procurement, (2) the performance gap and capability necessary to bridge the old and new assets, and (3) justification for new acquisitions proposed for funding. OMB officials stated that they support DHS and the Coast Guard conducting a long-term review of the Coast Guard’s acquisitions to assess the capabilities it can afford. Examples of other fleet modernization plans include the Navy’s annual naval vessel construction plan (also known as the Navy’s long range shipbuilding plan), which reflects the quantity and categories of assets that the Navy needs to buy as well as the total number of assets in operation for each year. While we have previously noted challenges associated with the Navy’s plan, we also observed that such a plan is beneficial in that it lays out a strategic approach for decision making. A long-term plan can enable trade-offs to be seen and addressed in advance, leading to better informed choices and making debate possible before irreversible commitments are made to individual programs. Without this type of plan, decision makers do not have the information they need to better understand the Coast Guard’s long-term outlook. 
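To illustrate the kind of gap analysis such a long-range plan enables, the minimal sketch below compares hypothetical yearly acquisition needs against an assumed flat funding level and flags the years in which trade-off decisions would be required. Every figure in it is assumed for illustration, not drawn from Coast Guard data.

```python
# A minimal sketch of the gap analysis a long-range capital plan enables:
# compare projected acquisition needs against an assumed flat funding
# level, year by year, so trade-offs are visible in advance. All figures
# are hypothetical (millions of dollars).
annual_funding = 1_500          # assumed flat acquisition budget, $M/year

needs = {                       # hypothetical yearly acquisition needs, $M
    2018: 1_400,
    2019: 1_700,
    2020: 1_900,
    2021: 2_100,
    2022: 1_800,
    # ...remaining years of the 20-year horizon would follow
}

for year, need in sorted(needs.items()):
    gap = need - annual_funding
    if gap > 0:
        print(f"{year}: need ${need:,}M vs ${annual_funding:,}M "
              f"-> shortfall of ${gap:,}M; trade-off decision required")
    else:
        print(f"{year}: need ${need:,}M vs ${annual_funding:,}M -> funded")
```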
In its naval vessel construction plan, the Navy also assesses capability gaps and planned construction over the short term, middle term, and long term—each a 10-year period in the plan. The Secretary of Defense transmits the plan to Congress to aid in decision making. As a result, the Navy has some knowledge of its future funding challenges. For example, the Congressional Budget Office estimates that if the Navy continues to receive the same percentage of DOD funds for shipbuilding as it has in the past, the Navy can fund only 70 percent of the current long-range plan. When we discussed such an approach with the Coast Guard, the response was mixed. Some Coast Guard budget officials stated that such a plan is not worthwhile because the Coast Guard cannot predict the level of funding it will receive in the future. However, other Coast Guard officials support the development of such a plan, noting that it would help the Coast Guard better understand the effects of funding decisions. Without such a plan, it will remain difficult for the Coast Guard to fully understand the extent to which future needs match the current level of resources and its expected performance levels—and capability gaps—if funding levels remain constant. The Coast Guard is currently pursuing cost-effective alternatives that could begin the process of building a viable long-term modernization plan. Cutter-Based Unmanned Aircraft Systems—The Coast Guard is in the process of demonstrating a small unmanned aircraft system on the National Security Cutter and, to date, these demonstrations have shown that a smaller system is feasible. In contrast to the 2007 estimate of $503 million, the Coast Guard preliminarily estimates that it can outfit each of the planned eight National Security Cutters with two unmanned aircraft and a control station for $48 million. However, according to Coast Guard officials, it is too early to fully understand the costs. Once this system is purchased, the Coast Guard still plans to pursue a larger system in conjunction with the Navy that meets all of the Coast Guard’s requirements. Land-Based Unmanned Aircraft System—The Coast Guard has also begun a partnership with U.S. Customs and Border Protection to share and operate that component’s 10 land-based unmanned aircraft systems. In the past year, the Coast Guard has been able to use this asset to conduct over 500 hours of surveillance for Coast Guard missions, and officials expect that this number may increase. While this program is growing, the Coast Guard continues to pursue its own land-based unmanned aircraft. Heavy Icebreaker—The Coast Guard is working closely with international and U.S. agency partners to gain knowledge to support its heavy icebreaker acquisition. So far, while more than 10 U.S. agencies, such as the Navy and the National Science Foundation, have requirements for a heavy icebreaker, no plans have emerged for funding this vessel. As the Coast Guard’s newest assets move through operational testing, they are demonstrating capability, but problems have been identified. This is not unexpected; identifying problems is the purpose of the testing. In general, project and acquisition oversight officials evaluate these test results, among other data, and make a business case as to whether the government is taking on undue risk by mass producing these assets. 
This approach can be reasonable, but the parameters for making this case—including defining when an asset must meet a minimum level of acceptable performance prior to this decision and determining at what point a breach occurs—are not clearly set forth in Coast Guard or DHS guidance. Moreover, without a defined point in the acquisition process by which the Coast Guard must satisfy minimum requirements, the breach process, with regard to performance, loses meaning. Further, the Coast Guard no longer plans to operationally test the C4ISR system—always intended to be a linchpin of the recapitalization program—even though such testing is required of all major acquisitions. Without testing to ensure that these systems meet minimum performance standards, the Coast Guard cannot ensure that they meet mission needs and that the taxpayer receives a good value for the investment. As the Coast Guard has continued to refine cost estimates for its major acquisitions, it is realizing that the cost of its acquisition portfolio has grown and is now much greater than initially planned. This increased cost is consuming a large portion of the Coast Guard’s acquisition budget. Our previous recommendations, regarding the need for a process to make the trade-off decisions needed to balance resources and needs, still stand. In the meantime, however, the extent of expected costs—and how the Coast Guard plans to address them through budget trade-off decisions—is not being clearly communicated to Congress. The mechanism in place for reporting to certain congressional committees, the Capital Investment Plan, does not reflect the full effects of these trade-off decisions on the total cost and schedule of its acquisition programs. This information is not currently required by statute, but without it, decision makers are unable to understand the full extent of funding that will be required to complete the Coast Guard’s planned acquisition programs. A pressing concern the Coast Guard faces is that the growing affordability gap for its major acquisitions will be exacerbated by impending requirements and capability needs. Annual budget decisions and the cost-saving measures the Coast Guard is pursuing may be sufficient for the short term, but they do not position the Coast Guard to address future needs. In other words, short-term budget decisions may not amount to a good long-term investment strategy. Without a long-term plan that sets forth needed capabilities and the funding it will take to meet them, the Coast Guard is not well positioned to identify how it will meet these mission needs. A long-term plan of this nature is particularly critical in light of the looming Offshore Patrol Cutter procurement, which is currently estimated to account for about two-thirds of the acquisition budget. To help ensure that it receives accurate information on the full effect of funding decisions on acquisition programs, Congress should consider amending the law that governs the 5-year Capital Investment Plan to require the Coast Guard to submit cost and schedule information that reflects the impact of the annual President’s budget request on each acquisition across the portfolio—in addition to the current practice of reporting the cost and schedule estimates in current program baselines. 
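As a rough illustration of the kind of budget-impact information the Capital Investment Plan could carry, the sketch below recomputes a program’s completion date at the funding level actually requested rather than at the baseline rate. Only the two-versus-four-cutters-per-year rates come from the text; the start year and remaining quantity are hypothetical.

```python
import math

def completion_year(start_year, remaining_units, units_per_year):
    """Year in which the final unit would be delivered at a given buy rate."""
    return start_year + math.ceil(remaining_units / units_per_year)

remaining = 20   # hypothetical number of cutters still to be purchased
baseline = completion_year(2014, remaining, units_per_year=4)   # baseline rate
budgeted = completion_year(2014, remaining, units_per_year=2)   # budgeted rate
print(f"completion at baseline rate: {baseline}")
print(f"completion at budgeted rate: {budgeted} "
      f"(a {budgeted - baseline}-year slip)")
```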
To ensure that Congress and other decision makers are properly informed regarding the status of programs, we recommend that the Secretary of Homeland Security and the Commandant of the Coast Guard revise their acquisition guidance by taking the following two actions: Specify when minimum performance standards should be met, such as prior to entering into full-rate production. Clarify the performance data that should be used to assess whether or not minimum performance criteria have been met, prior to full-rate production, to determine whether a performance breach has occurred. To ensure that the Coast Guard’s C4ISR system meets mission needs, we recommend that the Commandant of the Coast Guard take the following action: Assess the operational effectiveness and suitability of the C4ISR system by fully integrating this assessment into other assets’ operational test plans or by testing the C4ISR program on its own. To help the Coast Guard improve the long-term outlook of its portfolio, we recommend that the Commandant of the Coast Guard take the following action: Develop a 20-year fleet modernization plan that identifies all acquisitions needed to maintain the current level of service and the fiscal resources necessary to build the identified assets. The plan should also consider trade-offs if the fiscal resources needed to execute the plan are not consistent with annual budgets. We provided a draft of this report to DHS for review and comment. In its comments, DHS concurred with all of our recommendations. DHS’s written comments are reprinted in appendix II. We also provided draft sections of the report to OMB and COTF, which provided us with technical comments via email; we incorporated their comments as appropriate. Regarding the first two recommendations, on the timing of reporting and actions to be taken when assets do not meet performance standards in testing, DHS stated that it plans to make changes to its acquisition guidance by June 30, 2015. In concurring with the third recommendation, regarding the testing of the C4ISR system, DHS noted that it plans to provide clearer guidance in the next update of its acquisition policy, currently scheduled for June 30, 2015. Additionally, DHS stated that it still plans to test the C4ISR system in conjunction with the vessels and aircraft on which the system is installed. This strategy would be acceptable as long as the Coast Guard incorporates the key performance parameters specifically related to the C4ISR system into the vessel and aircraft test plans. In its response, DHS disagreed in general with our description of the C4ISR system as not meeting goals, noting that, according to the Coast Guard, the original system was closed as a result of obsolescence and not due to performance and maintenance problems. While it is true that much of the original system—developed as part of Deepwater—is obsolete because it was inextricably linked to the commercial vendor’s proprietary software, performance problems were also an issue. We have previously reported on these problems, such as assets not having the capability to share data as envisioned and the system needing to be restarted during operations. In short, the system of systems capability that was the original intent has not been achieved. 
While DHS states that the C4ISR program is one example of where the Coast Guard made tough decisions to provide the greatest capability of equipment while using the least amount of dollars, the Coast Guard invested $413 million to develop and field the original system that is now being replaced with Seawatch. While DHS concurred with our fourth recommendation to develop a 20-year fleet modernization plan, the response does not fully address our concerns or set forth an estimated date for completion, as the response did for the other recommendations. DHS stated that the Coast Guard values long-term planning and can assemble a profile of the anticipated service lives of the various assets and project this information into the future. However, the response also reaffirmed the very reason we made this recommendation—that trade-off decisions considering the cost, schedule, and performance of acquisitions are made during the annual budget process. There is no evidence that these short-term budget decisions will amount to a good long-term strategy and, as we have previously noted, the Coast Guard’s annual, budget-driven approach creates continual churn as program baselines must continually realign with budget realities instead of budgets being formulated to support program baselines. In the case of the Coast Guard, this budget-driven process is pushing tough trade-off decisions—between capability and cost—into the future. Without a long-term plan, as we have recommended, no one knows what taxpayers are ultimately going to get for their approximately $1.5 billion annual investment in Coast Guard acquisitions. We continue to believe that a properly constructed 20-year plan is necessary to illuminate what is feasible in the long term and will also provide a basis for informed decisions that align the Coast Guard’s needs and resources. DHS and the Coast Guard also provided technical comments that we incorporated into the report as appropriate. We are sending copies of this report to the Secretary of the Department of Homeland Security, Commandant of the Coast Guard, and Director of the Office of Management and Budget. In addition, the report is available on our website at http://www.gao.gov. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 13 days from the report date. At that time, we will send copies to your offices. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In conducting this review, we relied in part on the information and analysis in our past work, including reports completed in 2008 through 2012. Additional scope and methodology information on each objective of this report follows. 
To assess how selected assets are performing operationally and to what extent they are achieving desired performance levels in testing, we selected key assets that are being used in operations and were a part of the original 2007 baseline—the Maritime Patrol Aircraft (HC-144), Fast Response Cutter, National Security Cutter, and the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) systems—and reviewed test reports and operational data for these assets. We also reviewed the Coast Guard’s Major Systems Acquisition Manual and Department of Homeland Security (DHS) Acquisition Management Directive 102-01 for regulations and direction governing operational testing. We assessed operational test reports for the HC-144 and Fast Response Cutter to determine what issues were discovered during testing and interviewed officials from DHS’s Science and Technology directorate and the Navy’s Commander, Operational Test and Evaluation Force (COTF) to discuss the results and limitations of these tests and plans for future testing. For the National Security Cutter and C4ISR programs, we reviewed preliminary tests and the retrofits and design changes being made to the systems as a result of knowledge gained through early testing or operations. We compared the results of these tests and operational data with operational requirements documents for each program to determine if these assets are performing as planned. We interviewed Coast Guard officials with the capabilities and resource directorates, and officials and operators with the National Security Cutter, Fast Response Cutter, HC-144, and C4ISR programs to gain a greater understanding of operational challenges and how they are being addressed. We met with National Security Cutter operators at U.S. Coast Guard Base Alameda in Alameda, California, and with the District Commander for the Coast Guard’s Seventh District, Fast Response Cutter operators at Coast Guard Sector Miami, and HC-144 operators at U.S. Coast Guard Air Station Miami in Miami, Florida, and we discussed C4ISR operations aboard each of these assets and how the assets are performing operationally. We interviewed contractor representatives from Huntington Ingalls Industries for the National Security Cutter and Bollinger Shipyards for the Fast Response Cutter and toured their respective shipyards to discuss issues related to the production of these assets. To determine the current cost of the Coast Guard’s acquisition portfolio as well as plans to fund its assets, we reviewed the Coast Guard’s budget and capital investment plan and identified the programs that are currently in its acquisition portfolio. Based upon our definition, the Coast Guard’s current acquisition portfolio consists of all major acquisitions that are planned to receive funding in the current budget year and/or within the next 5 years. We reviewed the approved acquisition program baselines for programs currently in the portfolio to determine their cost and schedule. We compared current baselines to previous baselines to evaluate whether there has been any cost or schedule growth in these programs. In comparing original costs to revised baseline costs, if a revised baseline presents both threshold costs and objective costs, threshold costs were used. 
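The cost-comparison arithmetic just described, together with the cost-to-complete step detailed below, can be sketched in a few lines of Python; all dollar figures here are hypothetical.

```python
# Sketch of the cost-comparison arithmetic described in this appendix:
# threshold costs are used when a revised baseline gives both threshold
# and objective figures, and cost to complete is the current baseline
# total minus funding received through fiscal year 2014 (the step
# described next). All dollar figures are hypothetical (millions).
original_baseline = 4_000
revised_threshold = 4_700        # threshold cost from the revised baseline
revised_objective = 4_400        # objective cost; not used in the comparison
funded_through_fy2014 = 1_900

cost_growth = revised_threshold - original_baseline
cost_to_complete = revised_threshold - funded_through_fy2014
print(f"cost growth:      ${cost_growth:,}M")        # $700M
print(f"cost to complete: ${cost_to_complete:,}M")   # $2,800M
```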
In determining the cost to complete, we took the total estimated cost of the acquisition in its current baseline and subtracted the funding that the program has received as of and including fiscal year 2014. For some assets, such as the HC-130J, which received funding not included in the Coast Guard budget, we derived the cost to complete by totaling the funds required to finish the program based upon the current cost estimate. We also reviewed the Coast Guard’s Major Systems Acquisition Manual for guidance on acquisition program baselines. We interviewed officials from the Office of Management and Budget and the Department of Homeland Security’s Program Accountability and Risk Management directorate and Program Analysis and Evaluation directorate to determine what, if anything, they are doing to balance the Coast Guard’s needs with anticipated funding. To determine the extent to which the Coast Guard is experiencing capability gaps, if any, given known affordability issues, we assessed the Coast Guard’s performance targets and compared these targets with acquisition plans. In addition, we interviewed officials from the Coast Guard’s acquisitions and resource directorates to identify the challenges the Coast Guard faces reaching these targets using current funding levels and to understand actions taken by the Systems Integration Team and Executive Oversight Council to address these challenges. We also reviewed actions the Coast Guard is taking to improve the affordability of recapitalizing its assets. We interviewed officials with the Coast Guard’s acquisition directorate and the program managers for all of the programs currently in the portfolio to discuss the cost of the portfolio and future funding plans. To determine the condition and expected service life of legacy assets, we reviewed Coast Guard analysis of these assets and prior GAO work on legacy assets. We conducted this performance audit from June 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Coast Guard has 11 major acquisition programs in its current portfolio, based on the Fiscal Years 2014 through 2018 Capital Investment Plan. Of these 11 major acquisition programs, 8 were also a part of the 2007 recapitalization portfolio. Over time, the composition of the portfolio has changed. For example, since our last review in 2012, the Coast Guard has added 3 programs to its acquisition portfolio and another 7 programs are ending and, therefore, will no longer need additional acquisition funding. We excluded $3.6 billion in “other costs including project management” from our analysis of the Coast Guard’s current portfolio of assets because these costs are not periodically re-baselined. Thus, the total cost of the original 2007 baseline excluding these costs is $20.563 billion. Table 5 lists the total acquisition cost for each of the programs in the Coast Guard’s current portfolio as well as the cost increases and cost to complete for the programs in the original 2007 baseline. Michele Mackin, (202) 512-4841 or [email protected]. In addition to the contact above, Katherine Trimble, Assistant Director; Laurier R. Fish; Peter W. 
Anderson; William Carrigg; John Crawford; Sylvia Schatz; and Lindsay Taylor all made key contributions to this report. Defense Contracting: Actions Needed to Increase Competition. GAO-13-325. Washington, D.C.: March 28, 2013. Coast Guard: Clarifying the Application of Guidance for Common Operational Picture Development Would Strengthen Program. GAO-13-321. Washington, D.C.: April 25, 2013. Coast Guard: Portfolio Management Approach Needed to Improve Major Acquisition Outcomes. GAO-12-918. Washington, D.C.: September 20, 2012. Observations on the Coast Guard’s and the Department of Homeland Security’s Fleet Studies. GAO-12-751R. Washington, D.C.: May 31, 2012. Coast Guard: Legacy Vessels’ Declining Conditions Reinforce Need for More Realistic Operational Targets. GAO-12-741. Washington, D.C.: July 31, 2012. Missile Defense: Opportunity Exists to Strengthen Acquisitions by Reducing Concurrency. GAO-12-486. Washington, D.C.: April 20, 2012. Coast Guard: Action Needed as Approved Deepwater Program Remains Unachievable. GAO-11-743. Washington, D.C.: July 28, 2011. Coast Guard: Deepwater Requirements, Quantities, and Cost Require Revalidation to Reflect Knowledge Gained. GAO-10-790. Washington, D.C.: July 27, 2010. Coast Guard: As Deepwater Systems Integrator, Coast Guard Is Reassessing Costs and Capabilities but Lags in Applying Its Disciplined Acquisition Approach. GAO-09-682. Washington, D.C.: July 14, 2009. Coast Guard: Better Logistics Planning Needed to Aid Operational Decisions Related to the Deployment of the National Security Cutter and Its Support Assets. GAO-09-497. Washington, D.C.: July 17, 2009. Best Practices: High Levels of Knowledge at Key Points Differentiate Commercial Shipbuilding from Navy Shipbuilding. GAO-09-322. Washington, D.C.: May 13, 2009.
The Coast Guard is managing a multi-billion dollar effort to modernize aging assets, including ships, aircraft, and information technology to provide new capabilities to conduct missions ranging from marine safety to defense readiness. GAO has reviewed the Coast Guard's acquisitions since 2001 and has found it faces challenges managing its portfolio. In 2007, the Coast Guard established a cost baseline of $24.2 billion for 13 assets. GAO was asked to examine the Coast Guard's current and planned acquisition portfolio. This report assesses: (1) operational performance and testing of selected assets; (2) the current cost of the Coast Guard's portfolio and funding plans; and (3) the extent to which the Coast Guard is experiencing capability gaps, if any, given known affordability issues. To conduct this work, GAO analyzed the operational performance and test reports for all 4 newly fielded assets that the Coast Guard planned to test and the costs and capabilities of its major system acquisition portfolio. GAO also interviewed Coast Guard, DHS, and Navy officials. The selected Coast Guard assets that GAO reviewed are generally demonstrating improved performance—according to Coast Guard operators—but GAO found that they have yet to meet all key requirements. Specifically, two assets, the HC-144 patrol aircraft and Fast Response Cutter, did not meet all key requirements during operational testing before being approved for full-rate production, and Department of Homeland Security (DHS) and Coast Guard guidance do not clearly specify when this level of performance should be achieved. Additionally, the Coast Guard changed its testing strategy for the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) system and, as a result, is no longer planning to test the system's key requirements. Completing operational testing for the C4ISR system would provide the Coast Guard with the knowledge of whether this asset meets requirements. As acquisition program costs increase across the portfolio, consuming significant amounts of funding, the Coast Guard is farther from fielding its planned fleet today than it was in 2009, in terms of the money needed to finish these programs. In 2009, GAO found that the Coast Guard needed $18.2 billion to finish its 2007 baseline, but now needs $20.7 billion to finish these assets. To inform Congress of its budget plans, the Coast Guard uses a statutorily required 5-year Capital Investment Plan, but the law does not require the Coast Guard to report the effects of actual funding levels on individual projects and, thus, it has not done so. For example, the Coast Guard has received less funding than planned in its annual budgets, but has not reflected the effects of this reduced funding in terms of increased cost or schedule for certain projects. Without complete information, Congress cannot know the full cost of the portfolio. The Coast Guard has repeatedly delayed and reduced its capability through its annual budget process and, therefore, it does not know the extent to which it will meet mission needs and achieve desired results. This is because the Coast Guard does not have a long-term fleet modernization plan that identifies all acquisitions needed to meet mission needs over the next two decades within available resources. Without such a plan, the Coast Guard cannot know the extent to which its assets are affordable and whether it can maintain service levels and meet mission needs. 
Congress should consider requiring the Coast Guard to include additional information in its Capital Investment Plan. In addition, the Secretary of DHS should clarify when minimum performance standards should be achieved, conduct C4ISR testing, and develop a long-term modernization plan. DHS concurred with the recommendations, but its position on developing a long-term plan does not fully address GAO's concerns as discussed in the report.
FHWA is responsible for administering and overseeing various highway transportation programs, including the Federal-Aid Highway Program—which provides financial assistance to the states for improving the efficiency of highway and traffic operations. FHWA relies on AASHTO to (1) provide technical guidance for the design, construction, and maintenance of highways and other transportation facilities; (2) publish manuals, guides, and specifications regarding design, safety, maintenance, and materials; and (3) conduct planning for highways, bridges, and other structures. Active membership in AASHTO is open to the state departments of transportation of the United States, Puerto Rico, and the District of Columbia. DOT is an active, albeit nonvoting, member. FHWA supports AASHTO’s manuals, guides, and specifications, which the states can use in designing and analyzing federally funded highway projects. In addition, states can use their own pavement design criteria and procedures for such projects, which generally mirror what is in AASHTO’s pavement design guide. Currently, highway pavement design criteria and procedures are documented in AASHTO’s 1993 Guide for the Design of Pavement Structures. AASHTO’s Joint Task Force on Pavements is responsible for the development and updating of the guide. The guide was first issued in 1961 and then updated in 1972, 1981, 1986, and 1993. Another update of the guide is forthcoming. The task force’s efforts to update the guide are overseen by a National Cooperative Highway Research Program (NCHRP) project panel, which functions under the Transportation Research Board (TRB) of the National Academy of Sciences’ National Research Council. While constructing new highways was once the primary goal of state transportation departments, the major emphasis in pavement design in the 1990s has shifted to rehabilitating existing highways. According to NCHRP, the current guide does not reflect this shift in emphasis, and the updated guide is to be the product of an NCHRP/TRB contract with an engineering consulting firm that is expected to be awarded in the near future. Under the contract, the guide would be updated by 2002. In updating the guide, NCHRP intends to improve upon the outdated pavement design procedures contained in the current guide. The current design guide and its predecessors were largely based on design equations empirically derived from the observations AASHTO’s predecessor made during road performance tests completed in 1959-60. Several transportation experts have criticized the empirical data thus derived as outdated and inadequate for today’s highway system. In addition, a March 1994 DOT Office of Inspector General report concluded that the design guide was outdated and that pavement design information it relied on could not be supported and validated with systematic comparisons to actual experience or research. In contrast to the current guide, which relied heavily on an empirical approach to derive its design equations, the NCHRP contract to update the guide by 2002 calls for the use of an approach that would more realistically characterize existing highway pavement usage and improve the reliability of designs. Under the first phase of the contract, which ended in July 1997, Nichols Consulting Engineers developed a detailed work plan for completing the new pavement design guide. When the project manager resigned in June 1997, NCHRP decided to rebid the contract. 
The NCHRP program officer stated that he believes that the new guide will be completed as planned. An existing method called nonlinear 3D-FEM has the potential to significantly improve the design and analysis of highway pavement structures. A number of nonlinear 3D-FEM computer programs have been available since the 1970s that can be used for solving complex structural engineering problems, including designing safer, longer-lasting, more cost-effective highway pavement structures. Nonlinear 3D-FEM is considered by many experts to be superior to current design and analysis methods because values of stresses, strains, and pavement deflections can be calculated accurately for a variety of traffic loads—static, impact, vibratory, and moving mixes of traffic, including multiaxle truck/trailer loads both within and outside legal weight limits. The nonlinear 3D-FEM analysis allows a level of detail that aids in selecting pavement materials and improves the accuracy of determinations of the thickness needed for new, reconstructed, and overlay pavements. This method can be used to analyze pavements for strengthening that may be required for expected traffic loads in the future and for computing the pavements’ remaining structural and operational lives. Several highway departments and academic institutions have already used nonlinear 3D-FEM for various structural analysis applications. The Indiana, Mississippi, and Ohio departments of transportation, for example, have pioneered the use of nonlinear 3D-FEM in pavement design and analysis. Officials of these agencies told us that they are very satisfied with its application on various road systems. In 1995, the University of Mississippi used nonlinear 3D-FEM to analyze jointed concrete pavement under dynamic truck loads and thermal effects. An official from the Mississippi State Department of Transportation told us that this method enabled the state to determine the conditions causing the rapid deterioration of its concrete pavement. Similarly, a senior scientist from a firm specializing in evaluating the integrity of engineering structures told us that, among other things, the finite element method—combined with statistical theory (which factors in uncertainties in material properties)—has been used to predict the expected life of a concrete runway at Seymour Johnson Air Force Base in North Carolina. Because it considers AASHTO’s pavement design guide to be outdated, the School of Civil Engineering, Purdue University, also has been using nonlinear 3D-FEM to analyze various pavement problems. The university has used this method to analyze responses to moving multiaxle truck/trailer loads within and outside legal weight limits on both flexible and rigid pavements. Studies the university has conducted to verify the analyses have shown a strong correlation between field and predicted pavement responses (strains and deflections). More recently, Purdue University conducted a study—including the use of field instrumentation, laboratory testing, field data collection, and subgrade and core sampling—of three asphalt pavement sections with different subdrainage configurations on a portion of Interstate 469 in Ft. Wayne, Indiana. Nonlinear 3D-FEM was used to evaluate the subdrainage performance and the analysis of moisture flow through the pavement. The results of the study indicated a strong correlation between the predicted and field-measured outflows of water. 
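To make the finite element idea concrete, the toy sketch below assembles and solves a linear, one-dimensional stack of pavement layers under a static wheel load. It only illustrates how element stiffness matrices are assembled into a system and solved for deflections; actual pavement analysis of the kind described above uses nonlinear, three-dimensional solid elements, and every number here is hypothetical.

```python
# A toy illustration of the finite element idea underlying 3D-FEM:
# a 1-D column of pavement layers under a static wheel load. Real
# pavement analysis uses nonlinear 3-D solid elements; this linear
# 1-D sketch only shows how element stiffness matrices are assembled
# and solved for nodal deflections. All numbers are hypothetical.
import numpy as np

n_elems = 4                  # four layers stacked vertically
length = 0.15                # each layer 0.15 m thick (assumed)
E = [3000e6, 300e6, 150e6, 80e6]   # Young's moduli, Pa (asphalt -> subgrade)
area = 1.0                   # unit cross-sectional area, m^2
load = -40e3                 # 40 kN wheel load, downward, at the surface

n_nodes = n_elems + 1
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elems):
    k = E[e] * area / length               # axial stiffness of element e
    K[e:e+2, e:e+2] += k * np.array([[1, -1], [-1, 1]])

F = np.zeros(n_nodes)
F[0] = load                                # load applied at the top node

# Fix the bottom node (rigid foundation) and solve K u = F for the rest.
u = np.zeros(n_nodes)
u[:-1] = np.linalg.solve(K[:-1, :-1], F[:-1])
print("surface deflection (m):", u[0])
```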
The effects of high moisture conditions on pavement performance include rutting, cracking, and faulting—leading to increased roughness, unsafe conditions, and a loss of serviceability. A pavement design manager with the Indiana Department of Transportation told us that the Purdue study, using nonlinear 3D-FEM, confirmed that the Department’s previously used subdrainage design procedures resulted in a drainage outflow pipe that was too small—thus limiting moisture outflow. Subdrainage layers with filter layers, a perforated pipe (subdrainage collector pipe), trench material, and an outlet pipe play a key role in reducing the extent and duration of high moisture conditions in pavement structures and their subgrade. The manager said that nonlinear 3D-FEM provided the (1) proper (increased) size of drainage outlet pipe and (2) best, most efficient filter material, which turned out to be less costly than the material previously being used. We were told that Indiana’s Transportation Department is now in the process of adopting nonlinear 3D-FEM as its preferred method for designing subdrainage systems. An Indiana research section engineer also told us that he believes that nonlinear 3D-FEM could be used by all state highway departments to design subdrainage systems. Battelle Memorial Institute recently applied nonlinear 3D-FEM to predict pavement response to a broad range of vehicle loads on 4 miles of newly constructed highway pavement (2 miles southbound and 2 miles northbound) north of Columbus, Ohio. According to a Battelle project scientist and an academician from Ohio University, the results of the heavily instrumented highway test sections showed a strong correlation with the analytical results achieved from nonlinear 3D-FEM. They also told us that nonlinear 3D-FEM is the best computational method to address pavement problems. A chief engineer of the Ohio Transportation Department further told us that the state was pleased with Battelle’s efforts to predict pavement response using the nonlinear method. According to an engineer-advisor with the DOT Inspector General’s Office, AASHTO’s pavement design guide has changed very little over the years. He was of the opinion that new design procedures are needed, incorporating nonlinear 3D-FEM, if FHWA and the states are going to be better able to ensure that highway pavement is constructed, reconstructed, or overlaid according to current FHWA policy that it be safe, durable, and cost-effective. We reviewed the scope of work of the contract NCHRP awarded in December 1996 to Nichols Consulting Engineers for the development of the new guide. The scope of the most recent contract work does not directly cite nonlinear 3D-FEM as a technique that can be used in the design and analysis of highway pavement. In discussions with Nichols’ project manager and with an NCHRP official and in our review of the contractor’s work plan for the guide, we did not find any specific reference that nonlinear 3D-FEM would be investigated for inclusion or exclusion in the 2002 update. Through interviews with FHWA, AASHTO, and NCHRP officials, we attempted to determine why the method was not specifically being considered. We did not receive any explanation. However, the program officer said that while the contractual documentation for this particular effort does not contain specific reference to nonlinear 3D-FEM as a pavement design and analysis method, the documentation does not exclude the use of such a method either. 
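Returning to the subdrainage example, a rough sense of why an undersized outlet pipe limits moisture outflow can be had from a simple capacity check. The sketch below uses Manning's equation for a circular pipe flowing full; it is a back-of-the-envelope illustration, not the nonlinear 3D-FEM moisture-flow analysis the Purdue study performed, and all values are assumed.

```python
# Hypothetical check of whether a subdrainage outlet pipe limits outflow,
# using Manning's equation for a circular pipe flowing full (SI units).
# This is only an illustration; all values are assumed.
import math

def full_pipe_capacity(d, slope, n=0.012):
    """Flow capacity (m^3/s) of a circular pipe of diameter d (m)
    flowing full at the given slope, with Manning roughness n."""
    area = math.pi * d**2 / 4.0
    r_hyd = d / 4.0                    # hydraulic radius of a full circle
    return (1.0 / n) * area * r_hyd**(2.0 / 3.0) * math.sqrt(slope)

inflow = 0.004                         # assumed design outflow, m^3/s
for d in (0.075, 0.10, 0.15):          # candidate outlet pipe diameters, m
    q = full_pipe_capacity(d, slope=0.005)
    status = "OK" if q >= inflow else "too small -- limits moisture outflow"
    print(f"D = {d:.3f} m: capacity = {q:.4f} m^3/s ({status})")
```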
The pavement design guide developed and updated by AASHTO over the years for designing and analyzing highway pavement structures is outdated. NCHRP has undertaken a 5-year effort to update the guide, employing improved design approaches. Research on nonlinear 3D-FEM and documented successes in its application suggest that this method could be an important tool for accurately (1) designing and analyzing new highway pavement structures and (2) analyzing the response of deteriorated pavement structures for rehabilitation. We believe it should be considered in NCHRP’s ongoing efforts to update AASHTO’s current pavement design and analysis guide. The recent decision to rebid the contract for the design guide update provides an opportunity for FHWA to specify the consideration of this method. To better assist states in designing safer, longer-lasting, and more cost-effective new, reconstructed, and overlay highway pavement structures, we recommend that the Secretary of Transportation direct the Administrator, FHWA, to ensure that nonlinear 3D-FEM is considered in the current update of the pavement design guide. We provided a draft of this report to DOT for its review and comment. In written comments dated October 31, 1997 (see app. II), DOT stated that it has maintained a long-standing commitment to ensuring that the nation’s investment in its highway infrastructure is cost-effective. DOT concurred with our recommendation that nonlinear 3D-FEM be considered in the current update of AASHTO’s pavement design guide. DOT stated that it would work with NCHRP to encourage full consideration of the method along with other quantitative analytical methods. As part of its commitment to a cost-effective highway infrastructure, DOT stated that FHWA has supported research efforts at its own Turner-Fairbank Highway Research Center as well as efforts by AASHTO, NCHRP, and TRB. DOT further stated that FHWA is fully aware of and recognizes the potential benefits to highway design offered by 3D-FEM. According to DOT, FHWA has supported the development of this technology at its Turner-Fairbank facility and with individual states through the State Planning and Research program. DOT stated that FHWA considers 3D-FEM to be a very useful research tool for analyzing pavement structures but that it will be up to NCHRP and AASHTO to determine whether the method has achieved the maturity necessary to become a practical engineering tool. We are pleased to hear of DOT’s interest in and acceptance of nonlinear 3D-FEM as an analytical tool for designing and analyzing highway pavement structures. Such interest and acceptance were never made known to us either (1) during discussions we had with the Chief, Pavement Division, FHWA; the project manager, AASHTO; a senior program officer, NCHRP; and the initial contractor’s project manager for the development of the 2002 pavement guide or (2) in documentation we gathered and reviewed during the assignment. We made other clarifying changes to the report as appropriate on the basis of other comments by DOT. We performed our work from May 1996 through October 1997 in accordance with generally accepted government auditing standards. Appendix I contains details on our objectives, scope, and methodology. As you know, 31 U.S.C. 
720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and to the House Committee on Government Reform and Oversight not later than 60 days from the date of this letter and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this letter. We are sending copies of this report to the Administrator, FHWA; the Director, Office of Management and Budget; and appropriate congressional committees. We will make copies available to others upon request. Please call me at (202) 512-2834 if you have any questions. Major contributors to this report are listed in appendix III. The objectives of this review were to (1) describe the roles of the Federal Highway Administration (FHWA) and others in developing and updating the pavement design guide and (2) examine the use and potential of a computer analysis method known as the nonlinear 3-Dimensional Finite Element Method (3D-FEM) for improving the design and analysis of highway pavements. To accomplish these objectives, we first reviewed the American Association of State Highway and Transportation Officials’ (AASHTO) highway pavement guide, which is being used by many state departments of transportation as an aid in designing and analyzing pavement structures, federally funded and otherwise. We reviewed available literature and contacted officials from FHWA, AASHTO, and the Transportation Research Board. We also contacted contractor officials responsible for the development and updates of the pavement design guide. We contacted officials from the Transport Research Laboratory, Crowthorne, Berkshire, United Kingdom, and reviewed its pavement design practices. We contacted officials from the U.S. Army Engineer Waterways Experiment Station, Vicksburg, Mississippi; the Indiana, Mississippi, and Ohio state highway departments; and various engineering consulting firms. We contacted academicians from the University of Arizona, the University of Cincinnati, Florida A&M University-Florida State University, Ohio University, the University of Iowa, the University of Mississippi, the University of Nebraska, and Purdue University, as well as Birmingham University in the United Kingdom. Also, we contacted scientists from Battelle Memorial Institute and Lawrence Livermore National Laboratory. We selected these educational institutions and nonprofit organizations because all have conducted research and development work related to pavement design and analysis and/or the application of nonlinear 3D-FEM for solving structural engineering problems. Furthermore, we performed a literature and database search to identify any individuals who have authored publications on the applications of nonlinear 3D-FEM to highway pavement design and analysis or other structural engineering problems. We discussed with FHWA and others their roles in keeping up with and promoting up-to-date techniques regarding pavement design and analysis. We reviewed FHWA’s pavement policy issued in December 1996, which states that pavements should be designed to accommodate current and predicted traffic needs in a safe, durable, and cost-effective manner. 
More broadly, we used in this review information we obtained through attendance at the Fourth International Conference on the Bearing Capacity of Roads and Airfields held in July 1994 in Minneapolis, Minnesota; the Third Materials Engineering Conference held in November 1994 in San Diego, California; annual Transportation Research Board meetings held in January 1995 and in January 1997 in Washington, D.C.; and the Structures Congress XV held in April 1997 in Portland, Oregon. Dr. Manohar Singh, P.E., Engineering Consultant; Ralph W. Lamoreaux, Assistant Director.
GAO provided information on the: (1) roles of the Federal Highway Administration (FHWA) and others in developing and updating the pavement design guide; and (2) use and potential of a computer analysis method known as the nonlinear 3-Dimensional Finite Element Method (3D-FEM) for improving the design and analysis of highway pavement structures. GAO noted that: (1) FHWA has worked cooperatively with the American Association of State Highway and Transportation Officials (AASHTO) in developing and updating the pavement design guide; (2) the current guide is slated to be updated by the year 2002 to better reflect the changing priority of rehabilitating the nation's highways rather than building new ones; (3) in contrast to the current guide, which many transportation experts believe is outdated, the new guide is expected to incorporate the use of analytical methods to predict pavement performance under various loading and climatic conditions; (4) sponsors believe that a new design approach will more realistically characterize existing highway pavements and improve the reliability of designs; (5) a promising analytical method to accurately predict pavement response is the nonlinear 3D-FEM; (6) only with accurate response data can one reliably predict pavement performance; (7) the use of this method has the potential to improve the design of highway pavements, which encompasses highway safety, durability, and cost-effectiveness, because values of stresses, strains, and deflections (pavement response) can be calculated accurately for a variety of static, impact, vibratory, and moving mixes of traffic loads; (8) several state departments of transportation, academicians, and scientists have pioneered the use of the nonlinear 3D-FEM and are using it to solve a variety of complex structural engineering problems, including the design and analysis of highway pavement structures; and (9) while this is a promising method for improving highway pavement design and analysis, GAO could find no evidence that it is being considered for inclusion in the current design guide update.
The U.S. aviation system, which accounts for 40 percent of all worldwide aviation activity, is the largest in the world. In 1995, the nation’s system of airports served over 580 million passengers. The federal government has financed a considerable portion of this airport infrastructure. However, privatization advocates have suggested that the private sector should assume more of the cost of financing airport development. While there are 18,224 airports in the United States, only 4,172 are publicly owned. Most airports are small, privately owned general aviation airports. However, most airline passenger traffic is at the nation’s largest publicly owned commercial airports. Table 1.1 compares the number of publicly and privately owned U.S. airports in 1995. Of the 4,172 publicly owned airports, 565 (14 percent) are commercial service airports. Commercial service airports (referred to as commercial airports in this report) are legally defined as airports (1) with scheduled passenger service, (2) that annually enplane 2,500 or more passengers, and (3) that are publicly owned. FAA has identified nine additional airports—seven of which are privately owned—that would qualify for commercial status on the basis of the amount of annual passenger traffic but do not qualify because they are privately owned or do not have scheduled airline service. Airline passenger traffic is highly concentrated at the largest commercial airports. The 29 large hub airports accounted for over 67 percent of all passenger enplanements in 1995, the last year for which figures were available. The 42 medium hub airports accounted for another 22 percent of annual enplanements in the same year. Figure 1.1 depicts the concentration of passenger traffic at the largest commercial airports. (Figure 1.1 data: 29 large hubs, 67.2 percent of enplanements; 42 medium hubs, 22.2 percent; 67 small hubs, 7.1 percent.) Public ownership of commercial airports varies. Most public owners are local governments, such as cities or counties. However, in many instances, local and state governments form special governmental entities, such as single-purpose airport authorities or port districts, to manage airports as well as other transportation-related infrastructure. The legal and other relationships between a local or state government and a special governmental entity vary, but most local or state governments exert some level of control over them. A few states, such as Alaska, Hawaii, and Maryland, also own airports. For example, Maryland owns the Baltimore-Washington International Airport. The federal government owns two major airports—Washington Dulles International Airport and Washington National Airport—and has leased them to a public entity, the Metropolitan Washington Airports Authority. Federal grants have played a critical role in building the nation’s airport infrastructure. In addition to receiving grants from the federal Airport Improvement Program (AIP), commercial airports can also impose passenger facility charges (PFC), issue bonds, and generate net income from airport revenue. All of these sources of capital are affected by federal policies. Figure 1.2 depicts the average percentage contribution of each of these sources of capital at 53 large and medium hub airports in 1994. The percentages for large hubs do not add to 100 because of rounding. For smaller airports, federal grants constitute a larger portion of their total capital because the other sources are not as accessible. 
Our prior work has shown that an inverse relationship exists between an airport’s size and its reliance on federal grants. Since 1946, the federal government has helped finance airport development with more than $23.5 billion in grants. Since 1970, airport grants have been financed by the Airport and Airway Trust Fund, which is funded by taxes on domestic and international airline travel, domestic cargo transported by air, and noncommercial aviation fuel. AIP, the current federal airport grant program, was established by the Airport and Airway Improvement Act of 1982, as amended, and is administered by FAA. AIP grants help finance projects that enhance airports’ capacity, safety, security, and noise mitigation. There are two categories of AIP grants—apportionment and discretionary. Apportionment grants are distributed by formula to commercial airports (with more than 10,000 annual passenger enplanements) and states. Discretionary grants can generally be used for any eligible airport development project. The Congress has earmarked, or “set aside,” some AIP discretionary funding for certain types of projects or airports. About 3,300 (18 percent) of the nation’s airports are eligible to receive AIP grants. All airports receiving AIP grants must provide a “matching share,” ranging from 10 to 25 percent of a project’s total cost, depending on the type of project and size of the airport. In fiscal year 1995, AIP grants to commercial airports totaled more than $1.2 billion, or about 80 percent of all grant obligations. The remaining 20 percent was directed to general aviation airports. A larger airport would generally receive more in airport grants than a smaller airport because larger airports enplane more passengers and have greater funding needs. To augment grants from the AIP, in 1990 the Congress authorized commercial airports to impose a PFC. This authorization enables airports to charge each passenger a $1, $2, or $3 facility charge per trip segment, up to a maximum of four segments per round trip (see the illustrative sketch at the end of this passage). After determining which projects to fund with PFCs, an airport must apply to FAA for approval. Large and medium hub airports that collect PFCs must forgo up to 50 percent of their AIP apportionment funding, most of which is used to provide additional funding for smaller airports. As of February 1996, just 4 years after the first PFC was approved, FAA had approved PFC collections at 244 airports. In 1995, PFC collections totaled about $1 billion. In 1996, the first bond secured solely by PFC collections was issued. Tax-exempt status enables airports to issue bonds at a lower interest rate than taxable bonds, and tax-exempt bonds are an important source of funding for airports. Bond market professionals and a recent FAA study estimate that if airports did not have tax-exempt status, the interest rate on their debt would be about 2 percentage points higher. Bonds are the largest single source of capital for large and medium hub airports. From 1985 through mid-1995, over $42 billion in new and refinanced airport bonds were issued in the United States. According to one credit rating agency, an estimated $25 billion in bonds is currently outstanding. Airport bonds, which are issued by airport sponsors, are one of two types. The most common for larger airports are revenue bonds, which are secured by airport revenue. Less common are general obligation bonds, which are secured by the taxing authority and the full faith and credit of the issuing public airport owner.
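To make the PFC structure concrete, the following sketch (in Python) computes the charge for a single round trip. It is illustrative only: the function name and itinerary are ours, and it assumes every airport on the itinerary imposes the same charge.

    def pfc_for_round_trip(charge_per_segment, segments_flown):
        """Passenger facility charge for one round trip.

        charge_per_segment must be one of the authorized levels ($1, $2, or $3);
        charges apply to at most four segments per round trip.
        """
        assert charge_per_segment in (1, 2, 3)
        billable_segments = min(segments_flown, 4)
        return charge_per_segment * billable_segments

    # A round trip with three segments each way (six total) through airports
    # charging the $3 maximum is billed on only four of the six segments.
    print(pfc_for_round_trip(3, 6))  # 12

Under these rules, $12 is the most a passenger could pay in PFCs on a single round trip, regardless of the number of connections.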
Airport revenue is unlike other sources of airport capital for two reasons. First, airport revenue is used to fund both current operating costs and capital investment. Second, future airport revenue is typically used to secure outstanding airport debt and, therefore, may not be fully available to secure new debt issues or directly fund capital projects. Most commercial airports have agreements that define their financial relationship with tenant airlines. These agreements, commonly termed “airport use agreements,” are often long-term, sometimes running 20 years or more, although there has been a trend toward shorter-term agreements. Typically, these agreements set airline rates and charges using either a “residual” or “compensatory” cost approach or a combination of both approaches (see the simplified comparison at the end of this passage). With the residual approach, the airlines collectively assume significant financial risk by agreeing to pay any costs of running the airport that are not allocated to other users or covered by nonairline revenue. Any surplus revenue is credited to the airlines and any deficit is charged to them in calculating their rates and charges for the following year. With the compensatory approach, the airport operator assumes the major financial risk of running the airport and sets rates and charges to recover the costs of the facilities and services that the airlines use. Under FAA’s rules regarding rates and charges to airlines, landing fees must be based on formulas that permit an airport to recover only the historic costs of its airfield assets (generally the cost to acquire land and develop the airfield), including debt-related expenses. Therefore, an airport may not revalue airfield assets in the absence of modifications or improvements to those assets. Also, the portion of assets acquired with AIP or PFC funds is not considered airport assets for the purpose of cost recovery through airline fees. Infrastructure privatization initiatives extend across local, state, and federal governments and include such diverse services as education, housing, utilities, and transportation. Numerous studies, task forces, and initiatives have focused on ways to attract private capital to help provide public goods and services. For example, the Congress included provisions within the Intermodal Surface Transportation Efficiency Act of 1991 that are intended to promote public-private partnerships to meet the nation’s surface transportation needs. In 1992, the President issued Executive Order 12803 outlining the principles executive agencies must use to determine whether to approve a local or state government’s request to privatize an asset that had been partly paid for with federal money. Under this order, local and state governments (where permitted by law) would be able to recover the unadjusted dollar amount of their portion of an asset’s total costs from sale or lease proceeds. From any remaining proceeds, the federal government would receive its share of grants associated with the asset, less the depreciated value of the asset. In 1994, the President issued a subsequent order on infrastructure investment, Executive Order 12893, which directs executive agencies to minimize regulatory and legal barriers to private participation in providing infrastructure facilities and services. Despite these executive orders and other federal initiatives, very few sales or leases of federally funded infrastructure assets have occurred.
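The residual and compensatory approaches described above differ mainly in who bears the financial risk, and a simplified single-year calculation can make that concrete. The sketch below is illustrative only: the figures and function names are ours, and real agreements allocate costs across many cost centers rather than in one aggregate.

    def residual_airline_charges(total_airport_costs, nonairline_revenue):
        # Residual approach: airlines collectively cover whatever costs are
        # not met by other users or nonairline revenue; a surplus is credited
        # back to them the following year, so the airport itself breaks even.
        return total_airport_costs - nonairline_revenue

    def compensatory_airline_charges(airline_facility_costs):
        # Compensatory approach: airlines pay only for the facilities and
        # services they use; the airport keeps any surplus from concessions
        # and other nonairline revenue, and bears any deficit.
        return airline_facility_costs

    # Hypothetical year: $80 million in total costs, of which $50 million is
    # attributable to airline facilities, and $45 million in nonairline revenue.
    print(f"Residual:     ${residual_airline_charges(80e6, 45e6)/1e6:.0f} million")   # $35 million
    print(f"Compensatory: ${compensatory_airline_charges(50e6)/1e6:.0f} million")     # $50 million

In this example the airlines pay less under the residual approach, but only because nonairline revenue is strong; in a weak year the residual approach would charge them more, which is precisely the risk transfer described above.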
In 1995, the first and only privatization under Executive Order 12803 occurred, with the long-term lease of a wastewater treatment plant in Hamilton, Ohio. According to a privatization expert, the federal government waived its share of the lease proceeds because it considered the plant to be fully depreciated. Legislation was introduced in the 104th Congress to expand the private ownership of public infrastructure. In 1995, bills were introduced in both the House and Senate (H.R. 1907 and S. 1063, “Federal Aid Facility Privatization Act of 1995”) that would waive the federal government’s claim to any proceeds from privatizing any locally owned or state-owned facility that had received federal aid. Although these bills were not enacted, the Congress did authorize an airport privatization pilot program as part of the Federal Aviation Reauthorization Act of 1996. Because of continuing widespread interest in airport privatization, the Chairman and Ranking Minority Member of the Subcommittee on Aviation, House Committee on Transportation and Infrastructure, requested that we undertake a study to examine the current extent of private sector participation at commercial airports in the United States and foreign countries; the current incentives and barriers to the sale or lease of airports; and the potential implications for major stakeholders, such as passengers, airlines, and local, state, and federal governments, should airports be sold or leased. To determine the current extent of private sector participation at U.S. and foreign airports, we reviewed airports’ financial statements, interviewed airport and government officials, reviewed external studies, and surveyed 69 of the nation’s largest airports. For U.S. airports, we measured the levels of public and private sector participation in their operations and capital financing. To measure private and public sector participation in airport operations, we surveyed 69 large and medium hub airports and requested the number of private and public full-time-equivalent positions there. We received responses from all 69 airports. To assess the levels of private and public financing, we analyzed several sets of data, including FAA’s information on federal grants and airport enplanements, Van Kampen American Capital Management’s information on 85 airports’ financial statements, and the Securities Data Company’s information on all airport bonds issued between 1985 and 1994. While we did not audit the accuracy of the databases, we did some limited cross-checking and found the information to be accurate. To obtain information on privatization in foreign countries, we relied on a study by the World Bank, a survey by Public Works Financing, and studies of international airport finance. We also spoke with officials of two foreign countries and four airport management companies concerning planned or completed privatizations and reviewed pertinent studies and documents relating to airport operations and financing. To assess the incentives and barriers to privatization, we spoke to a broad array of interested parties, including officials representing 13 airports, airport and airline interest groups, airlines, airport management firms, investment banks, credit rating agencies, the Department of Transportation, and FAA. Among the 13 airports we selected to visit were 9 that had at one time considered privatization. At these airports, we reviewed any feasibility studies and legal analyses they had conducted relating to privatization.
We surveyed representatives from 13 domestic airlines to obtain their positions on airport privatization and their reasons for supporting or opposing the concept. We also met with representatives of four of the largest airport management firms operating in the United States and with airport consultants to discuss impediments they have encountered in structuring privatization bids. Similarly, we met with representatives of three major credit rating agencies and several firms active in municipal finance to discuss economic benefits of and impediments to privatization. Finally, we met with lawyers active in airport law and FAA counsel to discuss legal impediments to privatization. We also researched all applicable federal statutes, FAA policies, legal opinions, and court cases to determine how various laws may affect the sale or lease of airports. We also assessed the possible implications and policy considerations of selling or leasing airports for airlines; passengers; and local, state, and federal governments. To assess privatization’s possible effects on public airport owners, we spoke to officials representing airports, airport management firms, airport consultants, investment banks, and FAA. We also reviewed studies of infrastructure privatization in other countries and in the United States. To gauge the possible effects of privatization on airlines and their passengers, we examined privatization studies, airport and airline industry financial trends, and studies of the effects of airlines’ prices on passenger traffic. We also spoke to representatives of 13 U.S. airlines. Finally, we assessed privatization’s potential effects on the federal budget through estimates of airports’ outstanding debt, tax-exempt versus taxable bond yield differentials, and grant funding. We also discussed the effect of grant repayment on the federal budget with a representative of the Congressional Budget Office. We provided the Department of Transportation and FAA with a copy of our draft report for their review and comment. Officials, including the Acting Manager of the Airports Financial Assistance Division and the Manager of the Program Guidance Branch, generally agreed with the facts presented and provided some minor clarifying comments and information, which we incorporated as appropriate. Officials also stated that the report was a thorough and balanced representation of the facts. Our work was performed from July 1995 through October 1996 in accordance with generally accepted government auditing standards. Even though all U.S. commercial airports are publicly owned, they operate in partnership with the private sector to deliver most services. Airports have also adopted commercial practices in response to regulatory and market demands to become less dependent on federal grants and more self-sustaining. As a result, the private sector provides most employees at the nation’s major airports. While federal grants have played a significant role in financing airport development, airport investment is also subject to some market discipline because investment supported by airport bonds must produce sufficient revenue to pay debt service costs. In other countries, private sector participation in airport operations and financing is also becoming more prevalent, including the sale or lease of airports in some countries. Several factors are causing airports to rely on the private sector for airport operations and financing and to adopt more business-like practices.
Airports are required by federal statute to operate as self-sufficiently as possible. While budget pressures on the federal government have reduced traditional sources of capital (grants), intense competition in the airline industry has resulted in greater pressure on airports to contain costs. Airport sponsors have also begun to adopt innovative industry practices to increase airports’ retail potential. One of the obligations an airport assumes as a condition for receiving federal grants is that its fee and rental structure will make the airport as self-sustaining as possible. This obligation generally requires that an airport charge fair market value for the use of airport facilities, excluding the airfield. In recent years, FAA and the Department of Transportation’s Inspector General have emphasized the need for airports to comply with this obligation. Following substantial growth in the 1980s, AIP funding has declined in recent years. Figure 2.1 depicts AIP funding trends, in inflation-adjusted and nominal dollars, for fiscal years 1982 (the first year of the AIP) through 1995. While airline profitability rebounded in 1995, the industry as a whole has suffered substantial losses over the last decade. Our prior work found that the U.S. airline industry had a profit margin half that of the average U.S. company. While intense competition brought on by airline deregulation in 1978 helped to lower passenger fares, it also made airlines less profitable and, accordingly, more cost-conscious. Although the money airlines pay in landing fees and terminal rentals is relatively little—on average, 6 percent of their total costs in 1995, according to airline data—these costs are not fixed. Therefore, airlines pressure airports to keep these costs low. The growth in passenger traffic helps airports expand nonairline revenue, such as retail concessions. Passenger traffic has nearly doubled, from 300 million enplanements in 1982 to over 580 million enplanements in 1995, and FAA has forecast that enplanements will increase 3.9 percent each year through 2007, as shown in figure 2.2. Airports obtain revenue from four general sources: landing fees and terminal rentals (both paid by airlines), concessions (such as parking), and other income (such as advertising). As figure 2.3 shows, nonairline revenue from concessions and other income now accounts for a majority of total revenue at large and medium hub airports. The Airmall terminal at Pittsburgh International Airport illustrates an innovative method of increasing an airport’s retail potential. In Pittsburgh, a private operator manages the retail facility, which includes over 100 retail outlets, for the public owner, Allegheny County. These retail outlets represent a wider diversity of products and services than U.S. airports generally provide. Between 1992 (when the Airmall opened) and 1995, per passenger retail spending at the airport increased 250 percent. U.S. commercial airports have collaborated with the private sector to control costs and improve services. While local governments, and in a few instances states, own almost all of the nation’s commercial airports, we found that most employees providing services at airports work for private companies, including airlines, concessionaires, and contractors. Some public owners have also contracted out the management of their airports to the private sector, although such arrangements have tended to involve smaller airports.
Most of the people working at the nation’s largest airports are employed by the private sector. As shown in figure 2.4, information we obtained from 69 of the nation’s largest airports (29 large hub and 40 medium hub airports) showed that 90 percent of the people who work at these airports are private employees and 10 percent are public employees. Of the nearly 686,000 private employees working at the 69 responding airports, about 437,000 (64 percent) were airline employees, such as pilots, flight attendants, ticket counter attendants, and baggage handlers. The approximately 249,000 (36 percent) nonairline employees were engaged in providing such services as cleaning, retail concessions, and ground transportation. According to airport executives we spoke with, there are several benefits to using contractors and concessionaires, including improved services, lower costs, and increased revenue. These officials noted that by using private companies to provide these services, airports can rely on the expertise and financial standing of those companies. Contracting can reduce airports’ costs through the competitive bid process, and concession agreements often allow airports to share in the revenue generated by private companies. Of the nearly 80,500 public employees working at the 69 responding airports, about 32,750 (41 percent) worked for local or state governments, about 38,000 (47 percent) worked for the federal government, and about 9,750 (12 percent) were other public employees, primarily military personnel. Employees of local and state governments were primarily administrative personnel (such as airport directors, financial officers, operations officers, public relations officers, and clerical support), police officers, and firefighters. Federal employees included public safety and security personnel, such as FAA air traffic controllers and agents from the Customs Service, Department of Agriculture, Drug Enforcement Administration, and Immigration and Naturalization Service. Other public employees at airports were primarily military personnel from such services as the U.S. Air Force and Air National Guard. Despite commercial airports’ reliance on the private sector for most services, few of these airports are privately managed. However, in response to increased pressure to reduce costs and the growing number of airport management firms competing for management contracts, the number of publicly owned airports that are privately managed has expanded. The Indianapolis Airport Authority’s contract with a private firm to manage its system of airports (1 commercial airport and 5 general aviation airports) is an example of this trend. We found 7 commercial airports (out of 565) that were privately managed under management contracts. Also, in addition to the Indianapolis Airport Authority’s five publicly owned general aviation airports, we found 10 such airports that were privately managed under a management contract and 3 such airports that were privately managed under a lease. (See app. I for information on publicly owned commercial and general aviation airports that are privately managed.) In 1994, the Indianapolis Airport Authority sought bids to manage its airport system, which included Indianapolis International Airport (the nation’s 47th largest airport) and five surrounding general aviation airports. The winning bidder received a 10-year contract.
Under the contract, the winning bidder has made a guarantee, secured by a letter of credit, to reduce airport costs and increase airport revenue. Airport profits will be split between the contractor and the airport authority, with the latter passing on its share of profits to tenant airlines in the form of reduced rates and charges. According to city and airport authority officials, the contractor was selected on the basis of its demonstrated ability to develop and increase retailing profits at airports. While first-year financial results are not yet available, estimates are mixed on whether the contractor will achieve the contract’s goals. In most cases, private managers are compensated on a fixed-fee basis, sometimes including a performance incentive payment. The Indianapolis contract is different in that the private manager has promised the public authority and the airlines a guaranteed level of cost savings. One other municipality is now exploring the viability of a similar agreement at its airport. The use of private investment funds, such as bonds, is subject to the scrutiny of credit rating agencies. While federal grants have played a significant role in developing airport infrastructure, airports’ net income and bond financing have also played a key role. For example, in 1994 more than half of the average large or medium hub airport’s total capital for development consisted of net income and bond proceeds (see fig. 1.2). Airport revenue bonds, which are backed by an airport’s current and future revenue, provide the greatest single share of total capital at the largest airports. To support continued infrastructure development, large airports have in recent years increasingly relied on debt financing through revenue bonds. For example, accumulated debt levels (in nominal dollars) doubled between 1988 and 1994, rising to an average of $889 million for each of the 22 large hub airports we examined. Despite taking on this additional debt, these airports’ financial performance did not deteriorate, as operating margins remained constant and credit ratings were not impaired. To issue a revenue bond, an airport must convince credit rating agencies that future airport revenue will be sufficient to cover future interest and principal payments as well as operating costs. Credit rating agencies evaluate the airport’s finances, operations, and management before rating a bond issue. The rating agencies also evaluate how the bond proceeds will be invested. An investment grade rating is generally necessary in the municipal bond market before a bond can be issued. In some cases, airlines and other tenants have privately financed the construction of their terminals, hangars, and other facilities at U.S. airports. For example, major terminals at Chicago O’Hare International Airport, Cincinnati/Northern Kentucky International Airport, and John F. Kennedy International Airport were privately financed. In 1996, the public sponsor completed negotiations with a private developer to finance, build, and operate a new $1.2 billion building for international arrivals at John F. Kennedy International Airport. While national governments of most foreign countries have historically owned and operated airports, in recent years some countries have begun to privatize all or parts of their nation’s aviation system as part of an overall economic restructuring. These countries have privatized many parts of their infrastructure, including airports, railroads, shipping, and trucking.
Generally, these countries’ privatization policies have been driven by a desire to raise capital, reduce the size of the public sector, and improve economic efficiency. Most of the efforts to privatize airports that we identified in 50 countries were in the preliminary stages. For example, Mexico passed legislation in 1995 to lease 58 major airports on a long-term basis. Australia is implementing privatization legislation to allow 22 major airports to be leased on a long-term basis. Most countries’ privatization efforts do not transfer ownership of airports to the private sector but involve long-term leases, management contracts, the sale of minority shares in individual airports, or the development of runways or terminals by the private sector. Only the United Kingdom has sold major airports to the private sector. Appendix II provides a list of countries and their efforts to privatize airports. Our findings on the increasing efforts to privatize airports are similar to those in a recent World Bank study, which determined that airports around the world have evolved into multifaceted commercial operations. This study also noted that while most airports are owned and operated by national governments, a trend toward more private sector involvement has been emerging. The study found a great variety of ownership structures, ranging from fully public to fully private with many variations in between. U.S. airports were in the middle of this ownership spectrum—with regional (local and state) governmental ownership but commercial operations. The United Kingdom, which sold its major commercial airports in 1987, is one of the few countries where airports have been privatized long enough to provide measurable results. To privatize, the United Kingdom sold the government corporation British Airports Authority (BAA) and the seven major airports it operated (including London’s Heathrow and Gatwick airports) in a $2.5 billion public share offering. Proceeds from this sale were used to reduce the national debt. Even after privatization, the airports have remained subject to government regulation of airlines’ access, airports’ charges to airlines, safety, security, and environmental protection. The government also maintains a right to veto new investments in or divestitures of airports. BAA has generated profits every year since it assumed ownership of the United Kingdom’s major airports in 1987. As a result of steadily increasing passenger traffic and growth in retail revenue, BAA generated $455 million in profits for its shareholders in 1995. This profit was attained despite government-imposed caps on charges to airlines and $782 million invested in infrastructure improvements, including a rail link to central London from Heathrow International Airport. BAA was valued at over $4.5 billion in 1995. However, the privatization of BAA has not been without its critics. Some private economists have noted that by selling BAA’s seven airports together, instead of separately, the United Kingdom did not allow for greater competition among the airports. These critics charge that, as a result, the government converted a public asset into a regulated private monopoly that requires regular review of and negotiation over the airports’ charges to airlines. In recent years, the sale or lease of U.S. airports has generated considerable interest. Supporters of privatization believe that many major U.S. commercial airports can operate on a sound economic basis without government assistance.
Airports’ funding needs, the desire to improve their efficiency, and the potential financial benefits to all levels of government are also generating interest in privatization. However, considerable legal barriers currently block the sale or lease of U.S. airports. In addition, even if the legal barriers were removed, significant economic barriers could impede privatization. Privatization advocates point to three major reasons why the sale or lease of airports should be encouraged. First, they note that private entities would provide additional private capital to help finance airport development. Second, advocates maintain that private operators would more efficiently develop and manage airports and, in the process, reduce airlines’ and passengers’ costs. Third, if federal requirements on the use of airport revenue are changed, the sale or lease of airports by local or state governments would generate a quick infusion of cash for them, while reducing the need for local, state, and federal grants and eliminating tax subsidies. Although there has been considerable investment in the nation’s airports, FAA studies indicate that substantial future investment in airport infrastructure will be needed. As of March 1996, FAA estimated that U.S. domestic and international passenger enplanements would grow 3.9 percent annually through 2007. Also, according to FAA’s analysis, the number of severely congested airports would increase from 7 in 1995 to 17 in 2002 if capacity is not increased. Congestion results in increased costs and delays for airlines. Airport officials contend that they will need about $60 billion from 1997 through 2002, or $10 billion per year, most of which will be needed for projects to increase airport capacity. FAA estimates that airports’ AIP-eligible capital needs will be about $6.5 billion per year over the next 5 years. Whether existing sources of capital will be adequate to meet future development needs is uncertain. Since 1992, AIP funding has declined, falling to $1.46 billion in fiscal year 1997. PFCs contribute about $1 billion annually for airport capital development. Whether debt financing and internally generated revenue will be sufficient to make up the difference in funding needs is uncertain. Privatization advocates believe that the private sector would provide additional capital to meet these needs. For example, private entities could tap equity markets (such as by selling stock), which are not open to public entities. A 1995 FAA study indicates that the largest airports generally have been able to obtain sufficient debt financing to meet their capital needs. A prior GAO report also showed that while the debt levels of large hub airports doubled between 1988 and 1994, revenue was available to pay the increased principal and interest amounts. However, the same report noted that airports cannot accumulate unlimited debt to fund capital projects and that the ability to finance large amounts of debt may vary substantially among airports. Advocates claim that private firms would operate airports more efficiently and profitably than the public sector. Some studies support the position that the private sector is more efficient than the public sector. Advocates also point to the contract to manage the Indianapolis airport system, under which a private firm has promised to reduce operating costs and increase revenue by about $140 million over 10 years, even though some aviation industry officials considered it among the more efficient public airports in the country.
The Reason Foundation, a privatization advocate, also points to labor productivity growth at airports in the United Kingdom following their privatization as evidence of private airports’ ability to operate more efficiently. Private airport owners or lessees can generate profits and a return on their investment in two ways—by increasing efficiency and by charging users higher prices. However, whether private firms would operate airports more efficiently than public owners (and pass on some cost savings to users) is uncertain and would likely vary among airports. According to airport management firms, some airports are not good privatization candidates because opportunities to increase revenue or cut costs are limited. In addition, several economists have asserted that competition is a more important factor than the type of ownership in encouraging greater efficiency. According to analysts who rate airport bonds, airports in some cities may face little competition and could charge prices above the levels that would prevail in a competitive market. Advocates contend that airport privatization would benefit the budgets of all levels of government for several reasons. First, if current restrictions on the use of airport revenue are changed, privatization would immediately generate sale or lease proceeds that could be used for other than airport purposes. The amount of these proceeds would depend on how privatization might be implemented, but one privatization advocate calculated that the 87 largest airports have a total market value of $29 billion. In addition, local, state, and federal governments would receive a lasting benefit from reduced airport demands for financial assistance. Advocates also point out that private airports would be paying taxes. As of October 1996, only one of the ten attempts by public owners to sell or lease U.S. commercial airports to a private entity had been successfully implemented (see table 3.1). Very few of the privatizations under consideration were formally proposed to FAA for approval, and some were rejected as infeasible because of legal impediments. In at least three cases, public owners considered selling or leasing their airports to divert the proceeds from the airports for other uses. For example, in 1995, Orange County, California, considered whether it could sell John Wayne Airport to obtain revenue for its general fund after the county had filed for bankruptcy in December 1994. The county abandoned this effort, in part, after concluding that it could not legally divert sale proceeds. Atlantic City is the only public owner that was able to lease its airport to a private company and collect annual payments to use for nonairport purposes, although it had received federal grants. In 1986, the city leased the main airport’s terminal and a general aviation field to a private firm for a minimum yearly payment of $400,000, which was diverted to the city’s general fund and not used for airport purposes. We could not determine, nor could FAA explain, why this lease was approved, given that the agency has subsequently opposed similar proposals. In 1992, Atlantic City sold the terminal to a newly created public transportation authority for $11.5 million and annual payments of $500,000, which have been placed in the city’s general fund. This latter transaction was specifically authorized under the Department of Transportation’s 1992 Appropriations Act.
Under federal grant agreements, FAA approval is required before a commercial airport can be sold or leased, regardless of whether the transfer is to a public or private entity. In opposing proposals to sell or lease airports to private entities, FAA has cited its concern that a private owner or lessee would not be able to satisfy the legal obligations that the public airport sponsor had made as a condition of obtaining a federal grant. Grant agreements currently contain 35 assurances (obligations), including those on the uses of airport revenue, environmental compliance, and public use and access. While many of the assurances would not likely be an obstacle to privatization, some could be, especially those concerning the use of airport revenue and reimbursement of federal assets. According to FAA, these legal obligations cannot be unilaterally extinguished by repaying past grants to the federal government. However, according to FAA’s recently proposed policy, the agency will be open and flexible on the conditions for the use of airport revenue if it determines that privatization would not harm the public interest or undermine aviation policy. The Airport and Airway Improvement Act of 1982, as amended, which established the AIP, requires sponsors to use all of an airport’s revenue for its capital and operating costs and not divert revenue for nonairport purposes. The intent of this provision was to ensure that airports receiving federal grants also used the revenue generated at the airport to pay for its costs. In 1987, the restrictions on revenue diversion were tightened to limit airport expenditures to activities that were not only “directly” but also “substantially” related to air transportation. In late 1993 and early 1994, the House Committee on Appropriations and the Department of Transportation’s Inspector General issued reports concerning airport revenue diversion and recommended greater oversight by FAA. In 1994, the Congress added airport financial reporting requirements and penalties for violating requirements concerning the use of airport revenue. In 1996, the Congress added a penalty under which an airport is subject to a fine of three times the amount of revenue that it illegally diverts. To what extent the public owner of an airport can retain sale or lease proceeds is a crucial issue in the privatization debate. FAA contends that any sale or lease proceeds constitute airport revenue and, therefore, must be used for airport purposes. If a public owner of an airport cannot retain privatization proceeds for nonairport purposes, the financial incentives to privatize are diminished. A 1991 Department of Justice opinion stated that public owners of airports are entitled to recover unreimbursed capital and operating expenses from the proceeds of an airport’s sale or lease. The opinion also stated that no time limits exist on the right to receive compensation for these expenses. However, under the Federal Aviation Reauthorization Act of 1996, any request to recoup capital and operating costs must be made no later than 6 years after the expense was incurred. Another legal issue concerns whether federal grants must be repaid and donations of surplus federal property must be returned if an airport is sold or leased to a private entity. Since 1946, the federal government has awarded over $23.5 billion in airport grants and donated an unknown value of surplus federal property to assist in the development of airports.
According to privatization proponents, federal grant and surplus property requirements would pose significant barriers to privatization if FAA requires that grants be repaid and the Secretary of Transportation does not waive surplus property restrictions. FAA has not officially determined whether federal grants must be repaid. According to FAA officials, the statutory restrictions on the use of airport revenue appear to take precedence over Executive Order 12803, which requires FAA to seek grant repayment from sale or lease proceeds. Furthermore, there is no reason for FAA to seek reimbursement of federal grants if, as the agency has interpreted, revenue diversion restrictions allow sale or lease proceeds (exclusive of proceeds used to reimburse the public owner’s capital and operating costs) to be used only for airport purposes. For any airport property that is deeded as surplus federal property, the Secretary of Transportation must approve its sale or lease even if it will continue to be used as originally intended. Specifically, the Secretary must determine that, in selling or leasing an airport to a private entity, the airport will continue to be used as originally intended. Upon making this determination, the Secretary can then allow the airport to be transferred to a private entity. According to privatization advocates, grant repayment and surplus federal property requirements impede airport privatization. Specifically, advocates are concerned that FAA would seek reimbursement of federal grants because the agency has never had to consider whether to apply Executive Order 12803 to an actual public-to-private transfer of an airport and has no policy on whether the order would apply. Under bills introduced during the 104th Congress (H.R. 1907 and S. 1063), the Secretary of Transportation could not require local and state governments to repay federal grants if a legal agreement or regulation required that the privatized asset continue to serve its originally intended purpose. However, these bills were not enacted. Also, according to privatization advocates, surplus property requirements are barriers to privatization because a costly legal effort would be needed to determine whether the Secretary would allow the airport to be transferred and would also waive certain terms of the original transfer to the public entity, especially the terms allowing the federal government to possess the surplus property during a national emergency or take back the property if any requirements are not met. Conformance with noise, environmental, and land-use assurances does not present significant barriers to the sale or lease of an airport. Specifically, these assurances apply equally to privately and publicly owned airports, and meeting them would generally require the same actions. Federal regulations established a system for measuring aircraft noise in communities next to or near airports and for providing information about how land should be used depending on the noise level. Airport operators must also meet applicable environmental requirements, such as air and water quality standards. In considering whether to buy or lease an airport, a private entity can determine what the potential costs of meeting noise and environmental requirements are and how these costs will be met.
The land-use assurance requires airport operators to take appropriate action, including the adoption of local zoning laws (to the extent reasonable), to restrict the use of land next to or in the immediate vicinity of the airport to activities and purposes compatible with normal airport operations, including the landing and takeoff of aircraft. Private entities do not have zoning authority. Therefore, to satisfy this assurance, private owners would either need to control the land within the immediate vicinity of their airports or have the cooperation of local governments. In some cases, local governments that own airports also do not control land next to or in the immediate vicinity of their airport and must have the cooperation of other local governments to meet the land-use assurance. The exposure of a private owner or lessee to noise and environmental liability arising from lawsuits presents an additional business risk. For example, public owners have been found liable for damages from noise caused by airport operations. Therefore, a private airport owner or lessee could likewise be liable for damages from noise. Determining liability for airport noise and environmental damages is, for the most part, a local issue. Although airports must conform to federal safety and security requirements, regardless of their ownership and whether they receive federal grants, these requirements do not pose significant barriers to privatization. Under FAA’s safety requirements, airports must be certified by FAA to service various categories of commercial aircraft. Similarly, airports must meet FAA’s security requirements. Because of sovereign immunity, a public owner may have greater protection from lawsuits claiming that the airport failed to adhere to safety or security requirements. A private owner would not have this immunity and would need to obtain private insurance or self-insure against liability unless specifically indemnified as part of any transfer. As a result, a private airport’s costs could increase to cover this insurance cost. In the event a public agency does not abide by its grant obligations, the Secretary of Transportation can pursue several courses of action, depending on the nature of the offense. For example, airports that have illegally diverted revenue can be required to make repayment. Also, under some circumstances, the Secretary can impose a civil penalty for failure to take corrective action. At the most extreme, the Secretary could withhold any future transportation grants, including airport apportionment grants and highway funding, in accordance with the 1994 and 1995 Department of Transportation Appropriations Acts. For a private airport owner, the Secretary’s ability to enforce compliance with outstanding grant assurances is more limited. A commercial airport that was sold to a private entity would not be eligible for apportionment grants or other transportation grants that a local or state government can receive. Therefore, the federal government’s ability to encourage compliance by withholding grants from privately owned airports is reduced. FAA’s proposed policy on the use of airport revenue, including the use of sale or lease proceeds, is ambiguous because it provides conflicting advice to airport owners interested in privatizing. On February 26, 1996, FAA issued its proposed policy for public comment. Under the proposal, FAA continues to consider sale or lease proceeds as subject to restrictions on diverting airport revenue.
However, the proposal also states that FAA does not intend to discourage privatization and will consider privatization proposals on a case-by-case basis. The proposal further states that FAA will remain open and flexible in specifying conditions on the use of airport revenue that will protect the public interest and fulfill revenue diversion restrictions without interfering with privatization. However, FAA has not specified these conditions. As a result, the policy effectively discourages privatization as long as FAA considers sale or lease proceeds to be airport revenue subject to diversion restrictions. Covenants in bonds could restrict the transfer of a public airport to private control in certain instances. To protect bondholders, bonds generally contain covenants that require the bonds to be retired if assets are sold or transferred. According to public finance officials, altering these covenants would generally require a vote of bondholders. Recalling existing bonds and issuing new bonds would mean incurring prevailing interest rates, which could be higher. In addition to the various legal constraints, a privatized airport’s ability to operate profitably under current regulations and conditions is uncertain. Privatized airports would lose eligibility for some major sources of capital. Also, a private airport could encounter opposition from airlines and restrictions on its ability to generate an adequate return on investment. Finally, a privatized airport could go bankrupt. Under current regulations, private airports would lose access to some AIP funding as well as PFCs and tax-exempt status for bonds. First, privately owned airports cannot receive AIP apportionment grants, although they would continue to be eligible for AIP discretionary grants. Depending on how a lease is structured, a privately leased airport could receive apportionment grants. Specifically, the public owner could remain the airport sponsor for the purpose of receiving grants. In fiscal year 1995, apportionment funding for commercial airports was one-third of the total $1.45 billion in AIP funds. Second, privately owned airports could not collect PFCs, but they could impose other types of fees. As with apportionment grants, depending on how the lease is structured, a privately leased airport could collect PFCs. Between June 1992 and January 1996, 244 airports were approved to collect an estimated total of $12.5 billion in PFCs through the year 2024. In 1995, airports collected almost $1 billion in PFCs. To replace lost PFCs, a privately owned airport could collect other types of passenger usage fees that are not subject to PFC limits. Finally, according to public finance officials, for future bond issues at privately owned airports, the loss of tax-exempt status would add about 2 percentage points to the average airport’s debt costs. For example, without tax-exempt status, a $100 million bond issue would cost a privately owned airport at least $2 million more in interest each year, although these interest costs would be tax deductible (see the illustrative calculation at the end of this passage). Concerning the status of outstanding bonds at a privatized airport, in 1993 the Internal Revenue Service issued Revenue Procedure 93-17, which sets forth the conditions under which an outstanding bond’s tax-exempt status can be protected when the use of that bond’s proceeds changes.
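The effect of losing tax-exempt status can be reproduced with a few lines of arithmetic. The Python sketch below uses the $100 million issue and the roughly 2-percentage-point differential cited above; the 35 percent corporate tax rate is an illustrative assumption of ours, not a figure from this report.

    # Minimal sketch of the taxable vs. tax-exempt debt cost differential.
    principal = 100_000_000          # $100 million bond issue, as in the example above
    rate_differential = 0.02         # about 2 percentage points, per bond market estimates

    extra_interest = principal * rate_differential
    print(f"Added interest per year: ${extra_interest:,.0f}")        # $2,000,000

    # Because interest is deductible for a private owner, the after-tax burden
    # is smaller; the 35 percent rate here is an assumption for illustration.
    corporate_tax_rate = 0.35
    after_tax_cost = extra_interest * (1 - corporate_tax_rate)
    print(f"After-tax added cost per year: ${after_tax_cost:,.0f}")  # $1,300,000

Even after the deduction, a substantial added carrying cost remains.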
Under this revenue procedure, whose protection is referred to as a “safe harbor,” the issuer must take one of several specified remedial actions that are available only if certain conditions are met. To the extent that the requirements and conditions of the revenue procedure are met, safe harbor protection for outstanding tax-exempt bonds might be available if an airport is sold or leased to a private entity. A private airport owner or lessee also could face opposition from airlines and could encounter constraints on its revenue that would make it more difficult to earn a return on investment. First, the airline officials that we talked to were almost universally opposed to privatization, especially if it meant higher charges to the airlines. In our discussions with officials from 13 domestic carriers, a majority opposed privatization because of concerns that it would lead to revenue diversion and an increase in airport landing fees and terminal rentals. Airlines approved of the contract for the private management of Indianapolis’ airport system because they hoped it would lead to lower costs, improved efficiency, and assurances that no revenue would be diverted. Second, FAA’s policy on rates and charges prohibits airports from increasing their charges to airlines to reflect the costs of appreciated or revalued airfield assets. On June 21, 1996, FAA published its new policy on rates and charges, which dictates how airports may charge airlines for aeronautical uses of the airport. Because revenue from fees for using an airfield, generally landing fees, may not exceed actual historical costs, a private airport would not be able to charge landing fees based on revalued airfield assets that reflect its acquisition costs. However, this new policy would allow a private owner or lessee to earn a reasonable rate of return on airfield investments, although the policy does not define what constitutes a reasonable return. In addition, it permits airports to earn a return, without constraints, on other assets. Third, a private owner or lessee may need to renegotiate the airport’s agreements with its tenant airlines to retain profits. Often these agreements, which govern how airports charge airlines for using terminals and airfields, restrict how much and in which ways airports can make a profit. Private owners or lessees of airports would be particularly keen to renegotiate residual agreements because such agreements would not allow the airport to retain any profits. However, air carriers would likely be hesitant to renegotiate their airport agreements if they believed their costs would increase. Privatized airports could go bankrupt. The outcome of a bankruptcy proceeding would depend on several factors, including whether the insolvent party is the airport’s owner, lessee, or a management contractor, and what type of bankruptcy protection, such as protection to reorganize debts, is sought. It is unclear to what extent an airport’s activities might be disrupted by bankruptcy proceedings. If a private airport owner faces bankruptcy proceedings, the local community or state may have to purchase the airport to ensure that it continues to be used as an airport. Executive Order 12803 states that any sale or transfer must contain a mechanism to ensure that the airport continues to operate even if the private owner becomes insolvent. However, the effect of any such mechanism has never been tested in bankruptcy proceedings.
As part of a bankruptcy liquidation or reorganization, the airport’s assets could be sold to satisfy creditors, without regard to whether those assets would continue to be used for airport purposes. Also, it is uncertain which assets the courts would decide belong to the private airport owner, the airlines, or the local, state, or federal government. For example, air traffic control facilities and equipment might be considered assets of the airport owner for bankruptcy purposes even though they had been funded by FAA. Certain Bankruptcy Code provisions may, in effect, hinder or prevent a local or state government from canceling a lease or management contract to protect other creditors, even if the lease or contract contains a default clause. Furthermore, the local or state government’s ability to substitute a new operator may be restricted even if the bankrupt operator’s performance deteriorates. Moreover, certain Bankruptcy Code provisions authorize the trustee, subject to court approval, to reject certain agreements, which could include a lease or management contract. How the sale or lease of airports would affect local and state governments, airlines, passengers, and the federal government depends on several factors, including how privatization is implemented, how privatized airports might be regulated, and the unique characteristics of each airport, such as its size and future revenue potential. If federal restrictions on the use of airport revenue were changed so that local and state governments could retain the proceeds from privatizing airports, those governments would be more likely to sell or lease them. If airports’ costs for capital increase as a result of privatization, the effects on airlines and passengers would depend on whether these increases are passed on to them. The effects of privatization on the federal government will depend on whether the grants and subsidies that are currently extended to public airports are similarly offered to private airports. The Congress recently established a pilot program for airport privatization. Under this program, the public owners of up to five airports could be exempted by the Secretary of Transportation from revenue diversion, grant repayment, and surplus property requirements in leasing commercial airports or selling or leasing general aviation airports. Local and state governments could potentially benefit from privatization in at least two ways. First, leasing or selling an airport to a private concern would result in a financial windfall for the public owner if federal restrictions on the use of airport revenue were changed. Second, public owners would accrue a long-term benefit by adding airports to their tax bases. Some public owners have actively sought to privatize their airports specifically to benefit financially from the proceeds of selling or leasing them. For example, the Los Angeles and Orange County privatization studies were undertaken, in part, to examine whether the proceeds from the sale or lease of an airport could be legally diverted. However, an official of one airport that had sought to privatize told us that if the owner could legally divert the airport’s revenue without selling or leasing it, it would be less interested in privatizing. Estimating how much local or state governments would gain by selling or leasing airports is difficult because the amount largely depends on whether current revenue diversion and grant repayment requirements are changed.
Although airports have reported billions of dollars in assets, their market value may be substantially more or less to a prospective buyer. An airport’s market value principally depends on the present value of its future earnings, which in turn depends on market forces and the manner in which it is privatized, especially what constraints are imposed and what subsidies are granted by the various levels of government. While local and state governments could benefit financially from privatization, there is the risk that a private airport operator could go bankrupt. If a private airport owner faces bankruptcy proceedings, the local or state government might have to purchase the airport to ensure that it continues to be used as an airport. Also, bankruptcy proceedings might, in effect, hinder or prevent a public owner from canceling its lease with a private operator. The effects of the sale or lease of airports on airlines largely depend on whether airlines’ airport costs would increase. Currently, airports subject to FAA’s policy on rates and charges are required to charge landing fees based on historical costs, thus prohibiting them from charging market-based rates. No such policy applies to airports’ other sources of revenue, such as concessions and parking fees. Indeed, the self-sufficiency assurance required to obtain a federal grant generally requires an airport to impose market rates on those other sources. If FAA’s current policy on rates and charges were not applied to privatized airports, those airports could raise their landing fees, because many airports, especially those with large origination and destination traffic, face strong local demand for air services. Some economists contend that pricing based on historical costs is inefficient because assets would usually be underpriced and rationing must eventually take place. A few countries are experimenting with various market pricing systems as part of their privatization initiatives. However, it is likely that the federal government would regulate the landing fees privatized airports charge airlines because of concerns that monopoly pricing would result in fees above the levels that would prevail in a competitive market. Other countries that have privatized airports generally impose some form of price regulation on landing fees. For example, the United Kingdom has capped these fees at historical rates plus an adjustment to account for inflation and increases in productivity (a simplified version of this cap appears at the end of this passage). The United Kingdom has also allowed a form of market-based pricing by permitting airports to charge airlines higher landing fees during peak traffic times. Even if FAA’s policy on airport rates and charges remains the same and airport landing fees are tied to historical costs, airlines could still face higher costs at a privatized airport. Under current law, a privately owned airport would no longer receive federal apportionment grants or be eligible for tax-exempt financing, which could increase the owner’s costs to obtain capital. Accordingly, even if subject to FAA’s current policy, a privately owned airport could pass its higher costs—for example, greater interest expenses—on to airlines in the form of higher landing fees and terminal rentals. Such costs, according to data from airlines, were on average about 6 percent of an airline’s total costs in 1995.
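The United Kingdom’s cap described above follows the general pattern of “RPI minus X” price regulation: charges may rise with retail price inflation, less an efficiency factor that passes expected productivity gains through to airlines. The sketch below is a simplified rendering of that pattern, not the actual U.K. formula; the function name and the inflation and productivity values are illustrative assumptions.

    def capped_landing_fee(current_fee, inflation_rate, productivity_factor):
        """One year under an RPI-minus-X style cap on landing fees.

        The fee ceiling grows with inflation (RPI) less an efficiency
        factor X, passing expected productivity gains to airlines.
        """
        return current_fee * (1 + inflation_rate - productivity_factor)

    # Illustrative values: a $10.00 fee, 3 percent inflation, and a
    # 1 percent required efficiency gain yield a $10.20 ceiling next year.
    print(round(capped_landing_fee(10.00, 0.03, 0.01), 2))  # 10.2

When the efficiency factor exceeds inflation, the cap forces fees down even in nominal terms, which is how this form of regulation substitutes for the competitive pressure a monopoly airport does not face.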
Economic studies indicate that even relatively small increases in airlines' airport-related costs could have a profound effect on their profitability. Prior to 1995, the airline industry had encountered significant losses and several carriers had gone bankrupt. Substantial increases in airline costs could result in lower profitability and reduced competition.

The effects of the sale or lease of airports on airline passengers depend on the extent to which increases in airlines' costs would be passed on through higher ticket prices or changes in the number of flights. Although small increases in airlines' costs may have a substantial effect on airlines' profitability, airlines may be reluctant to offset these increases by raising ticket prices if they believe that higher prices would reduce passenger traffic. Economic studies have shown that passenger traffic is sensitive to changes in ticket prices and that a 1-percent increase in prices may lead to more than a 1-percent decline in passengers. Also, with higher costs, airlines might cut back or eliminate flights at some airports. Airline ticket prices could also increase if airport privatization reduced airline competition. If privatization led to higher costs because of a change in FAA's rates and charges policy or reduced subsidies for airports, this increase could also serve to reduce airline competition and increase fares. We previously found that reduced competition between airlines in serving various airports had resulted in higher fares.

The effect of the sale or lease of airports on the federal government's budget would generally be positive, provided federal laws and FAA's policies remain unchanged. Currently, privately owned airports are not eligible for federal financial assistance in the form of tax-exempt bonds and Airport Improvement Program (AIP) apportionment grants. In addition, public airports do not pay corporate income taxes. The actual effect on the federal budget, however, would depend on the eventual form and extent of privatization. A privately owned airport's loss of tax-exempt status would result in additional tax receipts for the federal government. While over $42 billion in airport bonds was issued between 1985 and 1994, we could not identify exactly how much tax-exempt debt is currently outstanding because some of these bonds had been used to refinance existing debt. One credit rating agency estimated that roughly $25 billion in tax-exempt airport bonds is currently outstanding. If all these bonds were taxable and interest costs averaged 8 percent, then an additional $2 billion in annual interest income would be taxed. At a 28-percent tax rate, the tax exemption for interest on airport bonds would cost the federal government $560 million annually in forgone tax receipts. However, the federal government may not be forgoing this entire amount because airports would likely have issued less debt if it were taxable. Also, the amount of additional tax revenue resulting from airport privatization would depend on several factors, including how many airports are sold, the amount of airport bonds issued in the future, and whether existing bonds would continue to be exempt from taxation.

Privately owned airports would not be eligible to receive AIP apportionment grants. In fiscal year 1995, large hub airports received $168 million in AIP apportionment funding, while medium hub airports received $89 million.
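The forgone-tax arithmetic above can be restated directly. The following sketch simply reproduces the report's figures ($25 billion outstanding, 8 percent interest, 28 percent tax rate); treating all bonds as taxable at once is the report's own simplifying assumption.

```python
# A minimal sketch of the forgone-tax-receipts arithmetic described above.
# The dollar figures and rates come from the report; treating all bonds as
# taxable at once is the report's own simplifying assumption.

outstanding_bonds = 25e9      # estimated tax-exempt airport bonds outstanding
interest_rate = 0.08          # assumed average interest cost
marginal_tax_rate = 0.28      # assumed bondholder tax rate

annual_interest = outstanding_bonds * interest_rate          # $2.0 billion
forgone_receipts = annual_interest * marginal_tax_rate       # $560 million

print(f"Annual taxable interest: ${annual_interest / 1e9:.1f} billion")
print(f"Forgone federal receipts: ${forgone_receipts / 1e6:.0f} million")
```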
According to airport management firms and a privatization consultant, large and medium hub airports are generally the most attractive candidates for privatization. Therefore, if a significant number of them were sold to private entities, the Congress would have the option of reducing the total AIP funding level by the amount of apportionment funding these airports had received or redirecting these funds to other airport development needs.

The Congress, as part of the Federal Aviation Reauthorization Act of 1996, created an airport privatization pilot program that became effective on October 9, 1996. This legislation acknowledges the current obstacles to privatization and recognizes that the pilot program provides an opportunity to test the potential of privatization to increase funding for airports, improve airport management, improve customer service, and lower the costs of operating at airports. Up to five airports can participate in the pilot program. At least one airport must be a general aviation airport; the other four can be commercial airports, although only one of the commercial airports can be a large hub airport. Any general aviation airport in the program may be sold or leased, while the commercial airports can only be leased. A privately leased commercial airport could collect passenger facility charges (PFC) and receive AIP apportionment grants. A privately owned or leased airport would still be eligible to receive AIP discretionary grants, but the maximum federal share of a project's total cost would be 40 percent rather than the normal 75 to 90 percent.

Under the program, the Secretary of Transportation may exempt the public sponsor and the private owner or lessee from revenue diversion restrictions and grant repayment or surplus property requirements. Specifically, if 65 percent of the airlines serving the airport approve, the airport owner can retain the sale or lease proceeds and would not have to repay federal grants. Also, the Secretary could waive any requirements for the public owner or lessee to return surplus federal property. However, before granting these exemptions, the Secretary must find that approval would not result in unfair or deceptive practices or unfair competition. Also, the Secretary must determine that the sale or lease agreement would meet several conditions, including the following: the airport would remain available for public use; airport operations would not be interrupted if the operator went bankrupt; the private owner or lessee would maintain and improve the facilities; airline fees would not increase faster than the rate of inflation, unless a higher amount is approved by 65 percent of the airlines that serve the airport; general aviation fees would not increase faster than airline fees; safety and security would be maintained at the highest levels; and noise and environmental effects would be mitigated to the same extent as at a publicly owned airport. An airport would remain eligible for the pilot program and any associated exemptions from revenue diversion, grant repayment, or surplus property requirements as long as its facilities continue to be used for airport purposes. The Secretary may, however, revoke an exemption upon determining that the owner or lessee knowingly violated any of the conditions set forth in the statute governing the pilot program.
According to FAA and aviation industry officials, it is too early to know which airports might be interested in applying for this pilot program or whether any airports could qualify for it and gain the support of their tenant airlines. However, the public owners of two airports—Allegheny County Airport, a general aviation airport in Pennsylvania, and Stewart International Airport, a former military air base in New York—have expressed interest in the program's innovative arrangements. The Department of Transportation and FAA are charged with reporting to the Congress on the pilot program's implementation within 2 years after the first application is approved and are authorized under the program to audit a private owner's or lessee's financial records and operations in order to monitor compliance with the program's requirements.

[Table: examples of contracts with private firms to operate U.S. airports; the table's columns (contractor, compensation, and contract term) were garbled in extraction. Recoverable entries include Airport Group International (AGI); Johnson Controls World Services, Inc. (JCWS); Addison Airport of Texas, Inc.; Alliance Air Services, Inc.; and Windham Aviation, Inc., with terms such as direct and indirect costs plus an inflation-adjusted management fee of $331,680; all revenue exceeding contractor costs, including a $3 million payment to the county; and a 20-year contract expiring in 2011.]

[Table: plans or actions for airport privatization, by country; the country column did not survive extraction. The recoverable entries follow, in the source's order.]
- Contracted with a private entity to modernize and expand Tirana Airport
- Plans to contract with a private entity to complete construction of and operate the new international terminal at Houari Boumedienne Airport near Algiers
- Considering long-term management contracts with private entities to operate 59 airports; the national legislature (Senate) passed a bill allowing for these management contracts
- Implementing 50-year leases with private entities to operate 22 major airports
- Sold shares in Vienna International Airport; 47 percent of total shares are privately held
- Transferred ownership of Freeport International Airport to a private entity
- Plans a long-term agreement with a private entity to operate three major airports
- Plans a contract with a private entity to rehabilitate the terminal at Guararapes International Airport in Recife
- Plans (with the municipality of Sofia) a 30-year build, operate, and transfer (BOT) contract with a private entity to modernize Sofia International Airport
- Plans a 20-year BOT contract with a private entity for projects at Pochentong Airport in Phnom Penh; plans a 15-year BOT contract with a private entity for projects at Sihanoukville Airport on Naga Island
- Plans a long-term lease with a private entity to build and operate a terminal at the airport in Yaoundé
- Implemented a long-term lease with a private entity to build and operate Terminal 3 at Pearson International Airport in Toronto; a regional government implemented a 40-year contract with a private entity to operate and manage Hamilton-Wentworth Airport in Ontario
- Implemented a contract with a private entity to operate the passenger terminal and plans a 15-year BOT contract with a private entity for a second terminal at Arturo Merino Benitez International Airport in Santiago
- Implementing a joint agreement with a private entity to build and operate a new airport in Haikou; plans to contract with private entities to develop and operate 8 airports, including Beijing International Airport
- Awarded a contract to a private entity to build a runway at, and plans a contract with a private entity to operate, the Eldorado International Airport in Bogotá; awarded long-term leases to private entities to operate two airports in Cartagena and Barranquilla; plans long-term leases with private entities to operate two airports in Medellín and one airport in Cali
- Plans a BOT contract with a private entity for a new airport in San José
- Sold shares in Copenhagen International Airport
- Transferred ownership of Punta Cana International Airport to a private entity
- Plans to contract with private entities to operate two airports in Quito and Guayaquil and plans BOT contracts with the same private entities for two new airports in these cities
- Plans a BOT contract with a private entity for a new airport near Cairo
- Considering contracts with private entities to develop and lease airports, including a major airport in Berlin
- Implementing a 30-year BOT contract with a private entity for a new airport near Athens
- Implementing a joint development agreement with a private entity for the new Chek Lap Kok Airport on Lantau Island
- Implementing a joint development agreement with a private entity for a new international terminal at Ferihegy Airport in Budapest
- Considering contracting with a private entity to construct and operate a new airport in Bangalore
- Plans a joint development agreement with a private entity for a new airport in Medan
- Plans to contract with a private entity to manage the airport in Naples; national government-owned airlines are divesting their shares in Rome and Milan Airports
- Plans a long-term contract with a private entity to operate Sangster International Airport in Montego Bay and Norman Manley International Airport in Kingston
- Plans (with Chubu regional governments) a contract with a private entity to develop one runway and terminals for the new Chubu International Airport; implemented (with Osaka regional governments) a contract with a private entity to build the new Kansai International Airport
- Implemented a joint development agreement with a private entity to develop and manage a new international airport
- Implemented a BOT contract with a private entity for a new terminal and a lease-develop-operate contract with a private entity for nonaeronautical portions of a new international airport in Sepang
- Considering leasing 58 airports to private entities; the national legislature passed a bill to allow these leases
- Plans a BOT contract with a private entity for the new Hanathawaddy Airport near Rangoon
- Plans to sell three major airports to private entities
- Plans to contract with a private entity to build and operate a new terminal at Lahore International Airport
- Plans a 10-year contract with a private entity to expand and maintain passenger and cargo facilities at Tocumen International Airport near Panama City
- Implemented a lease with a private entity to build and operate a terminal and runway at Jorge Chavez International Airport in Lima
- Plans a long-term agreement with a private entity to build a new terminal at Ninoy Aquino International Airport in Manila; plans a 25-year contract with a private entity to convert the former Clark Air Base into an international airport
- Plans a BOT contract with a private entity for a new international airport in Doha
- Plans a contract with a private entity to manage nonaeronautical activities at the airport in Moscow; plans a 25-year contract with a private entity to upgrade a runway and modernize the terminal at Kazan International Airport; plans contracts with private entities to expand Khabarovsk Airport and modernize Tolmachevo Airport
- Implemented private sector participation in the development of Changi International Airport
- Plans to sell Bratislava Airport to a private entity
- Sold shares in Zurich International Airport; 50 percent of the shares are privately held; a private firm operates the airport
- Plans to contract with a private entity to build a second international airport in Bangkok
- Implementing a BOT contract with a private entity for a new terminal at Piarco International Airport
- Plans a BOT contract with a private entity for a new terminal at Ataturk International Airport near Istanbul; plans a joint development agreement with a private entity for a new international airport near Sanliurfa
- Sold shares in seven airports (BAA); a local government sold Belfast International Airport to a private company formed by the airport employees; a regional government plans to sell shares in Birmingham International Airport and sold East Midlands International Airport to a private entity
- Plans a 20-year contract with a private entity to expand the terminal, build a new runway, and make other improvements at Laguna del Sauce International Airport near Maldonado
- Plans a long-term contract with a private entity to build, operate, and manage a new airport between Bolívar City and Guayana City in eastern Venezuela

John H. Anderson, Jr.
Paul M. Aussendorf
Jeanine M. Brady
Michael G. Burros
Charles R. Chambers
Jay R. Cherlow
Fran A. Featherston
Joseph D. Kile
Stanley G. Stenerson
Michael R. Volpe
Randall B. Williamson
Pursuant to a congressional request, GAO reviewed issues relating to airport privatization in the United States, focusing on: (1) the extent of private sector participation at commercial airports in the United States and foreign countries; (2) incentives and barriers to the sale or lease of airports; and (3) the potential implications for major stakeholders, such as passengers, airlines, and local, state, and federal governments, should airports be sold or leased. GAO found that: (1) none of the nation's commercial airports has ever been sold to the private sector, and only one has ever been leased; nevertheless, employees of private companies, including airlines, concessionaires, and contractors, account for 90 percent of all employees at the nation's largest airports; (2) the largest source of capital for airport development is long-term bond debt secured by future airport revenue and subject to the scrutiny of credit rating agencies; (3) in other countries, a majority of airports are owned and operated by their national governments, but 50 countries have sought greater private sector involvement in their airports; (4) several factors, such as providing additional private capital for development, are motivating greater interest in privatization, but legal and economic constraints impede the sale or lease of U.S. airports; (5) although FAA has permitted and even encouraged some limited forms of privatization, it has generally discouraged the sale or lease of an entire airport to a private entity; (6) FAA's proposed policy on the use of airport revenue states that FAA will consider privatization proposals on a case-by-case basis and will be flexible in specifying conditions on the use of airport revenue that will protect the public interest and fulfill restrictions on diverting revenue without interfering with privatization, but FAA has not specified these conditions; (7) predicting how various stakeholders might be affected by the sale or lease of airports largely depends on how such privatization might ultimately be implemented; (8) recognizing the barriers to and the opportunity to test the potential benefits of privatization, Congress established an airport privatization pilot program and, as of October 9, 1996, the Secretary of Transportation can exempt up to 5 airports from some legal requirements that impede their sale or lease to private entities; and (9) the pilot program also requires that a sale or lease agreement meet certain conditions, such as requiring that the private owner or lessee maintain airport safety and security at the highest levels.
In conducting our review, we assessed the Air Force's Year 2000 efforts against our own Year 2000 Assessment Guide. This guide addresses common issues affecting most federal agencies and presents a structured approach and a checklist to aid in planning, managing, and evaluating Year 2000 programs. The guidance, which is consistent with DOD's Year 2000 Management Plan and the Air Force's own Year 2000 management approach, describes five phases, supported by program and project management activities, with each phase representing a major Year 2000 program activity or segment. The phases and a description of each follow.

Awareness - Define the Year 2000 problem and gain executive-level support and sponsorship. Establish a Year 2000 program team and develop an overall strategy. Ensure that everyone in the organization is fully aware of the issue.

Assessment - Assess the Year 2000 impact on the enterprise. Identify core business areas and processes, inventory and analyze systems supporting the core business areas, and prioritize their conversion or replacement. Develop contingency plans to handle data exchange issues, lack of data, and bad data. Identify and secure the necessary resources.

Renovation - Convert, replace, or eliminate selected platforms, applications, databases, and utilities. Modify interfaces.

Validation - Test, verify, and validate converted or replaced platforms, applications, databases, and utilities. Test the performance, functionality, and integration of converted or replaced platforms, applications, databases, utilities, and interfaces in an operational environment.

Implementation - Implement converted or replaced platforms, applications, databases, utilities, and interfaces. Implement data exchange contingency plans, if necessary.

During our review, we concentrated primarily on the Air Force's efforts to oversee its Year 2000 program during the awareness and assessment phases. We focused our review on Year 2000 work being carried out by (1) DOD's Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (OASD/C3I), which is responsible for promulgating DOD guidance on the Year 2000 and providing assistance to Defense components; (2) Air Force headquarters, including the Air Force Communications and Information Center (AFCIC), which is responsible for day-to-day management and supervision and for issuing Air Force Year 2000 policy and guidance; (3) the Air Force Communication Agency, the designated Air Force Year 2000 program office; and (4) selected program offices managed by the Air Force Materiel Command's Aeronautical Systems Center at Wright-Patterson Air Force Base, Ohio. To assess OASD/C3I efforts in providing Year 2000 support to the Air Force, we met with the Acting Assistant Secretary of Defense for Command, Control, Communications and Intelligence, the Principal Director for Information Management, the Director for Information Technology, and other senior staff responsible for Year 2000 issues. We reviewed the office's Year 2000 guidance and other documentation on Year 2000 funding, reporting, and date format requirements.
To assess Air Force headquarters efforts to manage and oversee the Year 2000 computer problem, we (1) met with Air Force Communications and Information Center officials and Year 2000 focal points, (2) obtained and analyzed documents issued by these offices that describe the organizational structure and responsibilities for carrying out the Air Force Year 2000 program, and (3) reviewed the Air Force's Year 2000 Guidance Package to assess the level of guidance, roles and responsibilities, and target milestone dates for the Year 2000 effort. Further, we obtained and analyzed the Air Force Year 2000 inventory data to determine (1) the number of systems owned and operated by Air Force organizations and (2) the status of Air Force systems in their Year 2000 efforts, the proposed strategy, and the number of systems reported to be compliant. We reviewed pertinent Year 2000 program documentation such as Defense and Air Force guidance and management directives, working group minutes, status reports, and cost and schedule data.

We performed our work primarily at the Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; Headquarters Air Force at the Pentagon, Washington, D.C.; and the Office of the Assistant Secretary of Defense for Command, Control, Communications and Intelligence at Arlington, Virginia. We conducted our work from July 1996 through August 1997 in accordance with generally accepted government auditing standards. We received written comments on a draft of this report from the Chief Information Officer for the Department of the Air Force. His comments are discussed in the "Agency Comments and Our Evaluation" section and are reprinted in appendix II.

Most of the Air Force's automated information systems and embedded weapon systems are vulnerable to the Year 2000 problem, which is rooted in the way dates are recorded and computed in automated information systems. For the past several decades, systems have typically used two digits to represent the year, such as "97" for 1997, in order to conserve electronic data storage and reduce operating costs. With this two-digit format, however, the year 2000 is indistinguishable from 1900, 2001 from 1901, and so on. As a result of this ambiguity, system or application programs that use dates to perform calculations, comparisons, or sorting may generate incorrect results when working with years after 1999.

Should Air Force computer systems fail on January 1, 2000, Air Force operations at all levels could be affected by the incorrect processing of data, corrupted databases, or even massive system failures. In turn, this could result in such problems as delays in supply shipments, faulty inventory forecasts, unreliable budget estimates, and erroneous personnel-related information. Moreover, the problem could adversely affect critical warfighting functions such as combat, communications, command and control, intelligence, surveillance, reconnaissance, and air traffic control. Like the other military services, the Air Force has adopted DOD's Year 2000 management strategy, which calls for centralized oversight with decentralized execution of Year 2000 corrections.
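A brief illustration of the two-digit ambiguity described above may be useful. This sketch is generic, not Air Force code, and the sample values are invented; it shows how subtraction and chronological sorting both go wrong once "00" must be interpreted as 2000.

```python
# A small illustration (not Air Force code) of why two-digit years break date
# arithmetic and sorting after 1999: "00" (meaning 2000) compares as less
# than "99" (meaning 1999).

def years_between(yy_start: int, yy_end: int) -> int:
    """Naive two-digit-year subtraction of the kind embedded in legacy code."""
    return yy_end - yy_start

print(years_between(97, 99))   # 2   -- correct for 1997 to 1999
print(years_between(97, 0))    # -97 -- wrong: 1997 to 2000 should be 3

# Chronological sorting also fails: the year 2000 sorts first.
dates = ["97", "99", "00"]
print(sorted(dates))           # ['00', '97', '99']
```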
In February 1995, the Air Force designated the Air Force Communication Agency (AFCA) as the focal point for Year 2000 efforts, with responsibility for (1) coordinating Year 2000 efforts being carried out by its 9 major commands, 36 field operating agencies, and 3 direct reporting units; (2) ensuring that components completed Year 2000-related tasks on time; (3) developing Year 2000 guidance; (4) collecting and reporting progress and inventory-related data; and (5) chairing the Air Force Year 2000 working group, which is composed of representatives from the components. In April 1997, the Air Force established a Year 2000 program office at AFCA. The program office is currently staffed with 24 full-time personnel, and it reports to the Air Force Communications and Information Center (AFCIC). AFCIC, which was established in April 1997 as part of a Headquarters Air Force reorganization, is responsible for implementing Year 2000 policy and programmatic changes across the Service. AFCIC reports to the Office of the Chief Information Officer and has assigned three full-time staff members to oversee the Air Force's Year 2000 program. Appendix I illustrates the Air Force's Year 2000 organizational structure and describes the complexity involved in carrying out Year 2000 efforts at the command level.

Early in its Year 2000 effort, the Air Force introduced a five-phased management approach for addressing the Year 2000 problem, which was later adopted by DOD and the Federal Government CIO Council's Year 2000 Subcommittee. According to Air Force officials, if properly implemented, this phased approach will enable the Air Force to achieve its goal of having every mission-critical system compliant by December 1998. The five phases and their supporting program and project management activities are consistent with those identified in our Year 2000 Assessment Guide, which draws heavily on the best practices work of the CIO Council's Year 2000 Subcommittee. In addition to following the five-phase approach, our guidance addresses common issues affecting most federal agencies and provides a checklist to aid them in planning, managing, and evaluating their Year 2000 programs. Also, because the Year 2000 problem is a massive and complex management challenge, our guidance recommends that agencies plan and manage a Year 2000 program as a single large information system development effort and promulgate and enforce good management practices at the program and project levels.

To comply with DOD's current Year 2000 funding mandate, the Air Force does not plan to provide system/program managers with any additional funds to manage and fix the Year 2000 problem. Rather, system/program managers have been directed to reprioritize or reprogram previously budgeted funds (primarily operation and maintenance (O&M) funds) to fix Year 2000 problems.

The Air Force estimates that there are 2,944 automated information systems and embedded weapon systems in its inventory and that the majority of these systems will have to be renovated, replaced, or retired before January 1, 2000. Of the 2,944 systems, 550 (about 19 percent) are considered mission-critical systems, that is, they directly support wartime operations. As of September 4, 1997, the Air Force reported that all of its 2,944 systems had completed the awareness phase; 33 percent were in the assessment phase, 32 percent in renovation, 17 percent in validation, and 12 percent in implementation; and 6 percent will be decommissioned by December 1999.
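To illustrate how a phase-status summary like the percentages reported above might be computed, the following is a hypothetical sketch. The record layout, field names, and sample systems are invented for illustration and do not reflect the Air Force's actual Year 2000 database.

```python
# A hypothetical sketch of rolling up a Year 2000 system inventory into
# per-phase percentages. The record layout and sample entries are invented.

from collections import Counter

PHASES = ["awareness", "assessment", "renovation",
          "validation", "implementation", "decommission"]

inventory = [
    {"system": "SYS-001", "mission_critical": True,  "phase": "renovation"},
    {"system": "SYS-002", "mission_critical": False, "phase": "assessment"},
    {"system": "SYS-003", "mission_critical": True,  "phase": "validation"},
    {"system": "SYS-004", "mission_critical": False, "phase": "decommission"},
]

counts = Counter(record["phase"] for record in inventory)
total = len(inventory)
for phase in PHASES:
    print(f"{phase:>14}: {counts.get(phase, 0)} of {total} "
          f"({100.0 * counts.get(phase, 0) / total:.0f} percent)")

# Mission-critical systems are tracked separately, as in the report's Table 1.
print("mission-critical:", sum(1 for r in inventory if r["mission_critical"]))
```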
As of September 1997, the Air Force estimated that it will cost about $405 million to complete its Year 2000 program successfully. Table 1 details the status of Air Force systems according to their mission impact.

The Air Force has taken a number of positive steps to ensure that its personnel are fully aware of the impact should Air Force systems not be compliant at the turn of the century. For example, in November 1995, the Air Force established a Year 2000 working group composed of focal points from each major command, field operating agency, and direct reporting unit. This group has focused on such matters as sharing lessons learned, eliminating duplicative efforts, sharing resources, and tracking component progress. In the same month, the Air Force released an Air Force-wide impact assessment survey to all major commands, field operating agencies, and direct reporting units to obtain a rough order-of-magnitude estimate of the Year 2000 problem throughout the Air Force. The results of this survey indicated that the Air Force's Year 2000 problem would be significant and would require immediate and sustained management attention.

The Air Force has also addressed a number of steps associated with the assessment phase of Year 2000 correction, including the following:

- Developing a comprehensive Air Force-wide system inventory, which will include information on information systems, weapons systems, and infrastructure-related devices that could be affected by the Year 2000 problem.
- Prioritizing systems for conversion or replacement according to their mission impact.
- Tasking the Air Force Software Technology Support Center at Hill Air Force Base, Utah, to evaluate in-house and vendor tools and services that could be used to identify and fix Year 2000 problems.
- Creating a dedicated Year 2000 database, which contains system inventory-related information as well as information on component progress.
- Issuing a Year 2000 Guidance Package for senior managers and Year 2000 points-of-contact, which (1) explains how to prepare individual project management plans and develop Year 2000 strategies, (2) includes milestones and exit criteria for Year 2000 tasks, (3) provides a flowchart illustrating the five-phase resolution process, and (4) provides cost estimating formulas. This package is continually updated to reflect new managerial, technical, legal, and other Year 2000-related developments.
- Developing a checklist to assist system managers in ensuring that their systems are Year 2000 compliant, which covers (1) the identification of systems and interfaces, (2) assessment of date usage by the systems, and (3) compliance testing, among other subjects.
- Directing each major command and field operating agency to appoint Year 2000 certifiers to ensure that all systems belonging to the components have completed the necessary steps to become Year 2000 compliant.

The Air Force originally anticipated that it would complete the assessment phase of its Year 2000 effort in May 1997. It acknowledged that approximately 66 percent of its systems did not meet this deadline, and it subsequently revised the deadline to October 1997. However, as of September 4, 1997, about 33 percent of its systems had still not been assessed. With less than 26 months remaining before the Year 2000 deadline, this will add pressure on the Air Force to renovate, validate, and implement systems as quickly as possible.
According to an industry expert, June 1997 is apt to be the latest point at which an organization can start fixing systems and still have a reasonable probability of finishing before the year 2000. The Air Force's Year 2000 guidance, as well as GAO's and OMB's Year 2000 guidance, calls for a similar completion date. In addition, according to the Gartner Group, an independent contractor hired by Defense to provide Year 2000 technical support primarily in the areas of scheduling and cost estimating, no more than 26 percent of an organization's total Year 2000 effort should be spent in the awareness and assessment phases. Our analysis shows that the Air Force has used nearly 46 percent of its available time to complete these two phases. While Air Force officials acknowledge that the assessment phase is taking longer than expected, they do not believe it will significantly affect their Year 2000 program because system and program managers have already begun to fix systems identified with Year 2000 problems.

One reason for the delay in completing the assessment phase is that it has taken longer than anticipated to develop a complete systems inventory. Before its Year 2000 effort, the Air Force did not have a comprehensive servicewide system inventory. As such, it could not readily determine the magnitude (much less the cost to fix) of the Year 2000 problem servicewide when it began the assessment phase. While its inventory now contains 2,944 systems, the Air Force is still expanding it to include information on infrastructure-related devices, such as elevators, traffic control and security devices, telephone switching systems, and medical equipment. These devices rely on either microprocessors or microcontroller chips that may be vulnerable to Year 2000 problems. In addition, the Air Force is contending with slow and incomplete reporting by system and program managers. As a result, it has revised reporting requirements to facilitate better reporting on the part of its components.

Furthermore, the Air Force must still resolve discrepancies between its inventory and recent findings by the Air Force Audit Agency. In June 1997, the Audit Agency identified over 6,000 information systems that were not included in the Air Force inventory (which contained 2,543 systems at the time the audit was conducted). These additional systems included 1,600 mission-critical systems. The Air Force is currently reconciling its database with the audit findings. The Air Force has recently enlisted the Air Force Audit Agency to help evaluate component progress in completing the assessment phase. The agency will determine whether selected components have (1) completed timely assessments, (2) addressed all system interfaces, (3) accomplished mandatory system certifications, (4) prioritized and scheduled required renovations, and (5) developed contingency plans.

Even though the Air Force is entering the next phases of its Year 2000 correction effort, it has yet to complete several critical assessment steps, which are designed to ensure that it is well-positioned to deal with the later, and more difficult, phases of Year 2000 correction. These include (1) recalculating its $405 million cost estimate based on actual assessment data so that it can make informed choices about information technology priorities, (2) ensuring that interfaces are properly accounted for, (3) ensuring that components are developing contingency plans, and (4) ensuring that components are adequately prepared for the testing phase.
The Air Force Audit Agency audit should help the Air Force complete these steps; however, this work will be carried out only at selected sites, and it will not provide the comprehensive and continued oversight needed to ensure that the Air Force can handle unforeseen problems and delays.

As DOD's Year 2000 Management Plan and our Year 2000 Assessment Guide state, the primary purpose of the assessment phase is to gather and analyze information in order to determine the size and scope of the problem. Among other things, this enables an agency to estimate the cost of its Year 2000 effort in terms of dollars and work years and, in turn, to make informed choices about information technology priorities and whether other system development efforts should be deferred or canceled so that resources can be freed up to solve the Year 2000 problem. The Air Force, however, has not yet fully defined the scope of its Year 2000 problem or refined its cost estimates, using actual assessment data, in order to gauge what resources are needed for correction. The need to take immediate action in this regard is critical, given that some organizations are already discovering that they do not have sufficient funding to correct their systems.

Currently, the Air Force expects to spend about $405 million from fiscal year 1997 through 1999 to fix its Year 2000 problem. Table 2 breaks down the estimated cost by fiscal year. According to AFCIC officials, the cost estimate was calculated using the Gartner cost formula, which recommends multiplying the lines of code contained in the agency's automated information systems by $1.10 and the lines of code in its weapon systems by $8.00. The Gartner method is helpful in developing a rough estimate of what it will cost to resolve the problem early in the Year 2000 effort. However, according to a directive from Defense's Chief Information Officer as well as Year 2000 consultants, agencies should refine their cost estimates as they progress through the assessment phase and into the later Year 2000 phases to factor in the actual resources they believe are needed to renovate and implement their systems. According to DOD's Year 2000 Management Plan, these factors can include the following:

- The age of the systems being corrected. Age can have a significant impact on the cost of correction, since older code tends to be less structured and thus harder to understand and correct than newer code.
- The Year 2000 strategy that the program is pursuing. Strategies that involve keeping the two-digit code, for example, are much less expensive than those that involve changing the two-digit code to a four-digit code.
- The degree of documentation that is available on the system, its understandability, and the availability of source code.
- The skill and expertise of in-house programmers.
- Projected engineering costs.
- Labor hours required to fix systems.
- Testing requirements.

The September estimate still used the Gartner formula and did not take into account other factors that can have a significant impact on the cost of correction, including those identified in DOD's Year 2000 Management Plan. Air Force officials acknowledged that the $405 million estimate is a rough figure. They planned to reestimate costs at some point after the assessment phase was completed. Costs, however, should be continuously reestimated through the assessment and subsequent Year 2000 phases. By waiting to refine its cost estimates, the Air Force is delaying the availability of information needed to make informed resource trade-off decisions.
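As a rough illustration of the Gartner formula described above, the sketch below multiplies $1.10 per line of automated-information-system code and $8.00 per line of weapon-system code. The line-of-code counts are invented placeholders, not Air Force figures; the report itself notes that this method yields only a rough early estimate.

```python
# A rough sketch of the Gartner estimating formula described above. The
# dollar-per-line rates come from the report; the line-of-code counts are
# invented placeholders.

AIS_RATE = 1.10      # dollars per line of code, automated information systems
WEAPON_RATE = 8.00   # dollars per line of code, weapon systems

def gartner_estimate(ais_loc: int, weapon_loc: int) -> float:
    """Return a rough Year 2000 repair cost estimate in dollars."""
    return ais_loc * AIS_RATE + weapon_loc * WEAPON_RATE

# Hypothetical example: 200 million AIS lines, 23 million weapon-system lines.
print(f"${gartner_estimate(200_000_000, 23_000_000) / 1e6:.0f} million")
```

Because the formula depends only on code volume, it ignores the cost drivers listed above (code age, fix strategy, documentation, labor, and testing), which is why DOD guidance calls for reestimating as actual assessment data become available.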
In fact, trade-off issues and other funding disputes, which underscore the need to develop more accurate cost estimates, have already surfaced in some Air Force programs. For example, one aircraft weapon system program found that correcting the Year 2000 problem in ground software equipment that is used to program the aircraft's operational avionics software for navigation and weapons delivery would cost $42 million more than was budgeted for routine maintenance of the aircraft. In August 1997, the program office reported that it had fixed the problem for about $300,000 using a temporary workaround. However, according to a program office official, because the existing equipment consists of old IBM mainframes and outdated Jovial code, it will have to be replaced eventually, and likely at a higher cost, in order to support future planned aircraft enhancements such as the Joint Direct Attack Munition and the Joint Standoff Weapon.

In addition, the Air Force estimates that it will cost between $70 million and $90 million to fix telephone switches throughout the Service. This estimate is not included in the $405 million total Air Force Year 2000 cost estimate. The Air Force is currently in a dispute with the contractor that supplied the switches over who is responsible for Year 2000 correction. At the same time, Air Force components have not budgeted funds to fix their telephone switches. According to AFCIC officials, the Air Force has since begun to address this funding issue through its normal corporate funding process.

It is critically important during the Year 2000 effort that agencies protect against the potential for introducing and propagating errors from one organization to another and ensure that interfacing systems can exchange data through the transition period. According to our Year 2000 Assessment Guide, to address the issue of interfaces, agencies should (1) identify their internal and external interfaces, (2) determine the need for data bridges and filters, (3) notify outside data exchange partners of their interface plans, (4) test their interface correction strategies, and (5) develop contingency plans that address the possibility of failing to receive data from an external source or receiving invalid data. DOD's Year 2000 Management Plan places responsibility on component heads or their designated Year 2000 points of contact to document and obtain system interface agreements in the form of memorandums of agreement or the equivalent.

Since October 1996, the Air Force has participated in six high-level DOD Year 2000 interface workshops, covering finance, intelligence, command and control, communications, logistics, and weapons systems. However, to date, the Air Force has not been tracking (1) how its components are going about identifying their interfaces, (2) how they plan to correct interfaces, and (3) whether they are instituting memorandums of agreement in order to communicate their interface plans to their data exchange partners. It is important for the Air Force to begin tracking these issues immediately, since individual components are embarking on varying, and possibly conflicting, approaches to addressing interfaces. Moreover, other components have not yet addressed the interface issue. For example, none of the five weapon system program offices we surveyed had fully determined the actual impact or program status of their system interfaces. One program office told us that it did not plan to do so until the Air Force prescribed a uniform approach to interfaces.
In addition, we found that weapon system program offices' approaches to identifying their interfaces differed considerably. For example, the F-22 weapon system program formally requested its development contractor, in writing, to assess the impact of the Year 2000 problem on the aircraft. This assessment would include identifying interfaces and evaluating whether they pose a Year 2000 problem. By contrast, the F-16 program office planned to informally contact its subcontractors to identify the status of interfaces and Year 2000 issues for on-board components of the aircraft that the program office does not directly manage. For components that the program office directly manages, it plans to informally request that its contractor assess Year 2000 problems and identify the status of interfaces. However, that assessment will not be documented as the F-22 program office's assessment will be. Clearly, the second approach provides the Air Force with less assurance than the first that all interfaces have been accounted for.

Without centralized oversight of the identification and correction of interfaces, there is a chance that some systems and interfaces for which ownership is unclear may not be identified and corrected. In addition, there is a higher risk that conflicting interface solutions will be implemented without the data bridges necessary to ensure that information can still be transferred. For example, one system manager may choose to fix a system by expanding its date field to a four-digit year, while another may choose to keep the two-digit format and use procedural code or sliding windows as a strategy for becoming Year 2000 compliant. According to current Defense guidance, either fix is acceptable, but both parties need to know of the potential conflict so that they can install the data bridge. AFCIC plans to recommend that responsible system/program managers prepare interface memorandums of agreement, which describe the method of interface and assign responsibility for accommodating the exchange of data. If implemented, these agreements could ensure that information can be transferred even when components take conflicting approaches to their interfaces. At the time of our review, however, none of the five program offices we visited had prepared such agreements, and the Air Force was not tracking whether these or comparable agreements were being instituted.

Our Year 2000 Assessment Guide calls on agencies to develop validation strategies and test plans and to ensure that resources, such as facilities and tools, are available to perform adequate testing. This planning should begin in the assessment phase, since agencies may need over a year to adequately validate and test converted or replaced systems for Year 2000 compliance and since the testing and validation process may consume over half of a Year 2000 program's resources and budget. At the time of our review, however, the Air Force was not ensuring that components were developing test plans. It was also not assessing the need for additional testing resources, even though it acknowledged that these resources would be in demand. Instead, AFCIC officials told us that they are relying heavily on system/program managers to organize, plan, and manage the necessary resources to test Year 2000 fixes. Our review showed that more attention is needed in this area. For example, none of the five program offices we surveyed had completed a master Year 2000 test plan.
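To illustrate the conflicting fixes and the data bridge discussed above, the following sketch pairs a sliding-window interpretation of two-digit years with a bridge back to two digits for a partner system. It is a generic illustration, not DOD-prescribed code, and the pivot value is an assumption chosen for the example.

```python
# An illustrative sketch of the two fix strategies and the data bridge
# described above: one system expands years to four digits; its partner keeps
# two digits and interprets them with a sliding window. The pivot value is an
# assumption for illustration, not DOD guidance.

PIVOT = 50  # two-digit years below 50 map to 20xx; the rest map to 19xx

def window_to_four_digits(yy: int) -> int:
    """Sliding-window interpretation of a two-digit year."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

def bridge_to_two_digits(yyyy: int) -> int:
    """Data bridge: convert a four-digit year for a two-digit partner system."""
    return yyyy % 100

print(window_to_four_digits(3))    # 2003
print(window_to_four_digits(97))   # 1997
print(bridge_to_two_digits(2003))  # 3 -- the partner re-expands it via the window
```

The point of the memorandums of agreement discussed above is precisely to record which side of such an exchange applies the window and which side the bridge, so that neither partner misreads the other's dates.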
Due to the complexities and risks involved with testing, components that are not currently planning their testing strategies run a high risk of not completing the Year 2000 effort on time. This is because components must test not only the Year 2000 compliance of individual applications but also the complex interactions between scores of converted or replaced computer platforms, operating systems, utilities, applications, databases, and interfaces. Moreover, in some instances, components may not be able to shut down their production systems for testing and thus will have to operate parallel systems implemented in a Year 2000 test facility. Components may also find that they need computer-aided software testing tools and test scripts to help prepare and manage test data, automate comparisons of test results, and schedule tests. AFCIC officials themselves believe that there is a good chance that adequate test facilities may not be available to conduct joint interoperability testing involving systems that interface with one another. For these reasons, it is critical that Air Force headquarters ensure that components take time now to assess their testing needs and that the Air Force is well-positioned to provide components with additional testing facilities and tools. In August 1997, the Air Force working group began to address this testing issue in part by directing components to identify and develop an inventory of existing testing facilities that could support Year 2000 testing of selected platforms, such as Unisys and IBM. This effort is ongoing.

DOD's Year 2000 Management Plan and our Year 2000 Assessment Guide call on agencies to develop realistic contingency plans during the assessment phase for critical systems and activities to ensure the continuity of their core business processes. Contingency plans are important because they identify the manual or other fallback procedures to be employed should some critical systems miss their Year 2000 deadline or fail unexpectedly even after they are found to be compliant. Contingency plans also establish a series of checkpoints that allow the agency to identify performance problems early enough to correct them. The Air Force itself has acknowledged that components need to develop contingency plans, and it has directed system/program managers to prepare, at a minimum, contingency plans for all mission-critical systems. It has also incorporated this requirement into its assessment phase exit criteria. However, the Air Force has not been tracking the extent to which components have prepared plans for mission-critical functions and systems. Without greater oversight of the preparation of such plans, some components may fail to adequately plan for contingencies without the Air Force's knowledge. In fact, at the time of our review, none of the five system program offices we surveyed had prepared contingency plans. Officials from these offices told us that contingency plans were not needed because they believed that their systems did not require extensive Year 2000 work and thus their corrections would be made before the Year 2000 deadline. In addition, they did not believe that contingency planning was cost-effective. All Air Force organizations need to be engaged in contingency planning, since there is no guarantee that the corrections they make will be completed on time or be free of unforeseen problems.
As such, according to DOD’s Year 2000 Management Plan, components, at a minimum, need to (1) analyze the impact of a system failure, (2) identify alternative activities—including manual or contract procedures—to be employed should critical systems fail to meet their Year 2000 deadline, and (3) identify procedures and responsibilities for implementing such alternatives. Furthermore, given the dangers associated with not having contingency plans, we believe the Air Force headquarters’ oversight responsibility must involve ensuring that all components are planning for contingencies for mission-critical systems. To its credit, the Air Force has recognized that virtually every computer system it operates is vulnerable to the Year 2000 problem, it has raised the awareness of the Year 2000 problem among system owners, and it has begun assessing the Year 2000 impact on Air Force systems. However, the Air Force is unnecessarily putting its Year 2000 program at risk of failure because it has not yet refined cost estimates based on actual assessment data, fully examined resource trade-offs, and ensured strong and continuous oversight for interface, testing, and contingency planning issues. Because these steps are designed to ensure that organizations are well-positioned to deal with the more difficult stages of Year 2000 correction, neglecting any one of them can seriously endanger the Air Force’s ability to meet its Year 2000 deadline. Given its role in national security, and its interdependence with other military organizations, the Air Force cannot afford this risk. We recommend that the Secretary of the Air Force immediately require that the Air Force ensure its cost estimates factor in the actual resources it believes are needed to renovate and implement systems so that the Service can make informed resource trade-off decisions and ensure that this estimate is periodically refined throughout the Year 2000 program. We also recommend that the Secretary ensure that an approach is developed to continuously track how components are going about identifying interfaces, how they plan to correct interfaces, and whether they are instituting memorandums of agreement. In addition, we recommend that the Secretary ensure that components are developing test plans and identifying the need for additional testing resources and design an approach to obtain any needed testing resources that are identified by Air Force components. Finally, we recommend that the Secretary act to ensure that components have prepared contingency plans for their mission-critical systems. In written comments on a draft of this report, the Office of the Air Force Chief Information Officer agreed with all of our recommendations to improve the Air Force’s Year 2000 program. In response to our recommendations, the Air Force agreed to update its cost estimates as it progresses through the remaining Year 2000 phases and include actual resources needed to renovate and implement systems so that it can make informed resource trade-off decisions. The Air Force also agreed to place greater management attention on identifying system interfaces and improve reporting practices to ensure that interface corrections are properly accounted for and can be readily tracked. In addition, the Air Force agreed to have major commands and product centers outline and prioritize their test requirements to ensure that testing resources will be available when needed. 
The Air Force pointed out that it is working with components to develop Year 2000 contingency plans as part of the renovation and validation phases. In addition, the Air Force plans to open servicewide crisis response centers around August or September 1999 to deal with critical systems that will not be Year 2000 compliant by January 1, 2000. The Air Force is taking steps to ensure that contingency plans will be prepared for each noncompliant system identified and will be made readily available to the crisis response centers. The full text of the Air Force's comments is provided in appendix II.

This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this report. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report.

We appreciate the courtesy and cooperation extended to our audit team by Air Force officials and staff. We are providing copies of this letter to the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs; the Subcommittee on Oversight of Government Management, Restructuring and the District of Columbia, Senate Committee on Governmental Affairs; the Subcommittee on Defense, Senate Committee on Appropriations; the Senate Committee on Armed Services; the Subcommittee on Government Management, Information, and Technology, House Committee on Government Reform and Oversight; the Subcommittee on National Security, House Committee on Appropriations; and the House Committee on National Security. We are also sending copies to the Honorable Thomas M. Davis, III, House of Representatives; the Deputy Secretary of Defense; the Acting Assistant Secretary of Defense for Command, Control, Communications and Intelligence; the Air Force Chief Information Officer, Department of Defense; the Office of Management and Budget; and other interested parties. Copies will be made available to others on request. If you have any questions on matters discussed in this letter, please call me at (202) 512-6240 or John B. Stephenson, Assistant Director, at (202) 512-6225. Major contributors to this report are listed in appendix III.

As figure I.1 below indicates, the size and complexity of the Air Force's organizational structure pose a significant management challenge. Year 2000 management and oversight efforts will have to be coordinated among 9 major commands, each with a complex and diverse organizational structure of its own, 3 direct reporting units, and 36 field operating agencies. Figure I.2 provides an example of just one command's organizational structure.
To understand the complexity involved in carrying out Year 2000 efforts at the command level, consider the following: the Air Force Materiel Command employs about 112,000 personnel; the command manages about 1,700 computer applications and embedded systems; 175 of these systems cover 21 types of aircraft, including the F-22 and F-16 fighters, the B-1 and B-2 bombers, and C-17 cargo planes; 410 of these systems are business applications; 266 of these systems are applications covering command, control, communications, and intelligence activities; 915 of these systems are base-level owned and operated applications, such as local area networks and medical systems; and the Air Force Materiel Command alone has about 50 Year 2000 points-of-contact.

[Figure: Air Force Year 2000 organizational structure; only fragmentary labels survive extraction, including the Air Force Chief of Staff, the 11th Wing, Bolling AFB, Washington, D.C., and the Air Force Office of Scientific Research, Bolling AFB, Washington, D.C.]

Robert P. Kissel, Jr., Senior Evaluator
Steven M. Hunter, Senior Evaluator
Robert G. Preston, Senior Evaluator
Pursuant to a congressional request, GAO reviewed the Air Force's program for solving its year 2000 computer systems problem, focusing on: (1) the status of the Air Force's efforts to oversee its Year 2000 program; and (2) the appropriateness of the Air Force's strategy and actions for ensuring that the problem will be successfully addressed. GAO noted that: (1) as with other military services, the Air Force is taking a decentralized approach to year 2000 correction--that is, it is relying heavily on its components to identify and correct year 2000 problems affecting their own systems; (2) however, in providing oversight for this effort, the Air Force must ensure that all of its systems have been accounted for and that component actions are successful; (3) it must also be well-positioned to make the resource tradeoff decisions that are inevitable in any year 2000 effort and to address conflicts between component approaches toward identifying and correcting interfaces; (4) further, it must be able to provide additional resources, such as testing facilities, that may be necessary to correct and validate systems; (5) the Air Force has taken a number of positive actions toward fulfilling its year 2000 oversight responsibilities; (6) for example, it is taking inventory of its systems and prioritizing them for conversion or replacement, and it has issued extensive guidance on dealing with the year 2000 problem; (7) it has also established a year 2000 working group, composed of focal points from the components, which aims to eliminate duplicative efforts, share resources, and track component progress; (8) at the same time, the Air Force has not yet adequately addressed several critical issues that would ensure that it is well-positioned to deal with the later, and more difficult, phases of year 2000 correction; (9) GAO's review revealed that some components are failing to plan for the testing phase of their year 2000 effort and to develop contingency plans; (10) GAO also found that some components are taking conflicting approaches toward determining the actual impact or the program status of their system interfaces; (11) if components and the Air Force do not promptly address and take consistent action on these issues, they may well negate any success they may have in making systems within their control year 2000 compliant; and (12) while the Air Force has enlisted the services of the Air Force Audit Agency to help address some of these concerns, this work needs to be backed by comprehensive and continued Air Force oversight to ensure that the Air Force can address unforeseen problems and delays in the next, more difficult, phases.
Fuel for nuclear power plants consists of fingernail-sized pellets of uranium dioxide, a radioactive compound. The pellets are fitted into hollow metal rods, typically constructed of zirconium alloy, and the rods are then gas pressurized. The rods are generally 12 to 14 feet in length and are bundled together into assemblies. A portion of the assemblies must be replaced every 1 to 2 years as the fuel in the reactor expends energy, becoming less efficient at producing heat. As part of the process of expending energy during a nuclear reaction, the fuel becomes highly radioactive and thermally hot. Spent fuel emits radiation as a consequence of radioactive decay. Barriers such as thick walls, sealed containers, and water are used to shield individuals from exposure to this radiation. NRC regulates not only the construction and operation of commercial nuclear power plants but also the storage, transportation (together with the Department of Transportation), and disposal of spent fuel. NRC requires each operating nuclear power plant to have safety and security programs. For example, NRC requires protective shielding and security systems, including armed guards, at nuclear power plants. When spent fuel assemblies are removed from a reactor, they are stored in large pools of cooling water. These pools are constructed according to NRC’s requirements, typically with 4- to 6-foot thick steel-lined concrete walls and floors. Pools are typically 30 to 60 feet long, 20 to 40 feet wide, and 40 feet deep. The location of these pools is dependent on the type of reactor. Essentially, all commercial power reactors in the United States are one of two types, either a boiling water reactor or a pressurized water reactor. For most boiling water reactors, the pools are located close to the reactors, several stories above ground. For pressurized water reactors, the pools are located in structures outside the reactor building, on the ground or partially embedded in the ground. Regardless of reactor type, these pools are required by NRC to be constructed to protect public health against radiation exposure, even after a natural disaster, such as an earthquake. The water in the pool is constantly cooled and circulated, and the fuel assemblies are generally 20 feet below the surface of the water. In 1982, through the Nuclear Waste Policy Act, the Congress directed DOE to construct an underground repository for disposal of spent fuel and other high-level radioactive waste. The Congress amended the act in 1987 and required DOE to only consider Yucca Mountain, Nevada, as a potential site for a repository. In 2002, the President recommended to the Congress, and the Congress approved, Yucca Mountain as a suitable site for the development of a permanent high-level waste repository. As we reported in 2001, for a variety of reasons, DOE is unlikely to open the repository as planned in 2010. Lacking a long-term disposal option now, some nuclear utilities must move a portion of their spent fuel into dry storage or face shutting down their plants because their wet pools are reaching capacity. Currently, 25 of the 72 storage sites use dry storage, and 11 other sites have plans to move some of their inventory of spent fuel into dry storage. Dry storage facilities for spent fuel typically consist of steel containers that are placed inside concrete vaults or bunkers where the fuel is cooled by air rather than water. 
These storage systems are required by NRC to be capable of protecting against radiation exposure and of surviving natural disasters. Because the move to dry storage is time-consuming and expensive, utilities are, wherever possible, modifying wet pool storage capacity so they can store larger quantities of spent fuel in these pools. To expose a large number of people to the harmful effects of radiation from spent fuel, the fuel would have to be released from its protective containers and dispersed over a wide or densely populated area. However, unlike many other hazardous materials, spent fuel is a hard, heavy ceramic material that is neither explosive nor volatile. To achieve a wide dispersal, some portion of the spent fuel assemblies would have to be pulverized into small particles by an external force—such as a high-speed impact or a violent explosion—or some portion of the spent fuel assemblies would have to burn in a sustained, high-temperature fire. According to NRC, the redundancy and robustness of the designs of the fuel containers make wide dispersal highly unlikely. In the event of a dispersal, the most significant health effects would involve persons who inhaled very small (respirable) particles—10 microns or less in diameter. Such particles would be absorbed into the body and possibly remain there for many years. In addition, these particles could be deposited on buildings and the ground where, in the absence of a costly cleanup effort, they could expose people to elevated levels of radiation. The transportation of spent fuel to Yucca Mountain—most likely by both truck and rail, but with a preference for using mostly rail—will be a major undertaking, spanning 20 to 30 years. According to DOE, more than 50,000 tons of the spent fuel have accumulated at 72 sites in 33 states, many located near urban areas in the Midwest and the East. DOE has estimated that the accumulated inventory will have grown to 69,000 tons by 2010 and that moving this volume could require approximately 175 shipments per year over 24 years, relying on a combination of truck and rail shipments. For the transportation of spent fuel, NRC has certification and inspection requirements for shipping containers to ensure that the containers protect against radioactive releases under accident scenarios. NRC has certified a number of shipping container designs for use on trucks and rail. The Nuclear Waste Policy Act of 1982, as amended, requires DOE to ship spent nuclear fuel and high-level radioactive waste to Yucca Mountain in containers that have been certified by NRC. The act also requires DOE to notify NRC in advance of spent fuel and high-level radioactive waste shipments. In addition to NRC, the Department of Transportation plays a role in regulating the transportation of spent fuel and other high-level waste. The department’s Research and Special Programs Administration sets certain safety standards for the transportation of hazardous materials, including spent fuel. These standards include, among other things, documentation and labeling of containers, including placards identifying the shipment, and requirements for separating certain radioactive materials while in transit. The Federal Motor Carrier Safety Administration oversees the safety of shipments by highway, and the Federal Railroad Administration oversees the safety of shipments by rail. The U.S. Coast Guard oversees the safety of shipments that may be made by barge. 
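The scale of this campaign can be checked with back-of-the-envelope arithmetic. The sketch below simply combines the DOE estimates cited above; the per-shipment average it produces is illustrative only, since actual shipments would mix smaller truck containers with much larger rail containers, so individual loads would vary widely.

    # Back-of-the-envelope check of the shipping-campaign arithmetic above.
    # Uses only the DOE estimates cited in the text; the per-shipment
    # average is illustrative, since real shipments would mix smaller
    # truck containers with much larger rail containers.
    total_tons = 69_000        # projected spent fuel inventory by 2010
    shipments_per_year = 175   # DOE planning estimate
    campaign_years = 24        # DOE planning estimate

    total_shipments = shipments_per_year * campaign_years   # 4,200
    avg_tons = total_tons / total_shipments                 # about 16.4

    print(f"Total shipments: {total_shipments}")
    print(f"Average tons per shipment: {avg_tons:.1f}")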
Studies conducted by NRC and DOE have consistently found that the likelihood of widespread harm to human health from a terrorist attack or a severe accident involving spent fuel is very low. None of the studies involving the transportation of spent fuel or dry storage of spent fuel identified a scenario resulting in widespread harm—largely because of the protective containers required by NRC. For example, these studies repeatedly found that transportation containers would be very difficult to penetrate, and in the worst-case scenarios where they may be penetrated, only a small fraction of the material would be released. Some studies involving spent fuel stored in pools of water found that widespread harm is possible under severe but unlikely accident conditions. Such conditions may include a catastrophic earthquake or a severe but unlikely accident that could uncover the fuel for several hours, possibly allowing it to spontaneously ignite and scatter radioactive material over a wide area. To respond to increased security concerns stemming from the September 11, 2001, terrorist attacks, NRC is further studying the safety and security of spent fuel in transit and in wet or dry storage, including the potential effects of more extreme attack scenarios such as deliberate aircraft crashes. Since the late 1970s, federal studies have examined the effects of both terrorist acts of sabotage and severe accidents involving shipping containers for spent fuel. Sabotage studies have sought to determine whether radioactive material could be released from shipping containers in specific sabotage scenarios, while accident studies have assessed whether radioactive material could be released in a variety of accidents, and the overall probability of their occurrence. Some of these studies were commissioned by NRC, and others by DOE, and many of them were conducted through DOE’s Sandia National Laboratory and other DOE laboratories. These studies collectively indicate that the construction of the shipping containers helps to limit releases. Although NRC is confident in these results, it is sponsoring assessments to further validate computer models and address heightened security concerns. The most recent sabotage study—conducted by DOE’s Sandia National Laboratory for DOE in 1999—estimated the amounts and characteristics of releases of radioactive materials from truck and rail spent fuel containers subjected to two different types of weapons. The results of this study confirmed the findings of earlier studies that armor-piercing weapons could penetrate shipping containers and release small quantities of radioactive material. The study found that, under a worst-case scenario, the weapon could penetrate a shipping container and release a small amount of material—equal to about 0.016 of 1 percent of the spent fuel in the container—as small, respirable particles. These small, respirable particles could become airborne and spread beyond the immediate vicinity of the attack. A subsequent DOE-sponsored report used the results of the 1999 Sandia National Laboratory study to estimate the human health impact of the most severe release. Using a computer-based analytic model and conservative assumptions, DOE’s contractor found that the predicted release from a truck container would cause about 48 cancer deaths over the long term and that a predicted release from a rail container would cause about 9 cancer deaths over the long term. 
The contractor's analysis explained that these cancer deaths should be considered against a backdrop of 1.1 million cancer deaths expected among the same population from other causes. This analysis assumed that the release would occur in an urban area with a population projected to the year 2035 under stable weather conditions. The analysis also assumed that the spent fuel release would contain twice the radioactive content of a typical spent fuel shipment and that there would be no evacuation or cleanup of the affected area for 1 year after the incident. These studies are the most recent in a series of studies dating back to the 1970s. According to NRC and DOE officials, confidence in the results of these studies has increased significantly as better data and more sophisticated analytic techniques have been used. Appendix II contains a fuller description of the methodology of these recent studies and the results of previous studies. Since the 1970s NRC has also sponsored a series of studies examining the risk that spent fuel could be released during transportation accidents. NRC's most recent assessment of spent fuel transportation accident risks was conducted for NRC by Sandia National Laboratory and was published in 2000. The 2000 Sandia National Laboratory study, like preceding accident studies, found that an accidental release of spent fuel in transit is very unlikely and that significant human health impacts are even less likely. The study estimated that in over 99.9 percent of all truck and rail accidents, the shipping container would experience no significant damage, and no radioactive material would be released. In fact, the analysis found that only 7 in 100,000 (0.007 of 1 percent) truck accidents and 4 in 100,000 (0.004 of 1 percent) rail accidents would involve spent fuel casks in impacts or fires that might cause a release of radioactive material. While this study did not project the human health impacts of particular accident scenarios, it concluded that the overall risk of human exposure to accidental releases of spent fuel was far less than that estimated in the 1977 study, which confirmed that NRC's safety and security regulations then in place were adequate. A subsequent DOE-sponsored study used the results of the 2000 Sandia National Laboratory study to determine the potential health effects of the estimated quantity of material released. DOE's contractor used the estimated amount of material released in what DOE determined as the most severe reasonably foreseeable accident to estimate the number of latent cancer fatalities that could result from severe accidents while shipping spent fuel to the Yucca Mountain repository. From this study, DOE concluded that this type of accident—having a probability of occurring about 2.8 times in 10 million accidents per year—could cause about 5 long-term latent cancer fatalities—far fewer than its estimate of 48 latent cancer deaths in the event of a successful sabotage attack with armor-piercing weaponry. Apart from this type of accident, DOE found that the probability of any deaths due to an accidental release of radiation was quite small. DOE's final environmental impact statement for Yucca Mountain projected that accidents over 24 years of shipping would cause fewer than 0.001 latent cancer fatalities. In contrast, DOE projected that these same shipments had a much greater probability of resulting in deaths due to normal traffic accidents—between 2.3 and 4.9 traffic fatalities over the same 24-year period.
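The relative magnitudes in these projections follow from expected-value arithmetic: the risk contribution of a scenario is its probability multiplied by its consequence. The sketch below illustrates the calculation with the figures cited above. It is a simplification, not a reconstruction of DOE's model, which integrates over many accident severities, routes, and weather conditions, and the report leaves the exact basis of the probability figure (per accident versus per shipment-year) ambiguous.

    # Illustrative expected-value arithmetic using the figures cited above.
    # A simplification, not DOE's model, which integrates over many
    # accident severities, routes, and weather conditions.
    p_severe = 2.8e-7      # probability of the most severe reasonably
                           # foreseeable accident (about 2.8 in 10 million)
    lcf_if_severe = 5      # latent cancer fatalities if that accident occurs

    expected_lcf = p_severe * lcf_if_severe
    print(f"Expected latent cancer fatalities for this scenario: {expected_lcf:.1e}")
    # Prints 1.4e-06 -- small enough that, even accumulated over a 24-year
    # shipping campaign, the total remains consistent with DOE's projection
    # of fewer than 0.001 latent cancer fatalities.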
As with the sabotage studies, these studies of accident scenarios are the most recent in a series of studies dating back to the 1970s. According to NRC and DOE officials, confidence in the results of these studies has increased significantly as better data and more sophisticated analytic techniques have been used. Appendix II contains a fuller description of the methodology of these recent studies and the results of previous studies. Although NRC believes that the results of the federally sponsored studies are valid, it has several evaluations ongoing and planned to further assess its security and safety measures. To assess its existing security measures following the September 11, 2001, terrorist attacks, NRC initiated a commissionwide review. As part of this review, NRC commissioned Sandia National Laboratory to examine more severe terrorist attack scenarios involving spent fuel shipping containers. For example, the laboratory will assess the effects of (1) a 20-passenger aircraft loaded with explosives crashing into shipping containers and (2) a sustained attack on these containers using a variety of weapons in combination. As part of an ongoing process to assess its safety measures, NRC has a number of ongoing and planned studies. NRC commissioned Sandia National Laboratory for further validation of computer models used to evaluate the safety of shipping containers. To solicit comments on the scope of its evaluation, NRC held a series of public meetings beginning in 1999. It considered comments obtained during these meetings and issued an interim report in 2002 that recommended several additional studies. Although these studies are still being designed, their preliminary objectives include (1) validating past computer-based predictions of damage to containers resulting from collisions, (2) validating past computer-based predictions of how well containers withstand fires, and (3) identifying the response of fuel pellets, fuel rods, and fuel assemblies in severe impacts. In contrast to past analyses of severe accident scenarios, the studies are to include physical tests of full-scale, current-model shipping containers. The results of these physical tests will be compared to the predictions of past computer-based analyses and serve either to validate or to correct those results. The studies are also to address some of the technical issues that were not adequately addressed by past accident analyses. For example, while past studies relied on expert judgment to assess the complex chain of variables involved in releasing respirable spent fuel from containers—including fracturing spent fuel rods and pellets—the planned studies will examine these events experimentally. According to NRC officials, the studies are expected to be completed by 2006. NRC studies have reported that a risk of widespread harm to human health from spent fuel arises from the remote possibility of a sustained loss of coolant in a spent fuel pool. Such a loss could potentially lead to a fire that would disperse radioactive material across a wide area. NRC's most recent published study of this risk, released in 2001, found that, though the potential consequences of such a fire could be severe—nearly 200 early fatalities and thousands of latent cancer fatalities—the likelihood of such a fire is low. The study estimated that a catastrophic earthquake or a severe but unlikely accident, such as dropping a 100- to 150-ton storage container into the pool, could precipitate a pool fire.
The study was conducted to assess the risks associated with accidents at nuclear reactors that have been permanently shut down. According to NRC, once the fuel is removed from the reactors, there is a risk associated with the fuel stored in pools. NRC designed the study with conservative assumptions to identify the most severe possible impact on public health. The study assessed a variety of natural disasters and accidents that could drain coolant and cause a fire. These events included loss of electrical power, which would shut down the pool cooling system; an event that would significantly damage the pool cooling system; a drop of a heavy load, which could damage the pool wall or floor; a severe earthquake; and an accidental aircraft crash. The study found that a catastrophic earthquake and a heavy load drop were the events most likely to significantly damage the pool, leading to sustained loss of coolant and potentially causing a fire. The study then calculated the amount of radioactive material that might be released by a fire and the possible human health effects stemming from exposure to this material. In making these calculations, the study made various conservative assumptions to ensure that NRC identified the most severe consequences possible. For example, the study assumed that a pool fire would involve 100 percent of the spent fuel assemblies in the pool, releasing large amounts of radioactive material into the atmosphere. Two of the authors of the study noted that it was not certain how many spent fuel assemblies would actually burn in a fire. The uncertainty in the amount of radioactive material released depends on the fuel age and distribution in the pool and the characteristics of the accident scenario. The authors noted that some spent fuel assemblies might not reach the high temperatures required to burn and that some of the radioactive material might remain trapped in the pool or building. Because spent fuel decays and thus becomes less dangerous over time, the study evaluated scenarios in which the reactor had been shut down for 30 days, 90 days, 1 year, 2 years, 5 years, and 10 years. For each scenario, the study evaluated two levels of radioactivity released from the fuel. NRC used the results of this study to calculate the potential health effects of a fire in a spent fuel pool. These results are shown in table 1. The study noted that the results are based on a natural disaster or an accident severe enough to lead to a pool fire and that the risk of such an event occurring is very low. NRC also noted that part of the reason for the low probability is NRC’s defense-in-depth policy, which states that NRC establishes requirements to ensure that safety will not be wholly dependent on any single system. Instead, NRC’s requirements ensure multiple or redundant safety systems. In the case of the storage pool studied in the 2001 report, NRC noted that several factors combine to make a pool fire unlikely, including the robust design of the pool; the simple nature of the pool support systems; and the long time required to heat up the fuel, which allows time for operators to respond. For example, according to the 2001 report, heating the least-decayed spent fuel to the ignition point—were it to occur at all—would take hours, perhaps even days. Thus, NRC officials explained that even if a massive loss of coolant occurred, plant operators might still have time to react, depending on the extent of the damage. 
NRC requires that nuclear power plants have a backup water supply that can cool fuel in case of an accident, so, depending on the extent of damage, plant operators might be able to keep the fuel submerged. The risk of a pool fire is also limited by the ability of some of the fuel to be cooled by simple air ventilation if the coolant drains out. According to NRC, completely draining a pool may allow enough air ventilation among the stored fuel assemblies so that the spent fuel would stay below the ignition point of a self-sustaining fire (about 1,650 degrees Fahrenheit). Furthermore, even if a fire did begin in one assembly, there is considerable uncertainty about whether the fire would spread to other assemblies. A 1987 study of spent fuel pools found that spent fuel in pools with fewer assemblies, after being cooled for just a few weeks, would not ignite if subjected to loss of coolant. Under the dense storage conditions characteristic of most spent fuel pools today, however, air ventilation becomes less effective. To begin addressing some of the uncertainties regarding the risks of storing spent fuel in wet storage pools, NRC recently completed initial evaluations of sabotage attacks on these pools and has additional work planned and under way at two DOE national laboratories. Following the terrorist attacks of September 11, 2001, NRC commissioned the U.S. Army Corps of Engineers to examine the potential effects of sabotage directed at spent fuel pools. The Corps conducted several computer-based analyses of the potential effects of armor-piercing weapons and high explosives on typical spent fuel pools. The analyses found that the penetration of armor-piercing weapons and high explosives could vary considerably, depending, among other things, on the size of the weapon or explosive and the sophistication of the attacker. NRC is also conducting studies with less conservative assumptions to more realistically evaluate the risks of spent fuel in a drained pool. NRC has contracted with Argonne National Laboratory to study the conditions necessary to ignite a pool fire. NRC has also contracted with Sandia National Laboratory for a series of studies to define potential threats and to identify potential vulnerabilities, as well as regulatory improvements or legislative initiatives that could improve security and safety and better protect public health. The studies by Sandia National Laboratory include a review of a variety of terrorist scenarios, including attacks on fuel pools with aircraft and high explosives. According to NRC, preliminary results of these studies indicate that spent fuel may be more easily cooled than has been predicted in some past studies and that off-site radiological releases may be substantially reduced from previous worst-case estimates. Predicted public health effects might also be substantially reduced for the worst scenarios, in which coolant is lost and recovery actions do not succeed in cooling the fuel. Dry storage containers, like shipping containers, pose a considerable barrier to releasing spent fuel. Used to store spent fuel when it is removed from wet storage, dry storage containers are constructed of layers of steel and radiation barriers such as concrete. In establishing regulations for dry storage of spent fuel, NRC stated in 1998 that dry storage containers are structurally similar to shipping containers and that the results of sabotage studies on shipping containers could reasonably be applied to dry storage containers.
Nevertheless, NRC is continuing to study potential risks of releases from dry storage containers. Studies by DOE and the Corps on dry storage containers have generally reached the same conclusion—that the thick walls of the containers, consisting of an inner steel container and an outer steel or concrete container, could not be penetrated by airplane crashes and would allow no significant release of radiation when attacked with advanced weapons. Two DOE-sponsored reports, released in 1998 and 2001, found that airplane crashes would not penetrate dry storage containers. The reports focused on the most penetrating components of the commercial jet aircraft: the engines and landing gear. Both reports concluded that although airplane crashes could damage the containers, no radioactive material would be released. The analysis showed that the containers would break up the airplane, spreading jet fuel over a wide area, causing the jet fuel to dissipate or burn without affecting the spent fuel in the containers. Two other studies, performed in 2001 by the Corps, found that the containers would not release significant amounts of radioactive material when attacked by armor-piercing weapons or high explosives. The study examining the effect of armor-piercing weapons found that the penetration of the containers was very limited. NRC and DOE officials and independent experts told us that, based on a previous analysis and similar studies involving shipping containers, the weapons would not likely cause a significant release. The study examining the effects of high explosives found that the explosives would not completely penetrate the container. The study showed extensive exterior damage, but no penetration to the spent fuel. NRC is continuing to study potential risks to dry storage. NRC has contracted with Sandia National Laboratory to assess the vulnerability of dry storage containers to terrorist attacks, including a further analysis of aircraft crashes and the effects of high explosives. In addition, the laboratory will investigate measures to mitigate any vulnerability identified through the assessment. As DOE develops its plans for shipping spent fuel to the Yucca Mountain repository, the agency has several potential options for enhancing the security of spent fuel during the Yucca Mountain shipping campaign. Specifically, DOE could potentially minimize its total number of spent fuel shipments, ship the fuel in an order that reduces risk, or transport the fuel on railroad trains dedicated exclusively to hauling spent fuel. Not all of these options may be feasible under the terms of DOE's contracts with spent fuel owners, and some options for shipping in a particular order would conflict with one another. DOE could enhance the overall security of spent fuel by minimizing the total number of shipments. Fewer shipments would present fewer potential targets for terrorists and could also enhance safety because there would be fewer chances for an accident. Representatives of the nuclear power industry and nuclear safety experts that we contacted agreed on these points. For example, a representative of a consortium of nuclear utilities told us that shipping spent fuel by rail is preferable to shipment by truck because spent fuel containers designed for rail can carry about 5 times more spent fuel than truck containers. This larger capacity translates to fewer shipments overall.
Similarly, a frequent critic of the safety of spent fuel shipments agreed that fewer shipments would be better, noting that fewer, large shipments are easier to protect and track. Beyond expressing a preference for shipping spent fuel to Yucca Mountain mostly by rail, DOE has not yet developed its plans to implement the shipping campaign. In addition to providing security advantages, minimizing the number of shipments by using rail provides safety and efficiency benefits. According to a 1998 Department of Transportation report, rail was the safer mode for shipping large amounts of spent fuel. The report states that minimizing trips usually reduces total risk by reducing risks associated with routine radiation exposure—such as the incidental exposure experienced by transportation and plant workers while shipping containers are being prepared—as well as accident-related exposure and other nonradiation accident consequences. DOE’s ability to minimize the total number of shipments may be limited by its contracts with owners of spent fuel. Under the contracts, DOE is to establish a shipping queue, in which each utility has shipping rights based on the date and quantity of fuel removed from a reactor. In many cases, the places in the queue correspond to quantities of spent fuel that would fill less than three large rail containers—an amount that, according to the Association of American Railroads, would be a reasonable size for a single rail shipment. If strictly followed, the queue could result in many more shipments than necessary. For example, the 12 spent fuel owners with the largest quantities of spent fuel would make approximately 576 shipments based on the shipping queue. On the other hand, if these 12 owners consolidated all their shipments into rail containers and used 3 containers per shipment, they could reduce their total shipments to 479, a 17 percent reduction. If these same owners consolidated shipments into 5 rail containers per shipment, which according to DOE is another possible option, total shipments could be reduced to 287—a nearly 50 percent reduction. DOE could also enhance security by shipping spent fuel in an order that minimizes risk. There are at least three shipping orders that would potentially reduce risk: (1) shipping fuel from shutdown nuclear reactors first, reducing the number of sites storing spent fuel; (2) shipping the oldest and least radiologically dangerous fuel first to reduce transportation risk; or (3) shipping fuel from storage pools first, reducing the likelihood of a pool fire. Shipping fuel first from shutdown nuclear reactors would be permissible under DOE’s contracts with fuel owners, but the contracts might preclude the other two options. Further, to some extent, these options conflict with one another. For example, an emphasis on shipping fuel from spent fuel pools first could leave some older fuel in dry storage at current storage facilities. Data are not available to determine which order would provide the greatest risk reduction. DOE could potentially enhance the overall security of spent fuel by first shipping fuel currently stored at shutdown nuclear reactor sites. Currently, about 4,100 tons of spent fuel—about 8 percent of the total stored nationwide—are stored at 14 shutdown nuclear reactors. Because nine of these sites will not be accumulating additional spent fuel, clearing their spent fuel inventory would eliminate them as potential targets of a terrorist attack. 
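Returning briefly to the consolidation estimates above: they follow from dividing a fixed number of loaded rail containers into larger shipments. Below is a minimal sketch of that arithmetic. The total of roughly 1,435 rail containers for the 12 largest owners is inferred here from the report's own shipment counts (287 shipments of 5 containers each); it is an illustrative assumption, not a figure taken from DOE data.

    # Minimal sketch of the shipment-consolidation arithmetic above.
    # total_containers is inferred from the report's own figures
    # (287 shipments x 5 containers); it is not a number from DOE data,
    # and the per-owner breakdown is ignored for simplicity.
    import math

    total_containers = 1_435

    for per_shipment in (3, 5):
        shipments = math.ceil(total_containers / per_shipment)
        print(f"{per_shipment} containers per shipment -> {shipments} shipments")
    # 3 containers per shipment -> 479 shipments
    # 5 containers per shipment -> 287 shipments

Against the roughly 576 shipments implied by the unconsolidated queue, these figures reproduce the 17 percent and nearly 50 percent reductions cited above.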
DOE recognized the potential importance of removing spent fuel from shutdown reactors when it established its contracts for disposal of spent fuel. Although the contracts establish a shipping queue, the contracts allow DOE to override the queue to make an exception for spent fuel from shutdown reactors. Specifically, the contracts provide that, notwithstanding the age of spent fuel, priority may be accorded any spent fuel removed from a civilian nuclear power reactor that has reached the end of its useful life or has been shut down for whatever reason. DOE could lower the risk of transporting spent fuel by shipping the oldest spent fuel first. Radioactivity emitted by some components of spent fuel declines significantly over comparatively short periods of time. For example, one of the more radioactive elements in spent fuel—cobalt—accounts for about 90 percent of the gamma radiation emitted by spent fuel when it is first removed from the reactor. Because cobalt-60 has a half-life of only about 5 years, however, this gamma radiation largely decays away after about 25 years. In contrast, cesium—a comparatively volatile element that would be a major component of any accidental or deliberate release—declines by half only after about 30 years, and uranium-235 requires about 704 million years for its radiation output to be cut in half. Shipping older spent fuel first could therefore be preferable in the event of a deliberate or accidental release during transit. For example, a release of spent fuel that is 25 or 30 years old would be a lesser—though still significant—threat to public health than fuel that is only 5 or 10 years old. Analyses performed for DOE's environmental impact statement for the Yucca Mountain repository illustrate the reduced impact that a release of older spent fuel can have on public health. In the draft environmental impact statement, DOE estimated that a particular release due to a sabotage attack could result in about 16 latent cancer fatalities. This scenario assumed that the shipped fuel was about 23 years old, which is approximately the average age of the inventory of spent fuel. The final environmental impact statement analyzed the same scenario, except that it assumed that the shipped fuel was about 15 years old. This analysis found that such a release would cause about 48 latent cancer deaths—3 times as many as the older fuel. The age of the fuel was one of two major factors that resulted in the higher estimate of latent cancer fatalities in the final statement. DOE noted that the younger, more dangerous fuel, such as spent fuel discharged from a reactor within the past 5 years, makes up a small percentage of the total inventory of spent fuel. As a result, the youngest, hottest fuel would be less likely to be shipped or would represent a small fraction of the fuel that is shipped. Several observers we contacted also saw advantages in shipping older spent fuel first. An analyst under contract with the state of Nevada noted that shipping the oldest fuel first would be the most important factor in protecting public health during transit. Not only would older fuel have lower consequences if released in an accident or a terrorist event, but it also would be safer for transportation workers—drivers and handlers at intermodal transfer points—and the general public. A representative of the National Research Council's Board on Radioactive Waste Management told us that shipping the oldest fuel first would help minimize potential human health consequences in the event of a release during transit.
However, this representative said that if one assumes that the robust shipping containers make a release unlikely, the potential risk reduction associated with the age of the fuel becomes less important. Regardless of the potential transportation-related security benefits, DOE's contracts with spent fuel owners limit its ability to ship the oldest fuel first. In addition to establishing a shipping queue, the contracts allow each fuel owner discretion to decide which of its spent fuel is actually delivered to DOE, commensurate with the quantity of fuel associated with a particular spot in the queue. For example, the Exelon company—the nation's largest nuclear power company—has a place in the queue for about 35 tons of spent fuel removed from a reactor located at its plant in Zion, Illinois. When the time comes to ship this fuel to the repository, Exelon may deliver either this fuel or an equal quantity of fuel—possibly much younger and more radioactive fuel—from any of its facilities in Illinois, Pennsylvania, and New Jersey. Because owners have discretion to choose which fuel they will actually ship under the terms of the contract, DOE does not have the ability under the contract to require that the oldest fuel be shipped first. Fuel owners will likely select spent fuel for shipment based on their operational needs. For example, representatives of Progress Energy, a fuel owner with reactors in the Southeast, said they will likely ship from their pools first because their pools are reaching capacity. Similarly, an Exelon official said that shipping from pools first would minimize the need for dry storage facilities. As discussed in the first section of this report, a fire in a wet storage pool, while highly unlikely, is theoretically possible. Shipping spent fuel from densely packed spent fuel pools first could have security benefits. Because DOE has not yet opened a permanent repository, spent fuel has accumulated in quantities that pools were not originally designed to contain. NRC officials noted that while a few spent fuel pools have low density in at least part of the pools, nearly all pools are densely packed. These densely packed pools contain, on average, about 3.5 times more spent fuel than they were originally designed to store. Reducing the density of spent fuel in the pools would reduce the likelihood of a fire. Recent NRC and independent studies show that lower-density configurations allow for greater spacing between assemblies, which allows air to circulate more efficiently in the event of coolant loss. According to these reports, greater spacing could also help prevent a fire from spreading among assemblies. Also, in the unlikely event of a fire, fewer assemblies in the pool could result in reduced consequences. As noted earlier, DOE's contracts limit its ability to influence the order in which spent fuel is shipped. Some owners may prefer to ship fuel from densely packed pools first because when the pools reach full capacity, the fuel must be removed or the plant must shut down. To the extent that, as Exelon and Progress Energy officials stated, utilities are likely to ship from their wet pools first, the threat would be reduced earliest at these pools. This would, however, result in a relatively higher threat during transport from relatively younger, more radioactive spent fuel. It is not clear whether this will be a common preference.
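The age-related differences discussed above follow from the standard radioactive decay law; the worked example below uses the well-established half-lives of cobalt-60 (about 5.3 years) and cesium-137 (about 30 years), which are assumptions of this illustration rather than figures stated in the studies. The fraction of a nuclide's activity remaining after time $t$ is

$$A(t) = A_0 \left(\frac{1}{2}\right)^{t/t_{1/2}},$$

so 15-year-old fuel retains about $(1/2)^{15/30} \approx 71$ percent of its cesium-137 activity but only $(1/2)^{15/5.3} \approx 14$ percent of its cobalt-60 activity, while 30-year-old fuel retains about 50 percent and 2 percent, respectively. This is why fuel age matters most for the shorter-lived, more intensely radioactive components.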
According to some analysts, DOE could enhance the security of spent fuel shipments by using trains dedicated to carrying only spent fuel. Such trains would typically consist of three to five rail cars, carrying one container of spent fuel per car. A truck shipment can carry 1 to 2 tons of spent fuel. In contrast, depending on the containers used, a 3-car train can carry from 50 to 65 tons of spent fuel and a 5-car train can carry from about 80 to 110 tons of spent fuel. Although dedicated trains could enhance the security and safety of spent fuel shipments, these benefits would have to be weighed against potential drawbacks. Because no rail line currently extends to Yucca Mountain, the benefits would also have to be weighed against the cost of constructing one. Advocates of dedicated trains told us that such trains offer two primary security and safety advantages. First, the use of dedicated trains would significantly reduce the exposure of spent fuel shipments to a terrorist attack by shortening the trip duration from its point of origin to the repository. A representative of the Association of American Railroads, which recommended that DOE use dedicated trains for the shipment of spent fuel, explained that a spent fuel shipment from the East Coast to Nevada would take about 3 to 4 days by dedicated rail, while the same trip by regular rail would take about 8 to 10 days. Specifically, spent fuel transported by regular rail would spend significant amounts of time in rail yards where trains are broken up and reconfigured. While in the rail yards, spent fuel containers could be stationary targets. Second, using dedicated trains would ensure that spent fuel was not shipped with flammable hazardous materials. If spent fuel were released from its containers in an accident or a terrorist attack, a fire fueled by flammable materials could spread radioactive material over a wide area. For example, NRC recently issued an analysis of a rail tunnel fire that occurred in Baltimore in July 2001 and involved more than 28,000 gallons of a flammable solvent. NRC estimated that temperatures as high as 1,800 degrees Fahrenheit were reached at certain locations in the tunnel during the course of the fire but found that temperatures averaged 900 degrees in other parts of the fire. NRC studied the potential effects of this fire on a loaded spent fuel transportation container and concluded that, when subjected to similar fire conditions, the container would not release radioactive material. According to transportation officials we spoke to, dedicated trains can also have safety and other benefits beyond sabotage prevention. For example, officials of the Union Pacific Railroad and the Association of American Railroads said that combining cars carrying fully loaded spent fuel containers on trains with those carrying other cargo raises operational and safety issues. Rail cars carrying spent fuel rail containers are extraordinarily heavy—such a car weighs about 470,000 pounds compared to about 200,000 pounds for a standard loaded rail car. This weight differential introduces difficulties in the physical dynamics of a train carrying spent fuel and other cargo, making derailments more likely. On the other hand, it is not clear that the advantages of dedicated trains outweigh the additional costs. In 1980, while considering amendments to its security regulations, NRC examined the case for requiring dedicated trains for rail shipments of spent fuel.
NRC noted the advantages of dedicated trains but also noted that dedicated trains are no more capable of avoiding high-population areas than are regular trains, that a regular train in a rail yard would be under surveillance by escorts and railroad police, and that the necessary physical protection measures can be as easily implemented on regular trains as on dedicated trains. For these and other considerations, NRC declined to require dedicated trains. Further, although DOE recognized the possible advantages of shipping spent nuclear fuel by dedicated trains, DOE also concluded in its final environmental impact statement that available information does not indicate a clear advantage for the use of either dedicated trains or general freight service. The events of September 11, 2001, elevated lingering public concerns about the security of spent fuel, and in particular the security and safety of large-scale shipping of spent fuel. NRC and DOE studies show a low likelihood of widespread harm to human health from terrorist attacks or severe accidents involving spent fuel. Nonetheless, DOE could potentially take a number of measures to further enhance the security and safety of the shipping campaign to Yucca Mountain. It is not clear whether the additional security and safety benefits such measures offer are worth the additional costs and effort—possibly including a renegotiation of contracts that DOE has established with the nation’s utilities—that they would entail. In addition, it is not clear which of these measures—some of which conflict with each other—would provide the greatest safety and security benefit. However, we believe they should be explored. To ensure that all reasonable options to further enhance the security and safety of spent fuel in storage at nuclear power plants and in transit are explored, we recommend that the Secretary of Energy assess the potential benefits and costs of (1) minimizing the total number of shipments of spent fuel by consolidating shipments where possible, (2) shipping spent fuel in an order that further minimizes risk, and (3) emphasizing the use of trains dedicated to hauling spent fuel. We provided DOE and NRC with drafts of this report for review and comment. DOE generally concurred with the facts of the report, noting that the information on transit was accurate and well balanced. DOE also concurred with our recommendations, with one exception. DOE noted that the Department of Transportation was expected to release a study later this year on the safety and security implications of transporting spent fuel by dedicated train. DOE stated that it preferred to wait for the outcome of the study before beginning its own review. DOE also provided technical comments, which we incorporated into the report. NRC also generally concurred with the facts of the report, noting that the information provides a reasonable characterization of the current understanding of risks associated with spent fuel storage. However, NRC stated that it does not consider the results of its most recently published studies on spent fuel in a pool and spent fuel in transit, as quoted in the report, to accurately reflect the consequences of a potential terrorist attack. Rather, NRC indicated that the studies started with overly conservative assumptions, resulting in “unrealistically conservative” results. NRC noted that it is currently conducting studies to assess the potential consequences of a terrorist attack that use more realistic assumptions. 
NRC also noted in its technical comments that preliminary results from these ongoing studies show that potential consequences may be far less severe than reported in the current publications. We revised our report to account for NRC’s preliminary findings from ongoing work involving the risk associated with spent fuel pools. As our report states, these findings indicate that risks from spent fuel pools may be substantially reduced from previous estimates. We used NRC’s February 2001 report, Technical Study of Spent Fuel Pool Accident Risk at Decommissioning Nuclear Power Plants, with the understanding that the report received a high level of scrutiny both within and outside NRC prior to its publication. As stated in the report, “Preliminary drafts of this study were issued for public comments and technical reviews in June 1999 and February 2000. Comments from interested stakeholders, the Advisory Committee on Reactor Safeguards, and other technical reviewers have been taken into account in preparing this study. A broad quality review was also carried out at the Idaho National Engineering and Environment Laboratory, and a panel of human reliability analysis experts evaluated the report’s assumptions, methods, and modeling.” The report also states that, based on the comments received, “staff did further analyses and also added sensitivity studies on evacuation timing to assess the risk significance of relaxed offsite emergency preparedness requirements during decommissioning.” Given this level of review, we believe it to be appropriate to report the results of this study. NRC also took issue with our use of its report, Reexamination of Spent Fuel Shipment Risk Estimates. NRC explained that the analyses in this document are similarly overly conservative. This March 2000 study was conducted by Sandia National Laboratory at the request of NRC to reexamine the conclusions reached in previous studies regarding the risks of spent fuel shipments. As with its February 2001 report, this report also indicated a high level of review prior to publication. Specifically, the report mentions a number of individuals who provided comments to the report, including staff at Sandia National Laboratory, Lawrence Livermore National Laboratory, and “a number of technical experts at the NRC.” Given the intent of this study and its level of review, we believe it to also be appropriate to report the results of this study. We performed our review at DOE and NRC headquarters in Washington, D.C., at NRC’s Region III office near Chicago, Illinois, and at DOE’s Yucca Mountain Project office in Las Vegas, Nevada. We visited several sites where spent fuel is stored, including operating nuclear power plants, a decommissioned nuclear power plant, and independent spent fuel storage sites. We conducted our review from April 2002 to June 2003 in accordance with generally accepted government auditing standards. To determine the potential health effects of a terrorist attack or a severe accident involving commercial spent nuclear fuel, we examined a variety of federally sponsored studies, primarily conducted or sponsored by DOE and NRC. We examined critiques of these studies prepared by a variety of groups and individuals. We also spoke to many of the authors of these federal studies, authors of critiques of these studies, nuclear energy representatives, and other individuals representing a variety of backgrounds, including academia and special interest groups. 
To identify options for DOE to enhance the security of spent fuel as it develops its plans to ship the fuel to Yucca Mountain, we reviewed documents analyzing DOE's plans and preferred alternatives, including the environmental impact statement and many of its supporting documents. We also interviewed DOE, NRC, and Department of Transportation officials responsible for developing and coordinating safe shipments of spent nuclear fuel. We also spoke to state and local government officials in a number of states, including Nevada; nuclear energy representatives; and a variety of groups and individuals representing a spectrum of viewpoints on the shipment of spent nuclear fuel. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to other interested parties and make copies available to others who request them. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov/. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix V. As the regulating agency responsible for spent fuel, the Nuclear Regulatory Commission (NRC) must adequately protect the public health and safety against accidents or acts of sabotage. To provide this assurance, NRC uses a “defense-in-depth” philosophy. Consistent with this philosophy, NRC designs its safety and security requirements to ensure that public safety and health are not wholly dependent on any single element of the design, construction, maintenance, or operation of a nuclear facility. More specifically, NRC designs multiple or redundant measures to mitigate areas of known risk or to increase confidence in areas of uncertainty. Listed below are some of the primary requirements NRC has recognized as protecting spent fuel while in transit, in wet storage, and in dry storage. NRC requires that transporters of spent fuel (1) contain the fuel in NRC-certified shipping containers that must meet stringent durability performance requirements and (2) comply with requirements designed to impede an act of sabotage on the fuel. NRC regulations for spent fuel shipping containers dictate that the containers prevent releases of significant amounts of radiation under both normal operating conditions and in hypothetical accident scenarios. The containers include shielding to ensure that persons near a container are not exposed to significant amounts of radiation. In addition, the containers must remain intact after a series of simulated accident conditions, including an impact test, in which containers are dropped from 30 feet onto a flat, unyielding surface; a puncture test, in which containers are dropped from 40 inches onto a 6-inch diameter steel bar at least 8 inches long; a fire test, in which containers are engulfed in a 1,475-degree Fahrenheit fire for 30 minutes; and an immersion test, in which containers are submerged in 3 feet of water for 8 hours. The containers must survive each of these tests in succession, without significant levels of surface radiation or release of spent fuel. Containers must also be shown to survive water pressure equivalent to immersion under nearly 670 feet of water for 1 hour. Because of these requirements and the dimensions of the spent fuel assemblies they contain, spent fuel shipping containers are massive and robust.
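To put the deep-immersion requirement in physical terms, hydrostatic pressure grows linearly with depth. The following is a standard physics calculation, not a figure taken from NRC's regulations: at a depth of about 670 feet (roughly 204 meters),

$$p = \rho g h \approx 1000\ \mathrm{kg/m^3} \times 9.81\ \mathrm{m/s^2} \times 204\ \mathrm{m} \approx 2.0\ \mathrm{MPa},$$

or roughly 290 pounds per square inch above atmospheric pressure, about 20 atmospheres pressing on the container for the duration of the test.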
A typical train container is about 25 feet long and 11 feet in diameter and weighs about 100 tons empty and about 120 tons fully loaded—thus the container can account for over 80 percent of the total weight of a shipment. Though truck containers have significantly less capacity than rail containers, both types have similar basic designs. As figure 2 indicates, they are generally composed of several layers of shielding material, totaling about 5 to 15 inches in thickness, including a radiation barrier consisting of lead or depleted uranium. When the container is in transit, each end is covered with material designed to absorb much of the force of an impact. Figures 3 and 4 show a spent fuel rail container and a truck container, respectively. Although the shipping container is the most important component in preventing release and dispersal of spent fuel in transit, NRC also requires transporters of the spent fuel to implement measures designed to further protect spent fuel shipments from sabotage. For example, transporters of spent fuel must ensure that shipments are under surveillance, that arrangements have been made with local law enforcement agencies for their response in the event of an emergency, and that rail and highway routes have been approved by NRC. NRC had also required that armed escorts be either aboard the shipping vehicle or in a following vehicle in areas of high population; NRC has strengthened the security required of shipments since the September 11, 2001, terrorist attacks. Spent fuel pool designs must meet specific performance criteria before NRC can issue a license for construction or operation. The requirements focus on ensuring that the safety features of the pool survive certain natural phenomena or accidents to ensure that, among other things, the pool will retain water and keep the stored fuel sufficiently cool. Spent fuel in wet storage is also protected by the physical security measures in place at the storage site. As part of the licensing process prior to construction and operation, utilities must submit reports that analyze the likelihood of certain natural phenomena, such as earthquakes, hurricanes, floods, and tidal waves. Using probability analyses, historical information, and current information on seismology, geology, meteorology, and hydrology, the utilities must determine the risks of certain types of natural phenomena. Then the utilities must show that the proposed pool designs would survive the most severe natural phenomena or combinations of less severe phenomena expected for that particular area. The utilities must also perform the same exercise for the likelihood and severity of certain accidents, including airplane crashes. For example, pools constructed near airports may have to be designed to withstand certain types of accidental airplane crashes. Consequently, although the specific designs of wet storage pools vary from site to site, they are massive, robust structures. Pools are typically 30 to 60 feet long, 20 to 40 feet wide, and 40 feet deep, large enough to hold nearly three semi-truck tractor-trailers parked side by side and stacked three deep. The pool is contained by a structure consisting of a 1/8- to 1/4-inch stainless steel liner and 4- to 6-foot-thick walls of steel-reinforced concrete. Generally, the pools are contained in other buildings. The roofs of some of these buildings may be made from industrial-type corrugated steel.
The assemblies, stored vertically in racks, must be immersed at least 20 feet below the surface of the water in order to keep the fuel cool and to provide a sufficient radiation barrier. See figure 5 for a photograph of a wet storage pool. Spent fuel pools are also protected by the physical security measures in place at the facilities where they are located. About 95 percent of the spent fuel inventory is stored in pools, most of which are located at operating nuclear reactors. The perimeters of these reactor sites are secured by fences topped with barbed wire, vehicle barriers, and intrusion detection systems—including perimeter cameras and motion detection technology—that are monitored 24 hours per day. Access to the building containing the wet storage pools is impeded by locked steel doors capable of surviving armed assault, and by security checkpoints where a person’s identity must be verified and where security searches take place. Finally, these facilities are manned by a force of armed guards. In addition, nuclear power plants are required to coordinate an emergency response to the site in the event of a terrorist or sabotage event. This coordination requires contingency plans and joint exercises with local law enforcement agencies to ensure an adequate and timely response to an event. Since the terrorist attacks of September 11, 2001, NRC has imposed additional requirements, including more armed guards and vehicle barriers. NRC requires that spent fuel in dry storage be stored in containers that protect workers and other nearby persons from significant amounts of radiation, and that can survive operational accidents at the storage site, as well as extreme meteorological and other natural events. In addition, fuel in dry storage is protected by physical security measures in place at the storage site. Among other things, dry storage containers must be capable of surviving a drop test, in which containers are dropped from the height to which they would be lifted during operations; a tip-over test, in which containers are tested against seismic, weather, and other forces or accidents that could knock over 100- to 150-ton containers; an explosion test, in which containers are tested against nearby explosions and the resulting pressures created by the blasts; a tornado and tornado missile test, in which containers are tested against high winds and wind-borne debris; a seismic test, in which containers are tested against the seismic motions that might be expected to occur in their geologic area (certification requirements may differ from region to region); a flood test, in which containers are analyzed for floods; and a fire test, in which containers are engulfed at temperatures up to 1,475 degrees Fahrenheit for 30 minutes. Manufacturers must provide NRC with information on how well a container design meets these performance requirements. NRC does not require physical tests of the containers, but it accepts information derived from scaled physical tests and computer modeling. As with shipping containers, to meet these performance requirements, certified dry storage containers are massive and robust. A typical dry storage container consists of a 1-inch thick steel container housing the spent fuel. At some facilities, the containers are placed horizontally in garage-sized bunkers constructed of concrete. The concrete protects nearby workers and the public from radiation. At other facilities, the container is encased in an outer cask. The outer cask typically is constructed of steel-reinforced concrete, 18 or more inches thick.
Like the concrete bunkers, the outer cask shields workers and the public from radiation. The free-standing, upright units, stored on concrete pads, can weigh from 100 to 150 tons each, with the container itself accounting for nearly 90 percent of that weight. A dry storage container can store between 7 and 68 assemblies, depending on the size of the container. See figure 6 for an illustration of a dry storage container. In addition to the physical performance requirements of dry storage containers, the containers are protected by the physical security measures in place at the facilities where they are stored. Dry storage containers at operating nuclear power plants generally benefit from the physical security measures already in place at the sites. The large majority of spent fuel in dry storage is located at operating nuclear power plants. For dry storage containers situated away from a reactor site, NRC requires vehicle barriers, fences, intrusion detection systems, and guards. The guards are also able to contact local law enforcement agencies for assistance, if required. NRC requires that dry storage facilities coordinate response plans with local law enforcement agencies to ensure assistance can be readily provided, if needed. In the wake of the September 11, 2001, terrorist attacks, NRC issued orders to dry storage facility licensees that required enhanced security measures, including additional protections against a vehicle bomb threat. The human health implications of sabotage events and accidents involving spent nuclear fuel shipments described in the report are based on computer-based engineering and other analytic models that rely, in part, on physical experiments. In addition, these studies are the most recent in a series of studies that date back to the 1970s. According to NRC and DOE, better data and improved analytic tools over the years have significantly enhanced the agencies’ confidence in the results of these studies. This appendix provides an overview of the methodology of the most recent studies, as well as the approach and results of previous studies. Methodology of Most Recent Studies. The 1999 Sandia National Laboratory study was undertaken at the request of DOE for use in its preparation of an environmental impact statement for the Yucca Mountain repository. The study relied on computer models to estimate how the two selected armor-piercing missiles would damage shipping containers. Although no physical tests or experiments were conducted in this study, the study used computer models that were validated using the results of previous studies that included experimental data. Two of the most important factors considered in designing the study were the types of shipping containers and the weapons selected for analysis. For the shipping containers, the study used truck and rail containers considered representative of those that would be used to transport the spent fuel likely to be shipped in the early decades of the 21st century. NRC’s performance standard for these containers requires that they prevent release of significant amounts of radiation under normal operating conditions and in accident scenarios. For example, radiation levels at the exterior of the container must remain below specified minimal levels after a series of tests to simulate accident conditions, including an impact test, in which the container is dropped from 30 feet onto a flat, unyielding surface.
In selecting the weapons used in the analysis, the authors researched the latest information available and chose the two weapons that they believed were most capable of penetrating spent fuel shipping containers and that could also be available to terrorists. To ensure that the analysis would represent the upper limit of possible damage, the authors made conservative assumptions, including the following: No security measures were in place, such as armed guards who travel with spent fuel shipments and who are required to have the capability to contact local law enforcement personnel in the event of an attack. The weapons would be employed at a distance from the containers that would result in maximum damage, and each weapon would strike the container dead center; if the missile were to strike higher or lower, it could be deflected by the cylindrical shape of most containers, and penetration of the container would be lessened or would not occur at all. Previous Studies. The 1999 Sandia study is the most recent in a series of federally sponsored studies dating back to the 1970s that have examined the ability of armor-piercing weapons to penetrate spent fuel containers. A draft version of a Sandia study from 1978, for example, concluded that a successful sabotage attack on a spent fuel container would not cause prompt fatalities but could cause several hundred latent cancer fatalities in a densely populated urban area. The final version of this study reduced the total latent cancer fatalities to fewer than 100, based on a re-evaluation of the quantity of radioactive material released. Based largely on the initial draft of this study, NRC established its regulations for security of spent fuel in transit. Because this study was based on a conservative set of analytical assumptions instead of on experimental data, there was a high degree of uncertainty regarding the quantities of radioactive material released and the human health consequences. Consequently, in 1983, DOE commissioned Sandia National Laboratory to conduct physical tests, in which armor-penetrating missiles were fired at shipping containers containing mock spent fuel assemblies. The study found that, under the worst-case scenario, about 24 ten-thousandths of 1 percent (0.0024 percent) of the total solid fuel inventory in the container could be released as respirable particles. To estimate the human health impact, the study included conservative assumptions, including that the attack occurred in Manhattan, in New York City, on a business day; that the fuel had been removed from the reactor for only 150 days (and thus was comparatively more radiologically dangerous); and that no evacuation took place to limit human exposure. Based on these results and assumptions, the study predicted no early deaths and between two and seven long-term latent cancer fatalities. Methodology of Most Recent Studies. According to NRC, the 2000 Sandia National Laboratory study was conducted to address three developments—the likelihood that spent fuel shipments would be increasing as a result of the progress on the Yucca Mountain repository, the use of containers and transportation routes that differed from those considered in previous studies, and improvements in risk assessment and computer modeling of spent fuel containers. The overall objective of the study was to determine the degree of risk involved in shipping spent fuel by truck and rail.
The study examined the effects of severe collisions and fires on four types of shipping containers—a lead-lined steel truck container, a depleted uranium-lined steel truck container, a lead-lined steel rail container, and a monolithic steel container. The study relied on computer analysis to estimate the probability of such events and the quantity of radioactive material that might be released. The analysis developed 19 representative truck accidents and 21 representative rail accidents. The study simulated the effects of slamming each of the truck and rail containers into a rigid surface from a variety of angles at 30, 60, 90, and 120 miles per hour. None of the cases modeled showed that the body of the container would fail. Moreover, the modeling showed that the seals around the lid at each end of the truck container would not allow a release at 30, 60, and 90 miles per hour, although they might leak at 120 miles per hour. The results from modeling the two different rail containers, however, showed that the seals might leak for some collisions at a speed of 60 miles per hour, depending on the angle of impact. DOE’s study that predicted the health effects of these releases used a computer code. The code calculated the dispersion of radioactive particles and the resultant dose to the population. To estimate latent cancer deaths, DOE made a number of key assumptions. DOE’s analysis assumed that the accident occurred in the most populous center of an urban area and that the population distribution from the accident site in the urban center to the outer fringes was similar to the average populations—projected to the year 2035—of the 20 largest U.S. metropolitan areas, plus Las Vegas, Nevada. Stable weather conditions—with comparatively slow wind speeds—were assumed to prevail at the time of the accident. Finally, the population was assumed to be exposed to remnants of the release for 1 year after the accident, with no evacuation or cleanup. Previous Studies. The 2000 Sandia study reexamined the risks associated with the transport of spent fuel by truck and rail and compared the results to two previous studies—one conducted by NRC in 1977 and one performed by DOE’s Lawrence Livermore National Laboratory in 1987. According to NRC, the 2000 Sandia study extended the methods used in the 1987 report for container analysis and used improved risk assessment methods. The 2000 Sandia study found that previous NRC-commissioned studies overestimated the risks of human exposure due to transportation accidents. According to NRC and Sandia officials, they have become more confident in their results as analytical techniques and data have improved. In 1977, NRC examined the risks of shipping a variety of radioactive materials, including spent fuel. At that time, NRC determined that the risks of accidental releases involved in shipping spent fuel and other radioactive materials were quite small—specifically, the study estimated latent cancer deaths to be about 3 in 200 years of shipping spent fuel at estimated rates for 1985. The study concluded that the existing NRC requirements were adequate to protect public health. Partly because this study was based on conservative engineering judgments and did not include physical tests of shipping containers in severe accidents, NRC subsequently commissioned a study published in 1987 that found that the risks of spent fuel releases under transportation accident conditions were much smaller.
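The consequence codes used in these studies are far more detailed than anything reproduced here, but they share the same basic chain of reasoning: atmospheric dispersion, then inhalation dose, then latent cancer risk. The sketch below is purely illustrative of that chain; every numeric input is a hypothetical placeholder of ours, not a value from any of the studies:

```python
import math

def centerline_concentration(q_bq_s, u_m_s, sigma_y_m, sigma_z_m):
    # Gaussian-plume ground-level centerline concentration for a ground-level
    # release, with ground reflection: C = Q / (pi * sigma_y * sigma_z * u).
    return q_bq_s / (math.pi * sigma_y_m * sigma_z_m * u_m_s)

Q = 1.0e9             # Bq/s source term (placeholder)
U = 2.0               # m/s wind speed (slow wind, per the stable-weather assumption)
SIGMA_Y = 35.0        # m lateral dispersion at the receptor distance (placeholder)
SIGMA_Z = 15.0        # m vertical dispersion at the receptor distance (placeholder)

BREATHING = 3.3e-4    # m^3/s adult breathing rate
DOSE_PER_BQ = 5.0e-8  # Sv per Bq inhaled (placeholder dose coefficient)
EXPOSURE_S = 3600.0   # plume passage time, 1 hour (placeholder)
PEOPLE = 1.0e5        # exposed population at this concentration (placeholder)

conc = centerline_concentration(Q, U, SIGMA_Y, SIGMA_Z)  # Bq/m^3
dose = conc * BREATHING * EXPOSURE_S * DOSE_PER_BQ       # Sv per person
collective = dose * PEOPLE                               # person-Sv

# Linear no-threshold convention: ~0.05 latent cancer fatalities per person-Sv.
# The printed numbers depend entirely on the placeholder inputs above.
print(f"dose {dose:.2e} Sv/person; expected latent cancer fatalities {collective * 0.05:.0f}")
```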
Performed by Lawrence Livermore National Laboratory for NRC, this study included a more sophisticated analysis than the 1977 study, using historical data on past transportation accidents to determine the likelihood of specific accident scenarios. The study then used a computer-based analysis of accident scenarios involving collisions and fire temperatures exceeding NRC standards. The 1987 study found that in 99.4 percent of all rail and truck accidents, the container would experience no significant damage, and no radioactive material would be released. In addition to the individual named above, Doreen Feldman, Michael Hartnett, Gary Jones, Cynthia Norris, Robert Sanchez, Amy Stewart, Barbara Timmerman, and Dwayne Weigel made key contributions to this report.
Spent nuclear fuel, the used fuel periodically removed from nuclear power reactors, is one of the most hazardous materials made by man. Nuclear power companies currently store 50,000 tons of spent fuel at 72 sites in 33 states. That amount will increase through 2010, when the Department of Energy (DOE) expects to open a permanent repository for this fuel at Yucca Mountain, Nevada. Concerns have been raised since September 11, 2001, that terrorists might target spent fuel. GAO was asked to (1) review federally sponsored studies that assessed the potential health effects of a terrorist attack or a severe accident on spent fuel, either in transit or in storage, and (2) identify options for DOE to further enhance the security of spent fuel during shipping to Yucca Mountain. The likelihood of widespread harm from a terrorist attack or a severe accident involving commercial spent nuclear fuel is low, according to studies conducted by DOE and NRC. Largely because spent fuel is hard to disperse and is stored in protective containers, these studies found that most terrorist or accident scenarios would cause little or no release of spent fuel, with little harm to human health. Some assessments found that widespread harm is possible under certain severe but extremely unlikely conditions involving spent fuel stored in storage pools. As part of its ongoing research program and to respond to increased security concerns, NRC has ongoing and planned studies of the safety and security of spent fuel, including the potential effects of more extreme attack scenarios, such as deliberate aircraft crashes. While NRC and DOE have found that spent fuel may be relatively safe and secure, DOE could potentially enhance the security of this fuel through options such as minimizing the number of shipments and picking up fuel in an order that would reduce risk, such as moving older, less dangerous fuel first. These options could reduce the risk during transport and at some locations where the fuel is currently stored. However, contractual agreements between DOE and owners of spent fuel may limit DOE's ability to choose among these options. In addition, it is not clear that the benefits of these measures would justify the potential costs, including a possible renegotiation of the contracts between DOE and the spent fuel owners.
The DI program, created in 1954, provides monthly cash benefits to workers who have become severely disabled and to their dependents and survivors. These benefits are financed through payroll taxes paid by workers and their employers and by the self-employed. Proof of disability can involve complex technical issues, and section 206(a) of the Social Security Act permits claimants to appoint an attorney to represent them at proceedings before SSA, at any level of administrative review. The disability claims process is complex, multilayered, and lengthy. The following scenario portrays the process for DI claimants who are typically represented by an attorney before SSA—i.e., those cases where the claim is ultimately appealed to SSA’s Office of Hearings and Appeals (OHA). Initially, the claimant would have filed a claim for DI benefits with a local SSA field office. This office would have then forwarded the claim to a state agency to examine the claimant’s evidence for medical disability. The state agency would then have denied the claim in an initial review and denied it again after reconsidering the claim. Once SSA notified the claimant of the denial of benefits, the claimant would have then appealed to OHA. At OHA, the claimant would have had a hearing before an administrative law judge who would have reversed the decision of the state agency, finding the claimant eligible for DI benefits. Generally, the claimant appoints an attorney for the OHA-level appeal. The fees that attorneys representing DI applicants can charge are limited by law and must be approved by SSA. Since 1967, SSA has administered fee payments to attorneys representing DI claimants. To be compensated, attorneys must file with SSA either a fee agreement—a formal contract signed by the applicant and the attorney setting the fee as a percentage of the applicant’s past-due benefits—or a fee petition that lists the specific costs associated with the case. Of the two, the fee agreement is the much simpler arrangement; generally, it specifies fees limited to 25 percent of the claimant’s past-due benefits, up to a maximum of $4,000. In contrast, fee petitions require attorneys to itemize expenses and hourly charges, and SSA must determine a reasonable fee to compensate the attorneys. Assuming either a fee agreement or a fee petition is approved, SSA withholds the amount of the fee from the beneficiary’s past-due benefits and pays the attorney directly. Historically, attorneys representing claimants before SSA submitted fee petitions for their services. As the percentage of claimants represented by attorneys in DI hearings increased from 19 percent in fiscal year 1967 to 66 percent in fiscal year 1987, fee petitions became a significant administrative burden for SSA. To alleviate some of this burden, the Congress streamlined the fee approval process in 1990 to allow attorneys to use the much simpler fee agreement in cases where SSA finds the claimant eligible for past-due benefits. Since the introduction of fee agreements in 1991, their use has become nearly universal—in 2000, 88 percent of the attorney fees were based on fee agreements. However, even with the prevalence of the simpler fee agreement, SSA continued to have significant delays in paying attorney fees, and attorneys increasingly turned to court action to obtain their fees. In 1995, SSA proposed to stop processing attorney fees for DI claimants, and estimated that, if this were done, it would save $20 million in administrative costs.
This cost estimate was the basis for a 6.3 percent assessment on attorneys for use of SSA’s processing services enacted in the 1999 Ticket to Work Act, a charge deducted directly from the attorney’s fee. Under this law, SSA is to determine (for calendar years after 2000) a percentage rate that allows “full recovery of the costs of determining and certifying fees to attorneys for the past-due benefits of the claim,” but is not to exceed 6.3 percent of the total fee. The proceeds from the collection of the user fee are returned to the Federal Old-Age and Survivors Insurance Trust Fund and the Federal Disability Insurance Trust Fund. SSA’s estimate indicated that its administrative costs for attorney fee services in 2000 were $54 million for the two major components of these services: $13.8 million for approval of fee arrangements by OHA and $40.2 million for payment of fees by SSA’s processing centers. Neither OHA nor the processing centers routinely collect information that specifically identifies the costs associated with these services. To develop its estimate, SSA relied on various data it adapted from its regular operations, as well as a survey of its regional offices to determine the time spent on attorney fees in OHA. Our review indicated flaws in these data and suggested that the original estimate should be adjusted downward. However, without adequate data, we were unable to make exact corrections to the estimate. Instead, we made rough assumptions with the best available data, and we limited our costs to those related to attorney fee processing but clearly unrelated to normal case processing. Using these assumptions—which may result in understating SSA’s actual costs—we approximated the lower bound of SSA’s administrative costs. From this analysis, we set the lower bound of costs for attorney fee services at $35.4 million in 2000. SSA’s cost estimate indicated that it cost $54 million to provide attorney fee services in 2000. This estimate includes the two major components of fee services: OHA fee approvals and fee payment in SSA payment processing centers. Within SSA, its field offices, OHA, and the processing centers all have important roles in managing a disability claim. However, for the most part, OHA and the processing centers have the central functions of fee processing. OHA must review and approve fee arrangements, while the processing centers pay the attorney fee once the amount of past-due benefits is determined. For OHA fee approval services, SSA estimated costs of $13 million for 1999—which we restated in terms of 2000 costs as $13.8 million. Within OHA, only a small portion of staff time is spent reviewing fee arrangements. For fee agreements, SSA estimated that its staff spent about 1 1/2 hours handling each agreement during an OHA appeal that may take about 1 year to complete. However, the small amount of time spent reviewing each fee agreement becomes significant when all such review time is totaled. For example, OHA processed about 179,000 fee agreements in 1999—if each took 1 1/2 hours to process, the total time would be the equivalent of 129 work years and result in millions of dollars of costs. While OHA did not have an information system that routinely collected data about the time spent on each fee arrangement, it used operational data to determine the general types of work considered related to these costs—for example, approving fee agreements, reviewing administrative disputes, etc.
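The 129 work-year figure cited above follows directly from the numbers given; a quick check, assuming the standard federal work year of roughly 2,080 hours (the hours-per-work-year figure is our assumption, not SSA's):

```python
# Verify the OHA fee-agreement workload figure cited above.
agreements = 179_000        # fee agreements processed by OHA in 1999
hours_each = 1.5            # estimated staff time per agreement
HOURS_PER_WORK_YEAR = 2080  # assumption: standard federal work year

print(f"{agreements * hours_each / HOURS_PER_WORK_YEAR:.0f} work years")  # ~129
```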
For each category of work, OHA developed a series of tasks necessary to perform the work. Then, to obtain information on how long it took to complete each task, OHA surveyed its regional offices. Most of SSA’s administrative costs, however, were for paying the attorney fees—in 2000, SSA estimated that this service by its processing centers cost $40.2 million, or three-quarters of the total estimate of $54 million. For the most part, this cost relates to manually handling the attorney payments. Once a claimant’s past-due benefits are determined, a clerk manually processes the payment—filling out a form that shows what payment is authorized, calculating the user fee, and giving the form to the data entry clerks for further processing. As with the OHA fee approvals, even though the time on each task may be small, it becomes significant when all such time is summed up. To develop its estimates for payment processing, SSA relied on the cost allocation system it uses in its normal operations. SSA generally uses this system to account for the expenses of its various types of work so that the proper trust fund account can be charged; the system allocates SSA’s administrative costs to one of the various trust funds SSA administers. Although the system was not developed to analyze the costs related to fee payments, SSA has adapted it to collect data on attorney fee work. Even so, when SSA used the data from this system to make its estimate, it had to first remove costs unrelated to processing attorney fees for DI claims. Our review of SSA’s estimate indicated that it is likely too high. We identified six problems with the SSA estimate: (1) the estimate for the costs of OHA fee approvals included the cost of handling cases from the Supplemental Security Income program (SSI), cases unrelated to DI claims; (2) the OHA estimate also included excessive staff time for processing the fee agreements; (3) in calculating the estimate of the costs for payment processing, SSA used an erroneous cost allocation category that overstated the costs of the services; (4) the estimate for the payment processing did not adjust for one-time use of premium overtime pay used to reduce processing backlogs in February and March 2000; (5) the estimate for the payment processing included costs not clearly associated with fee payment; and (6) the estimate for the payment processing used an average of both higher- and lower-salary costs to calculate staff costs, which did not accurately reflect that the staff who routinely work on most payment processing are in the lower salary group. However, we were unable to make precise corrections for these adjustments because of insufficient SSA data and unclear definitions of what should be counted as a relevant cost. For example, there were no data available to calculate exactly how much overtime had been used to process the payment backlogs. As another example, while SSA officials agreed that the majority of staff that routinely work on payment processing tasks had lower salaries than the average calculated, they were unable to provide us with more specific data on staff costs. Furthermore, it was not always clear what costs should be included in the estimate—for instance, we eliminated certain costs related to handling attorney inquiries because we believe that they included instances of normal case processing unrelated to the steps needed to process attorney payments. SSA officials, on the other hand, argued that these same costs should be included because they were handling matters dealing with attorneys.
Although we were unable to precisely correct for each of these adjustments, we approximated a “lower bound” of SSA’s administrative costs. To do so, we made assumptions with the best available data, and we limited our costs to those related to attorney fee processing but clearly unrelated to normal case processing. Using these assumptions—which may somewhat understate SSA’s actual costs—our analysis indicates that administrative costs could be as low as $35.4 million. We discussed each of these adjustments with SSA officials. (See the appendix for further details on our proposed cost adjustments.) We compared our adjusted estimate of $35.4 million with SSA’s original estimate of $54 million. In 2000, SSA processed $512 million in attorney fee payments. Comparing the original estimate to these payments, SSA’s administrative costs were 10.5 percent of the total payments. However, using the adjusted estimate, SSA’s administrative costs were 6.9 percent of the attorney payments. Table 1 presents both the original and adjusted estimates. Although most fees were processed in far less time in 2000 than in 1999, over 20 percent of the fees in both years still took longer than 6 months from the date of the OHA decision to the date when the attorneys were paid. While the major reason for the improved performance in 2000 was the elimination of the 15-day protest period by the Ticket to Work Act, the underlying reasons for the longest periods of delay remained largely unchanged. These included factors that are often outside of SSA’s control, such as the need for additional documentation to complete the calculation of the claimant’s benefits—for example, verification of state workers’ compensation payments. In a recent report, we documented some of the difficulties SSA encounters in obtaining workers’ compensation information. According to SSA data for the 7-month period from June through December, payments in 2000 were dramatically faster than for the same period in 1999. In 2000, 12 percent of the payments were processed in 30 days or less from the date of the OHA decision, and 50 percent of the payments were processed in 60 days or less. In contrast, only 1 percent of the 1999 payments were processed in 30 days or less, and only 4 percent of the 1999 payments were processed in 60 days or less. However, in 2000, 22 percent of the payments took over 180 days to process, about the same as in 1999. While SSA officials attributed most of the improved processing time in 2000 to elimination of the 15-day protest period (with an added 15-day mailing period), SSA also changed other procedures that improved processing time. For example, SSA stopped sending case files that needed additional documentation out of the processing centers to storage centers; instead, the case files stayed in bins near where staff processed the cases. Processing center staff also contacted OHA staff to better track information on attorney fee approvals. However, many of the reasons that it takes extra time to process an attorney’s payment remained the same—for example, the centers still need to track down state workers’ compensation information, they still need to have proof of age to process a claimant’s benefits, and they still need to wait for all claims related to the principal beneficiary to be resolved to determine what to pay the attorney. Recently, SSA conducted a 1-day sample of cases with attorney fees that looked at factors, such as those listed above, that complicate the payment process.
Of the 669 attorney fees processed on August 10, 2000, 48 percent had some factor that complicated the processing of the case. Furthermore, of the cases with complicating factors, the most common characteristics were the need to verify information on workers’ compensation (29 percent) and deferred related claims (18 percent). The bulk of SSA’s administrative costs relate to a manual payment process that, if improved, could cut staff time and reduce processing time. Under the current process, information necessary to make a payment to an attorney is extracted from the main case information system and handled manually to prepare for payment. However, the manager of SSA’s largest processing center indicated that systems support could save one-third of the staff time currently spent on processing this type of payment. Furthermore, Office of Systems officials told us that automating the payment process could save from 3 to 5 days in processing time. Nonetheless, proposals to automate this process have been repeatedly postponed. SSA has, however, recently developed a draft plan to automate the attorney fee payment process, but according to SSA officials, the details related to this plan have not been fully developed. In general, DI cases are processed using an information system known as the Modernized Claims System (MCS). When a claimant first files for DI, a staff person in one of SSA’s field offices enters the claimant’s case history into MCS. After a favorable decision is issued by OHA, the hard copy of the case file—including information about the attorney and his or her fee—is mailed to the processing center. When the case file is received at the processing center, staff update the case history previously entered into MCS and complete the information needed—such as determining any workers’ compensation offset—for processing the claim. Once the information is completed, MCS automatically calculates the claimant’s past-due benefits, withholding 25 percent or $4,000 (whichever is less). However, once MCS determines the amount of the past-due benefits owed the claimant, a series of manual steps is performed to handle the attorney’s fee payment. The case file is sent to a GS-7 or GS-9 technician (a “benefit authorizer”) who fills out a form that transfers the attorney information to a keypunch clerk. The keypunch clerk then inputs the data into a separate stand-alone information system. In addition to the problems cited above, there are other inefficiencies with the payment process. For instance, there are no controls to ensure that the amount withheld from the beneficiary is properly paid out to the attorney, nor are there controls to prevent duplicate payments to an attorney. Furthermore, there is no database (or “master file”) of attorney names, addresses, and payments. Without this, any time an attorney reports a change of address, for example, the new address must be reported for every claimant the attorney represents. In addition, there is no electronic link between the OHA fee approval staff and the MCS processing system. As a result, OHA staff mail information on attorney representation and fee arrangements to a processing center, where staff manually enter the attorney data into the MCS system. Developing an information system to automate the process could reduce the staff time associated with processing these payments.
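The withholding rule that MCS applies, described above, together with the user fee deducted under the Ticket to Work Act, reduces to two lines of arithmetic. A minimal sketch, with function names of our own choosing:

```python
# Fee withholding under a fee agreement: 25 percent of past-due benefits,
# capped at $4,000; the 6.3 percent user fee is deducted from the fee itself.
def withheld_fee(past_due_benefits: float) -> float:
    return min(0.25 * past_due_benefits, 4000.0)

def paid_to_attorney(past_due_benefits: float) -> float:
    fee = withheld_fee(past_due_benefits)
    return fee - 0.063 * fee

print(paid_to_attorney(10_000))  # fee of $2,500 less a $157.50 user fee -> 2342.5
```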
According to officials in the Office of Systems, automation could eliminate the need for many staff who are now required to transfer information between the MCS and the payment systems to process the attorney fees. If, for example, there were no need to gather further documentation, the payment to the attorney would be issued automatically at the same time as the payment to the beneficiary. The officials also noted that automation might save from 3 to 5 days in processing time. In a memorandum dated January 24, 2000, the Associate Commissioner for Central Operations—the head of the largest DI processing center—recommended that SSA automate this process, which he termed “archaic.” He noted that, with systems support, his center would save 34 work years of staff time, one-third of the total staff time the center spent on attorney fee processing. He also pointed out that an attorney master file would “eliminate duplicate work with needless reviews and greatly improve the accuracy of payments.” In 1997, an SSA study group recommended that SSA improve its automation of the current attorney fee process. Despite internal recommendations for a new system, SSA has repeatedly postponed its plans, redirecting funds to other higher-priority projects. Officials from SSA’s Office of Systems reported that this systems development effort has officially been part of SSA’s systems plans since at least 1998. SSA currently has a draft plan to develop a system that would automate the process so that payment processing would be linked to the MCS. While the plan calls for linking the payment records to the claimants’ records to verify whether the payment withheld was also sent to the attorney, it does not include any provision for an attorney master file or an electronic connection with the OHA fee approval staff. Moreover, according to Office of Systems staff, there is not yet any definite schedule to complete these plans, nor are any budget funds committed to the project. The Ticket to Work Act also directed that we examine a number of potential changes to the current fee structure, including (1) linking the user fee to SSA’s timeliness of payment, (2) making the user fee a fixed charge rather than a percentage of the fee, (3) raising the caps on attorney fees, and (4) extending the fee payment services to the SSI program. The act also directed us to consider whether the recent imposition of the user fee affected attorney representation of DI claimants. Additionally, we looked at the possibility of having SSA issue checks made payable jointly to the claimant and the attorney for the total amount of the past-due benefits. While the information necessary to fully evaluate these issues is not available, our review raised concerns about some of these matters. Though it is not clear that all of the delay in the longest cases is due to legitimate case processing, any decision to link the payment of the user fees to SSA’s timeliness would need to account for unavoidable additional processing steps. The SSA 1-day study conducted in August 2000—which cannot be extrapolated to the entire case population because it is not statistically valid for all cases—looked at the length of payment processing time. The study compared the processing times to the presence of factors that complicate case handling. About one-quarter (172) of the cases in the sample took longer than 120 days from the date of the OHA decision to process.
Of these cases, over one-half (52 percent) had at least one factor that required additional processing time. Forty-one percent (71 cases) had issues requiring verification of state workers’ compensation payments. However, 48 percent (84 cases) of the cases with the longest processing times had no complicating factors at all. Currently, SSA does not routinely identify cases that require extra case processing because of complicating factors such as state workers’ compensation payments. However, fair implementation of a link between the user fee and SSA’s timeliness of payments—for example, reducing or eliminating user fee payments if SSA did not pay the attorney within 120 days of the OHA decision—should treat such cases differently from other cases with no complicating factors at all. From our review of the SSA processing system, it is not clear, as a practical matter, how SSA could separate and account for the different types of cases without considerable extra administrative burden. Technically, the vast majority of attorney fee payments each cost the same amount to process; however, equity concerns arise when considering a fixed fee instead of a percentage. The vast majority of fees are based on fee agreements (88 percent in 2000, according to SSA), and the steps to process an approval and payment of a fee agreement remain the same regardless of the ultimate amount of the payment—which is dependent upon the claimant’s past-due benefits, not the amount of work performed. Thus, because the costs are the same regardless of the amount of the payment, a fixed fee more accurately reflects the actual costs borne by SSA per payment. However, the impact of a fixed charge per payment could vary significantly, depending solely on the final amount of the claimant’s past-due benefits. To illustrate, according to SSA data, 17 percent of the attorney fees paid out in 1999 were for amounts of $1,000 or less, and 39 percent were for $2,000 or less, although it is not clear exactly what amount was finally paid to an attorney (there can be multiple payments to one attorney). Since fee agreements were applicable in most instances, this would mean that these were cases where the claimant’s past-due benefits were for amounts of $8,000 or less. Using 1999 costs and payments, if attorneys were charged a fixed amount for each payment rather than a 6.3 percent user fee, the fixed charge would have been $176 per payment. Under a fee agreement specifying that the attorney would be paid 25 percent of the past-due benefits, if the claimant’s past-due benefits were $8,000, a user fee of $176 would be 8.8 percent of the attorney’s payment of $2,000. If, on the other hand, the claimant’s past-due benefits totaled $16,000, then the fee would be $4,000, and the same fixed charge would be 4.4 percent of the attorney’s payment. The impact on attorneys representing claimants with smaller benefit claims can thus be relatively greater than that on attorneys whose claimants are owed larger benefits. The current fee cap—limiting fees under fee agreements to 25 percent of past-due benefits or $4,000, whichever is less—was first set 10 years ago, in 1991, and has not changed since that time. However, although the actual cap has not changed, the DI benefits on which the fees are based have been increased annually to account for inflation in the cost of living. Thus, unless attorney fees hit the $4,000 cap, fees should have gradually increased as benefits have risen.
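The fixed-charge comparison above can be reproduced directly from the figures given in this statement (the $176 charge and the 25-percent fee agreement terms are as stated):

```python
# A $176 fixed charge falls more heavily on smaller fees than a flat 6.3 percent would.
FIXED_CHARGE = 176.0

for past_due in (8_000, 16_000):
    fee = min(0.25 * past_due, 4000.0)
    print(f"past-due ${past_due:,}: fee ${fee:,.0f}; "
          f"fixed charge is {FIXED_CHARGE / fee:.1%} of the fee")
# past-due $8,000: fee $2,000; fixed charge is 8.8% of the fee
# past-due $16,000: fee $4,000; fixed charge is 4.4% of the fee
```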
However, the data from SSA are not clear as to how frequently attorneys may reach the maximum fee of $4,000 in their cases. According to SSA data, the breakdown of attorney fee payments in various dollar ranges stayed fairly consistent between 1995 and 1999. Thus, about 40 percent of payments have been less than $2,000, about 20 percent have been between $2,000 and $3,000, while the remaining 40 percent have been between $3,000 and $4,000. SSA does not keep records on how many payments are issued for the maximum $4,000. In SSA’s recent one-day sample of payments processed on August 10, 2000, one-third (33 percent) of the 625 fee agreement cases processed that day had been paid at the $4,000 limit. SSA officials, however, believe that this percentage may have been unusually high. Without reliable data, we were unable to ascertain the full impact of the current cap on attorney fees. The SSI program was created in 1972 as an income assistance program for aged, blind, or disabled individuals whose income and resources are below a certain threshold. SSI payments are financed from general tax revenues, and SSI recipients are usually poorer than DI beneficiaries. While SSA currently approves the fee arrangements between SSI claimants and their attorneys, it does not withhold money from the past-due benefits to send to the attorneys. SSA and some advocates for the poor have argued against the extension of the fee payment services to SSI claimants. In their view, SSI recipients tend to be poorer than DI beneficiaries, and deducting an attorney fee from their past-due benefits would take money from those who need it the most. SSA also points to the added administrative burden that the additional fee services would entail. On the other hand, others believe that the fee payment services should be extended to SSI claimants because providing a certain source of compensation for attorneys would tend to increase the representation of SSI claimants and possibly result in more successful cases for SSI claimants. According to 1999 data from OHA, applicants for DI benefits (or DI and SSI together) were more likely to be represented by an attorney than those applying only for SSI benefits. An official representing SSA hearing officers told us that he believed that applicants with a legal representative tended to fare better than those without one because their cases are better presented in OHA proceedings. In general, legal representation of DI claimants in OHA proceedings has steadily increased in the past 2 years. During the first quarter of calendar year 1999, attorneys represented DI claimants in 73.4 percent of cases presented to OHA. By the end of calendar year 2000, legal representation of DI claimants had risen to 76 percent. However, there was a slight dip in attorney representation for DI cases in the second full calendar quarter following the implementation of the user fee in February 2000—the months of July through September 2000. The percentage of attorneys representing claimants for DI benefits only (not SSI benefits as well) declined to 74.3 percent, from 75.3 percent in the months of April through June. In the next quarter (October through December 2000), though, the percentage of attorney representation rose again—to 76 percent. For the first quarter of calendar year 2001, the rate dipped once more, to 75.4 percent.
Currently, once SSA determines the past-due benefits owed to DI claimants, it issues two checks—one to the claimant and another to the claimant’s attorney. One proposal would change this process by issuing one single check for the total amount of the past-due benefits—made out jointly to the claimant and the attorney—sent directly to the attorney. The attorney would deposit the check into an escrow account and pay the past-due benefits, minus his or her fee, to the claimant. Such a change could have serious policy implications, however. For instance, SSA currently attempts to pay the claimant as soon as possible after a favorable decision. Joint checks might delay payment to the claimant because the claimant would need to wait until the attorney deposited the check into an escrow account. Also, using a joint check would reduce SSA’s ability to enforce the fee limits and could increase the risk that attorneys might shortchange claimants. A number of administrative issues would need to be addressed, as well. Because SSA must report the claimant’s benefits to the Internal Revenue Service, it must track the amount each claimant receives. With joint checks, the attorney would need to certify the amount provided to the claimant. In addition, SSA’s DI claims processing system would need to be adjusted to handle joint checks. Inefficiencies in the current process increase both the time it takes to pay the attorney fees and the costs of administration. One segment of attorney fee processing—the fee approval process—was substantially simplified in 1991. Systems support could streamline the second segment of the processing—the fee payment—thus lowering the annual administrative costs and cutting processing time. If SSA automated this final segment of the fee processing, it could help improve customer service for both claimants and their attorneys. Mr. Chairman, this concludes my prepared statement. At this time, I will be happy to answer any questions you or other Members of the Subcommittee may have. For information regarding this testimony, please contact Barbara Bovbjerg at (202) 512-7215. Individuals who made key contributions to this testimony include Shirley Abel, Kelsey Bright, Nancy Peters, and Dan Schwimer. This appendix describes our adjustments to the Social Security Administration’s (SSA) estimate of the costs of its fee processing services. SSA estimated the costs for the two major components of these services: (1) the 1999 Office of Hearings and Appeals (OHA) fee approval process and (2) the 2000 fee payment process. We describe our adjustments to the costs of each component in separate sections below. In general, we were unable to precisely correct the estimate because of inadequate data and unclear cost definitions. However, with rough adjustments to the original estimate, we have attempted to approximate a “lower bound” of SSA’s costs. We have discussed each of our adjustments, and our proposed corrections, with SSA officials. According to SSA’s estimate, OHA staff spent 236 work years on about 206,000 fee approval actions, at a cost of $13 million in 1999. These actions included approval of both fee agreements and fee petitions, as well as reviews of disputes over fees. The vast majority of these actions involved approval of fee agreements—in 1999, OHA approved about 179,000 fee agreements. The cost estimate, however, included work not related to disability insurance (DI) cases and used an unrealistically high estimate of the staff time taken to review fee agreements.
While we could identify these problems, we could only approximate the actual adjustment needed to correct the original estimate because of insufficient data. First, the estimate included costs for cases that were not DI cases. In 1999, there were about 185,000 OHA cases with attorney representation that resulted in favorable decisions for the claimant. However, of these cases, only about 79 percent (146,000) involved claims for DI benefits; the remaining 21 percent (39,000) involved claims for benefits under the SSI program only. SSA officials acknowledged that their estimate included work on fee approvals for cases other than DI cases, but they were unable to provide us with a more detailed breakout of workload (e.g., the number of fee agreements that were also DI cases). In addition, the SSA estimate appears to overstate the time it takes to routinely handle a fee agreement. Over the past 10 years, SSA’s role in regulating attorney fees has become much less burdensome. With the simplified fee agreement, SSA staff can, for the most part, verify that the claimant has agreed to pay his or her attorney 25 percent of past-due benefits, instead of reviewing the itemized hourly charges commonly presented in fee petitions. Despite the steady trend toward uniform use of the simplified fee agreement, the most recent estimate of the time it takes to review a fee agreement is twice that used in SSA’s 1995 cost estimate. In 1995, SSA estimated that it took about 45 minutes of staff time to review and process a fee agreement. In 1999, however, its estimate of the same review had risen to 94 minutes per agreement. The 1999 estimate included about 47 minutes to evaluate whether each agreement meets the regulatory criteria—32 minutes by a senior case technician and, once this is done, 15 minutes by the administrative law judge (who also takes 6 minutes to sign each agreement). After the judge signs the order, the estimate included 16 minutes for a clerk to mail the fee approval agreement (with the rest of the case file) to the payment processing center. While we were unable to quantify the actual staff time, the 1995 estimate of 45 minutes appears to be the better approximation of staff time spent handling routine fee agreement approvals, particularly in view of the increasingly uniform use of this simplified fee contract. To develop the 1999 estimate of staff time, SSA officials told us that they polled the OHA regional offices over a 4-day period. They received responses from only 6 of the 10 regional offices, and those responses included wide variations in staff time—for instance, the estimate for the review by the administrative law judge ranged from 1 minute to 5 days. Additionally, the time for mailing the fee agreement included the time spent to mail the entire OHA decision. Our review suggests that the OHA costs in 1999 may be as low as $6.4 million, or 51 percent of the original estimate. Our adjustments to the OHA estimate are as follows: 1. Because SSA could not provide us with a detailed breakout of the OHA work on DI cases, we reduced the total estimate by 21 percent—the proportion of non-DI cases in the OHA 1999 workload. This adjustment reduced the estimate by $2.7 million, to $10.3 million. 2. Once we removed the non-DI cases from the estimate, we then reduced the estimate of staff time spent on fee agreement approval by one-half, roughly the difference between the 1995 and the 1999 staff estimates. This change lowered the OHA estimate by $3.9 million (30 percent), to $6.4 million. 3.
We restated the estimated costs in terms of costs in 2000, to be comparable to SSA’s estimates of processing costs. To do this, we inflated the estimated costs (and our proposed adjustments) by 6.6 percent, the amount by which the cost of the average OHA staff year increased in 2000 over 1999. The original OHA estimate, our adjustments to the estimate, and the limitations of these adjustments are shown in table 2. According to SSA, its payment processing centers took 673 work years to process $512 million in attorney fee payments in 2000, at a cost of $40.2 million. SSA developed this estimate from the standard system of cost allocation it uses at the payment centers. Under this cost allocation system, each payment center’s workload is quantified by a random check, conducted daily, of the work done by all employees at the center. Each type of work at the payment centers is categorized, and one major category of work includes that done on attorney fee processing. This work category (called “atfee” in the centers) includes all work done at the payment centers related to handling and paying fee agreements and fee petitions. The work includes all cases that involve attorney fees—field office cases (initial determinations and reconsiderations) as well as OHA cases. Our review indicated that the payment processing estimate appears high. It included an incorrect cost amount; failed to adjust for one-time use of premium overtime pay to reduce processing backlogs; included costs not clearly associated with fee payments; and used average salary costs even though the staff who routinely work on most payment processing receive below-average pay. However, we were, for the most part, unable to make precise adjustments for these problems because of limited data and unclear definitions as to what counts as a fee processing cost. First, the original estimate erred by using the wrong total cost amount for the largest processing center. In creating the estimate, SSA used an incorrect category from its cost accounting system to calculate the center’s costs. This cost category included costs unrelated to the work necessary to process attorney fees. Second, the estimate did not adjust for premium overtime pay. Because the user fee required by the Ticket to Work Act was effective February 1, 2000, SSA staff worked overtime in February and March to clear out the backlog of fee payment cases pending as of February 1. According to testimony by SSA’s Assistant Commissioner before the Subcommittee on Social Security, House Committee on Ways and Means, in June 2000, SSA provided an extra 111 staff work years to handle the backlog of fee cases, diverting resources from other workloads to process the claims on a priority basis. Third, the general “atfee” work category used to designate attorney fee processing in the centers appears to include subcategories of work too broad to be included in the estimate—in our view, the subcategories include work that would be necessary for normal case processing even if SSA did not pay attorney fees. According to staff in the centers, the subcategory “atfee misc” includes correspondence from attorneys that cannot be clearly categorized as dealing with either fee agreements or fee petitions. For example, a letter would be classified as “atfee misc” if it included issues related to the claimant as well as a question about fees.
One supervisor told us that the designation of the work category was made by a GS-4 or GS-5 file clerk, who would classify any correspondence with an attorney’s letterhead as “atfee misc” if the letter could not be clearly identified with another specific work category. Finally, the staff salary costs included in the estimate should be adjusted to reflect more accurately the lower salaries of the technicians who routinely work on payment processing. SSA’s estimate is based on the average salary of all its employees who work on DI cases involving OHA decisions. However, the staff working on these cases include both claims authorizers (generally paid a GS-11 salary) and benefit authorizers (generally paid between GS-7 and GS-9 salaries). For the most part, the lower-paid benefit authorizers process the attorney fees, while the higher-paid claims authorizers perform the main case processing. From SSA data, it appears that over 50 percent of the work on DI cases with OHA decisions is case processing work routinely performed by the higher-paid claims authorizers. Taking into account the points noted above, we believe that the “lower bound” costs for the processing centers could be as low as $28.6 million. Our calculation of the adjusted estimate is as follows: 1. We corrected the SSA estimate for an error in its calculations of the processing center costs. This correction reduced the estimate by $1.9 million (5 percent), to $38.3 million. 2. We adjusted for the premium overtime pay. We reviewed data provided by SSA on the increase in overtime pay in 2000 over the prior year. Using this information, we allocated a part of the increase in overtime pay to the centers’ attorney fee work, reducing the estimate by $0.5 million (1 percent), to $37.8 million. 3. We eliminated the costs associated with the subcategory “atfee misc.” When these costs were subtracted, the original estimate was reduced by $5.5 million (13.7 percent), to $32.3 million. Because some of the work included in this subcategory was likely to be directly related to fee processing, eliminating it most likely understated some of SSA’s actual costs. 4. We adjusted the estimate to better reflect the below-average pay of the staff who routinely handle attorney fee processing. SSA was unable to provide us with data to precisely allocate the salary costs of those working on fee processing; hence, we assumed that all staff who worked on attorney fee processing were paid at the GS-8 step 5 level ($33,202) in 2000, while all the rest of the staff who worked on the same cases were paid at the GS-11 step 5 level ($44,369). This adjustment reduced the original estimate by $3.7 million (9.2 percent), to $28.6 million. The adjustments to the payment processing estimate are summarized in table 3.
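The two adjustment chains in this appendix, and the $35.4 million lower bound reported in the body of the statement, can be verified directly from the dollar amounts above (the 6.6 percent restatement factor is from the OHA section):

```python
# Reproduce the "lower bound" from the adjustment chains described above.
oha_1999 = 13.0 - 2.7 - 3.9  # remove non-DI cases, halve review time -> 6.4
oha_2000 = oha_1999 * 1.066  # restate 1999 costs in 2000 terms -> ~6.8

centers_2000 = 40.2 - 1.9 - 0.5 - 5.5 - 3.7  # the four corrections -> 28.6

print(f"${oha_2000 + centers_2000:.1f} million")  # ~$35.4 million
```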
To ensure that people claiming disability insurance program benefits can obtain legal representation at a fair price, the Social Security Administration (SSA) is required to regulate the fees that attorneys charge claimants for representing them in disability claims before the agency. Balancing the needs of claimants with those of their attorneys, the law limits the fees that attorneys can charge but also guarantees that those fees will be paid from the claimants’ past-due benefits. Inefficiencies in the current process increase both the time it takes to pay the attorney fees and the cost of administration. One segment of attorney fee processing—the fee approval process—was substantially simplified in 1991. Systems support could streamline the second segment of the processing—the fee payment—thus lowering annual administrative costs and cutting processing time. Automation of this final segment of the fee process could help improve customer service for both claimants and their attorneys.
FAA catalogs its acquisition programs in its annually updated Capital Investment Plan (CIP). The CIP identifies planned capital investment in the National Airspace System (NAS) for the next 5 years, consistent with the amount requested in the agency’s annual budget submission. Appendix C of the CIP, which identifies the anticipated budget line items, is divided into five activities: (1) Engineering, Development, Test, and Evaluation; (2) ATC Facilities and Equipment; (3) Non-ATC Facilities and Equipment; (4) Facilities and Equipment Mission Support; and (5) Personnel Compensation, Benefits and Travel. The CIP for fiscal years 2012 through 2016 contains 106 funded acquisition programs with estimated total budgets (through 2016) of more than $14 billion. FAA considers 83 of these acquisition programs to be ATC related: 18 involve Engineering, Development, Test and Evaluation, and 65 involve ATC Facilities and Equipment. The 83 programs include 30 that have had program baselines approved by FAA’s Joint Resources Council (JRC), which is responsible for approving major programs. These 30 baselined programs include communications, navigation, and surveillance systems that are key to ATC operations. FAA considers 5 of the programs to be foundational parts of NextGen, and all are key to modernizing the existing ATC system. Figure 1 illustrates the universe of FAA acquisitions for fiscal years 2012-2016.

FAA has developed and uses its Acquisition Management System (AMS) to provide policies and guidance for managing ATC system programs through all phases of the programs’ life cycles (see table 1). The Air Traffic Organization (ATO) within FAA is responsible for operating, maintaining, and modernizing the nation’s current ATC system. The acquisition program baseline defines the cost, schedule, and performance baselines for the investment program. The JRC, which determines whether to approve a cost and schedule baseline, also approves rebaselining, the process through which the agency documents and approves major changes to a program’s previously approved budget or schedule. Rebaselining resets the estimated costs and schedule used to determine how the program will be held accountable and can occur before the program is deployed. Once a program is rebaselined, FAA reports on the performance of the program based on the revised cost and schedule.

Although the rationale for rebaselining can be reasonable (for example, when a program’s scope has been expanded), reporting a program’s performance based on a rebaselined cost or schedule can also skew or conceal from Congress and other stakeholders the program’s actual total costs or overall timeline. We previously reported that the absence of information on rebaselining in ATO’s performance reporting could cause managers and other stakeholders, including Congress, to think that performance was better than it actually was. We recommended that FAA regularly report on its overall, long-term performance in acquiring ATC systems by providing, in FAA’s annual Performance and Accountability Report, the original budget and schedule baselines for each rebaselined program and the reasons for the rebaselining. In response to our recommendation, FAA currently provides this information in Appendix D of its CIP, where it details baseline cost and schedule information for major acquisition programs. NextGen involves changes to every aspect of air transportation (see fig. 2).
NextGen requires the acquisition of new integrated systems (both software and hardware), flight procedures, aircraft performance capabilities, and supporting infrastructure to transform the current air transportation system into one that uses satellite-based surveillance and navigation operations instead of ground-based radar. These changes are intended to increase the efficiency and capacity of the air transportation system while maintaining safety and accommodating anticipated future growth. The planning for NextGen began in 2003 and is now focused on implementing improvements in the midterm (by 2018) and in the far term (by 2025). (See GAO, Next Generation Air Transportation System: Challenges with Partner Agency and FAA Coordination Continue, and Efforts to Integrate Near-, Mid-, and Long-term Activities Are Ongoing, GAO-10-649T (Washington, D.C.: Apr. 21, 2010).)

Of the 30 baselined FAA ATC programs we reviewed, 19 have not increased in cost, but 11 have experienced cost increases ranging from $2 million to over $2 billion. Of the 19 programs whose costs have not increased, 7 experienced a cost decrease, while the remainder have not changed significantly. However, the 11 programs that exceeded their initial estimated costs account for over 60 percent of total program costs for the 30 baselined programs—$11 billion of $17.7 billion. These 11 programs are among the most complex of FAA’s major acquisitions in that each involves a large amount of software engineering (see table 3). The 3 programs with the largest cost increases—totaling more than $4 billion—are key to ATC modernization.

Several factors contributed to cost overruns for the Standard Terminal Automation Replacement System (STARS), WAAS, and ERAM programs and required additional congressional appropriations or reductions in program scope. Our previous work disclosed that the near tripling of the STARS budget resulted from insufficient involvement of stakeholders and requirements growth. The WAAS program began in 1998 with an initial cost estimate of $1 billion and a current estimate of $3 billion. We reported previously that FAA’s lack of scientific and technical expertise resulted in unplanned work and contributed to cost increases as well as delays in the deployment schedule. Additionally, FAA changed how it accounted for certain costs in the capital budget in 1999, which further raised the cost estimate to $3.3 billion. FAA recently revised that estimate down to the current $3 billion during the 2009 rebaselining because, according to FAA officials, certain program requirements had been met in 2006.

As previously mentioned, ERAM is a key modernization system and will be the backbone of the NextGen system. FAA originally submitted to Congress an estimated cost of $2.1 billion in 2003, and the program is now expected to cost about $2.4 billion—an increase of about $330 million. According to FAA, various software issues (e.g., unsuccessful transmission messages and inaccurate data pairing of aircraft and traffic display), as well as problems interfacing with other facilities and systems, have contributed to the cost increases and delays. The extent to which unanticipated requirements, unplanned work, and underestimates of the complexity of software development, among other factors, have contributed to other FAA ATC acquisition program cost overruns and scheduling delays is discussed later in this report.
We found that 15 of the 30 baselined programs either have experienced no change in schedule or were completed early or on time; however, the other 15 programs are projected to be completed later than originally estimated. These delayed programs range from the Integrated Display System, which will consolidate information from several weather subsystems into a single display and which FAA expects to complete 2 months after its initial estimated completion date, to WAAS, which FAA estimates will be completed in 2013—more than 14 years after its initial estimated completion date (see table 4). Ten of the 15 programs with schedule delays also experienced cost increases. However, even if a schedule delay does not result in a direct cost increase to that program, the delay can lead to increased costs for FAA because FAA staff must continue to manage the acquisition over the longer term as it is being implemented, as well as maintain any legacy system that the program is replacing. Because of program interdependencies, a schedule delay can also affect how and when other programs will be implemented.

Cost increases and schedule delays occurred because of several factors, all of which have been long-standing challenges for FAA and some of which continue to affect programs despite FAA efforts to mitigate them. Specifically, these factors include (1) additional, unanticipated system requirements work; (2) insufficient stakeholder involvement throughout system development; (3) underestimates of the complexity of software development; and (4) unanticipated events, including funding decreases or work stoppages (see table 5). Of the 30 programs we reviewed, 15 experienced cost increases, schedule delays, or both, and we were able to determine that cost increases or schedule delays for 11 were attributable to one or more of these factors. Following are some examples of how these contributing factors led to cost increases or schedule delays in some of FAA’s ATC baselined programs:

Unanticipated requirements or work: For nine of the programs in table 5, FAA has had to undertake substantially more development work than planned because FAA program officials originally misjudged the extent to which commercial off-the-shelf or other nondevelopmental solutions, such as those procured by another agency, would meet FAA’s needs. For example, although WAAS was being developed by an integrated product team that included representatives from several FAA offices, the team did not effectively resolve problems in meeting a required performance capability—that pilots be warned in a timely manner when a system may be providing them with potentially misleading and possibly hazardous information. These problems resulted in unanticipated work and contributed to the rise in WAAS’s cost from the original estimate of $509 million in 1994 to about $2 billion in 2005.

Insufficient stakeholder involvement: As we previously reported, ERAM was designed at a time when air traffic controllers did not participate in efforts to design and test new systems. Because active users of the system from different locations could not provide insight early on, issues that could have been addressed early in the design phase went unaddressed. In response, FAA has taken steps to improve the testing of new systems in order to reduce the likelihood of larger-than-anticipated software issues arising during system implementation.
For example, FAA and the controllers’ union recently entered into a memorandum of understanding to bring controllers into the testing and evaluation phase of ERAM. Under this agreement, the controllers’ union will have ERAM technical, evaluation, and training representatives, as well as a team of 16 controllers (12 from en route facilities and 4 from terminal facilities), who will be detailed to test and validate software fixes with contractor engineers at the FAA Technical Center (Tech Center). In addition, our previous work disclosed that the near tripling of the Standard Terminal Automation Replacement System’s budget resulted from insufficient involvement of stakeholders and requirements growth—two systemic factors that we found led to acquisitions missing their budget and schedule targets. This, in turn, contributed to cost growth, schedule delays, and eventually a reduction in the number of systems to be deployed.

Underestimates of the complexity of software development: This factor contributed to cost increases and schedule delays for ERAM, as well as issues with costs, scheduling, or both for two other programs. In 2010, FAA tested ERAM at two key sites (the Seattle and Salt Lake en route centers) on live air traffic, usually late at night when air traffic volume was low. During this testing, FAA encountered both anticipated and unanticipated software issues, which prompted the test sites, at times, to revert to using FAA’s legacy en route computer system. Specifically, software instructions to a controller in one sector to hand off control of an aircraft to a controller in an adjacent sector failed, and flight data were lost or reassigned to another flight. While some testing at FAA’s Tech Center preceded testing at the two key sites, the Tech Center could only test limited scenarios, and none of the scenario testing identified this software error. In addition, as discussed earlier, ERAM was designed during a time when air traffic controllers did not participate in efforts to design and initially test new systems. FAA anticipated the potential for software issues at the outset of the program but initially scheduled only approximately 6 to 9 months of contingency time between achieving initial operating capability and the operational readiness demonstrations at these sites, leaving little buffer for any potential delays. FAA worked with its contractor to correct a number of software issues, but further testing on live air traffic at the two test sites continued to produce critical safety errors. As a result, in March 2010, FAA decided, with the support of the air traffic controllers’ union, to halt all ERAM testing on live traffic and to revise the deployment schedule. The program was rebaselined in June 2011, and the program’s completion date was extended from December 2010 to August 2014. As a result of the schedule delays, the rebaselined cost estimate increased from $2.1 billion to $2.4 billion.

Unanticipated events: Unanticipated events at implementation sites and unanticipated funding issues have delayed several programs’ schedules and increased costs. For example, Airport Surveillance Radar-11 was originally scheduled to be completed in June 2009 but was delayed to June 2010. FAA reports indicated that the delay was due to an unusually protracted real estate acquisition at one site and issues involving validating performance during seasonal radar operations at another site.
Similarly, FAA’s Runway Status Lights program—which involves installing airport lighting equipment that visually signals to pilots when it is unsafe to enter, cross, or begin takeoff on a runway—has experienced schedule delays because of construction issues at five sites (Charlotte, North Carolina; Fort Lauderdale, Florida; Las Vegas; Minneapolis; and Washington Dulles). FAA officials attributed some of these delays to the furlough of some FAA employees in July 2011 and a freeze on contractor funding during the furlough, which resulted in work stoppage orders for several projects, including Runway Status Lights. FAA program managers will need to assess the impact of the furlough on other programs that had experienced work stoppage orders, including ADS-B, the Standard Terminal Automation Replacement System, SWIM, WAAS, and various weather programs.

The interdependencies of ATC acquisition programs have become more prominent as the NextGen program shifts from planning to implementation, so that cost increases and schedule delays in one program could have a cascading effect on other programs. As discussed earlier, due to the integrated nature of NextGen, the development and delivery of many of its component programs are mutually dependent on the development and delivery of one or more other programs. For example, ERAM, FAA’s new en route computer system, is critical to the delivery of ADS-B capabilities such as broadcasting flight information. ERAM is also pivotal to the on-time implementation of two other key NextGen programs—Data Communications (DataComm), which is estimated to cost about $3 billion, and the NextGen information technology architecture, SWIM, which is estimated to cost over $550 million. Due in part to ERAM’s delay, FAA was forced to delay the Data Communications baseline date by approximately 6 months, rebaseline SWIM segment 1, and delay the SWIM segment 2 baseline date to 2012. The longer-term effects of these delays are unclear, but certain SWIM capabilities could be delayed for several years, and the progress of other programs that are dependent on SWIM’s system integration could be hindered as well.

Thus, looking more broadly, the implementation of NextGen—both the midterm (now through 2018) and far-term (2019-2025) schedules—will be affected by how well FAA manages program interdependencies. FAA has not developed a full listing of how ERAM schedule slippages, or slippages in other programs that are critical to NextGen, could either affect other programs’ implementation schedules or delay the implementation of capabilities and improvements (GAO-10-629). In 2008, we recommended that FAA improve the usefulness of ATO’s acquisition performance reporting by including information in the agency’s Performance and Accountability Report or elsewhere on the potential effect of any budget or schedule slippages on the overall transition to NextGen. This recommendation remains open, as FAA has not definitively indicated how it will track slippages that will affect other dependent NextGen programs.

FAA’s acquisition management system was not designed for managing NextGen programs in an integrated way. To assist in managing NextGen portfolios, FAA is starting to monitor all the activities of a particular operational improvement to ensure integration is on track. As we noted in our 2010 report, as this approach is more fully implemented, it will likely clarify the impact of slippages in one program’s schedule on the implementation status of other NextGen programs and operational capabilities.
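Because the report describes these interdependencies as a web in which one program’s slip can cascade to others, slippage tracking is essentially a graph problem. The following minimal Python sketch is ours, not an FAA tool: the dependency edges mirror those described above, but the assumption that a slip passes through each dependency undiminished is ours, for illustration only.

```python
# Illustrative sketch only (not an FAA system): propagating a schedule slip
# through a program-dependency graph. The edges mirror those the report
# describes (ERAM feeds ADS-B, DataComm, and SWIM; SWIM segment 1 feeds
# segment 2); the pass-through assumption is hypothetical.

from collections import defaultdict

dependents = {  # program -> programs whose schedules depend on it
    "ERAM": ["DataComm", "SWIM segment 1", "ADS-B broadcast services"],
    "SWIM segment 1": ["SWIM segment 2"],
}

def propagate_slip(origin: str, slip_months: float) -> dict:
    """Worst-case slip inherited by everything downstream of `origin`,
    assuming a slip passes through each dependency undiminished."""
    inherited = defaultdict(float)
    stack = [(origin, slip_months)]
    while stack:
        program, slip = stack.pop()
        for dep in dependents.get(program, []):
            if slip > inherited[dep]:   # keep the worst case seen so far
                inherited[dep] = slip
                stack.append((dep, slip))
    return dict(inherited)

# Example: the roughly 6-month DataComm delay attributed in part to ERAM.
print(propagate_slip("ERAM", 6))
```

In practice, a slip may be partially absorbed by float in the downstream program; quantifying that absorption is precisely what a reliable integrated master schedule would do.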
In addition, as we will discuss in the next section, FAA is developing an Integrated Master Schedule for the entire NextGen initiative that is, in part, intended to show how changes in program schedules affect other programs and the timelines for the NextGen initiative as a whole. However, as we discuss later, the schedules for the four programs we reviewed in detail are not reliable, and reliable schedules at the program level will be needed to develop a reliable Integrated Master Schedule for NextGen.

According to FAA, it is taking actions to address the factors that have contributed to cost increases and schedule delays. In 2011, FAA assessed the NextGen effort as part of its Foundation for Success initiative and implemented the “Idea to In-Service Management” (I2I) process, which it believes will improve the way FAA develops, acquires, and implements new NextGen capabilities from conception through implementation. The I2I concept is intended to improve collaboration early in the acquisition process, resulting in better-defined capabilities and an early indication of costs and benefits. These enhancements are intended to resolve many of the challenges associated with overall program management and enable FAA to focus on program management best practices. FAA believes that I2I will also result in improvements in specific areas that have presented challenges in the past, such as cost estimating, anticipating requirements and work, stakeholder collaboration, software development, and systems integration.

Also in 2011, FAA implemented a reorganization of the NextGen Operations and Planning Office and ATO, which FAA believes will support the I2I process and improve acquisitions of NextGen programs. Specifically, FAA created a NAS Lifecycle Planning Division within the NextGen Operations and Planning Office to focus on integrating NextGen programs from a cost, schedule, and systems capability perspective. Within ATO, FAA established a new Program Management Office, which puts the responsibility for the program management of all NextGen and other major ATC acquisitions within a single organization. By combining program managers into one organization, FAA hopes to create a stronger acquisition program and improve the consistency and implementation of best practices. According to FAA, these organizational changes allow responsibilities for acquisitions to be better defined so that the agency can more efficiently set strategic direction, define operational requirements, ensure system integration, oversee implementation processes, and ensure accountability throughout the acquisition life cycle.

To improve the acquisitions management process, FAA has also divided large acquisition programs into segments. A segmented, or phased, approach is being taken with programs like SWIM and CATMT. This approach breaks a larger program into smaller, more manageable pieces to lower risk. We have noted in the past that this approach can improve management by providing for midcourse corrections and, thus, help FAA avoid costly late-stage changes. However, the approach can also increase the duration and possibly the cost of a program. According to FAA officials, a segmented approach allows the agency to more effectively manage acquisitions at both the program and the enterprise architecture level. An enterprise architecture approach provides the structure to relate organizational mission, vision, and goals to business processes and the technical or information technology infrastructure required to execute them.
FAA officials stated that many of the factors we identified as contributing to cost increases and schedule delays highlight the need for an enterprise-level perspective throughout the acquisition process. The I2I process is intended to provide an enterprise-level focus and improve collaboration across related programs.

Our review of the ADS-B, CATMT, SWIM, and WAAS cost estimates showed that while each program followed at least some of the four characteristics of high-quality and reliable cost estimates—well-documented, comprehensive, accurate, and credible—none of the programs adhered closely enough to those characteristics to create a reliable cost estimate. As previously noted, these characteristics incorporate the 12 steps consistently applied by cost-estimating organizations throughout the federal government and industry and considered best practices for developing cost estimates. The results of our review of the ADS-B, CATMT, SWIM, and WAAS cost estimates, which are summarized in table 6, show that they were most aligned with the characteristic of comprehensive cost estimates but need improvement in the other three areas, particularly the characteristics of accurate and credible estimates.

Imprecise estimates can result in Congress unnecessarily authorizing and appropriating millions of dollars for programs. As noted, in some cases FAA modified a program’s requirements in order to stay within the original cost estimate, or Congress had to appropriate more funds or reduce the program’s scope to allow FAA to finish the program. An assessment of the cost estimates for these four programs, as well as FAA’s other major acquisition programs, would allow FAA to better understand whether its cost estimation guidelines and our characteristics of high-quality cost estimates are in fact being followed, and thus better ensure that the estimates are reliable (see table 6). Because the four programs were generally similar in the extent to which they met each of the four characteristics, the following discussion summarizes the strengths and weaknesses we found for each characteristic across the four programs. A more detailed discussion of our findings is contained in appendix IV.

Well-documented. Two of the four cost estimates we analyzed substantially met the characteristic of being well-documented; the other two partially met this characteristic. A well-documented cost estimate is thoroughly documented, including identifying specific source data and their significance, detailing calculations and results, and explaining why particular cost estimating methods were chosen. In other words, sufficient documentation exists such that an analyst unfamiliar with the program could recreate the cost estimate and arrive at the same results. For example, the SWIM estimate provided detailed documentation describing the program, in addition to the methodology, calculations, and quantities used to develop the estimate. However, none of the four estimates sufficiently captured all of the source data used, addressed the data’s reliability, or described how various forms of data from disparate sources were normalized (i.e., described in like terms). For example, the WAAS estimate was based, in part, on actual labor costs from a previous contract, but the program office provided no evidence that these data were evaluated for reliability or accuracy.
Similarly, the CATMT estimate routinely relied on subject matter expertise as a source for assumptions, such as the cost of labor, but did not document the experts’ qualifications, backgrounds, underlying assumptions, or data sources. Moreover, we noted that three of the four estimates often relied substantially on expert opinion rather than on data. While expert opinion can be useful in the absence of data, it is subjective and generally should be used sparingly in cost estimates. Because data are the foundation of every cost estimate, data quality affects the overall quality of the estimate. In addition, because data are gathered from a variety of sources and take many different forms, normalization helps to improve consistency with other cost information and enables valid comparisons and projections.

Comprehensive. All four cost estimates we analyzed substantially met the characteristic of being comprehensive. For an estimate to be comprehensive, it should include full life-cycle costs, completely define the program with sufficient detail, include cost elements that are traceable to the statement of work or objective to ensure that none are omitted or double counted, and document all cost-influencing ground rules and assumptions. We found that the ADS-B, CATMT, and SWIM cost estimates included all life-cycle costs, regardless of program phase or funding source, and the ADS-B and SWIM cost estimates completely defined the program with an appropriate level of detail. In particular, the ADS-B cost estimate included both government and contractor costs, and the WAAS cost estimate thoroughly defined the program and reflected the current schedule. The four estimates did not fully meet the comprehensive characteristic because they lacked evidence that all cost-influencing ground rules and assumptions were considered.

Accurate. None of the four cost estimates met or substantially met the characteristic of being accurate. The estimates generally adjusted costs for inflation and contained few computational or mathematical mistakes, but they were not regularly updated to reflect schedule and requirement changes, did not provide evidence of documenting or reviewing differences between planned and actual costs, and were not based on historical cost data from comparable programs. For example, the ADS-B, CATMT, and SWIM cost estimates provided no evidence that they were updated to reflect program changes, such as schedule slippages or varying assumptions, and did not include the programs’ current actual costs. Although the WAAS estimate included evidence that it was updated to reflect major changes in technical and program requirements, such as the four rebaselinings the program has undergone since its 1998 inception, it did not include evidence that estimated costs were replaced with actual costs as the program advanced. Cost estimates that are not regularly updated with current information cannot provide decision makers with the accurate information that is necessary, for example, when new system requirements are called for under tight budget conditions. In addition, comparing planned and actual costs enables cost estimators to measure the accuracy of their estimates and refine their processes. Finally, none of the four programs more than minimally used historical data to develop their cost estimates.
Had historical data been used, the estimators would have had additional insight into actual costs on programs that used similar technologies, which could have been used, for example, to challenge overly optimistic assumptions and bring more realism to the cost estimate.

Credible. None of the four cost estimates met or substantially met the characteristic of being credible, which includes obtaining an independent cost estimate from a group outside the acquiring organization and cross-checking the major cost elements in that estimate against cost drivers identified through sensitivity and risk analyses. The ADS-B, CATMT, SWIM, and WAAS estimates lacked credibility largely because FAA did not obtain an independent cost estimate for any of the programs. In addition, the CATMT, SWIM, and WAAS estimates provided little evidence that the programs conducted sensitivity or risk analyses. Instead, each program received independent cost reviews as part of the investment decision process—even though such reviews are not required by FAA policy. FAA stated that the Investment, Planning and Analysis (IP&A) Office in the FAA Finance Organization does not prepare independent estimates but is organizationally independent of the acquisition programs and conducts independent reviews of all cost estimates. However, an independent cost review is less rigorous than an independent cost estimate. According to our cost guide, an independent cost estimate is often more accurate because the estimating team is further removed from the program office and less prone to accept overly optimistic assumptions or be burdened by organizational bias. Other federal agencies, including the Department of Defense, require independent cost estimates. Had an independent cost estimate been completed, the estimating team and program team could have identified the major differences between their estimates, reconciled those differences where possible, and provided a synopsis of the two estimates and their differences to acquisition program management.

In addition, without sensitivity and risk analyses, cost estimators cannot measure the effects of varying assumptions, and managers cannot determine, for example, the rational level of contingency reserves necessary to cover increased costs that may result from uncertainties such as unexpected design complexity, changes in requirements, or budget shortfalls—all of which FAA ATC programs, and in particular NextGen programs, have experienced in recent years. We found evidence that some level of risk analysis was conducted for ADS-B, CATMT, and SWIM, although the analysis was not sufficiently robust. For example, key cost drivers were not identified, and additional context about how the estimate could be affected by software design and development issues was not included.

We determined that the schedules for the four programs we reviewed are unreliable because none met or substantially met all nine of the best practices for developing a reliable schedule (see table 7). For example, none of the schedules fully met best practices for capturing all activities in an integrated master schedule, identifying critical paths and reasonable float for all activities, or assigning resources to those activities. Moreover, none of the schedules had documentation that provided more than minimal evidence that a schedule risk analysis was conducted.
As was the case with our review of cost estimates for the four programs, our work regarding the schedules for these programs shows that an assessment of the schedules, as well as the schedules for FAA’s other major acquisition programs, would allow FAA to understand whether the nine best practices for reliable schedules are being followed. Because the scheduling best practices are interrelated in such a way that deficiencies in one best practice will cause deficiencies in the others, a schedule must meet or substantially meet all nine practices to be reliable. For example, preparing a schedule that is program-wide—including an integrated breakdown of the work to be performed by both the government and its contractors over the expected life of the program—is a best practice. If the schedule does not capture all activities, then there will be uncertainty about whether activities are sequenced correctly or whether the schedule properly reflects the resources needed to accomplish the work, which is also a best practice. Logic and durations (that is, the time it takes to complete a specific activity) should be used and maintained to ensure realistic start and completion dates and to reflect the true status of the project—a necessary condition for conducting follow-on schedule risk analyses. Moreover, if activities are not properly sequenced with logical links, it will not be certain whether the critical path—which represents the chain of dependent activities with the longest total duration—is valid. Collectively, the weaknesses in not fully or substantially meeting all nine key practices increase the risk of schedule slippages and cost overruns, since a well-defined schedule helps to identify the amount of human capital and fiscal resources needed to execute the program. Without reliable schedules, FAA cannot conduct meaningful oversight of an acquisition program’s progress or determine whether the program is achieving the desired results. The following discussion summarizes the extent to which the schedules for the four programs we examined met best practices. More detailed information for each program regarding scheduling best practices is presented in appendix V.

We reviewed the ADS-B schedule prepared by FAA and found that it did not fully meet any of GAO’s nine scheduling best practices, resulting in an unreliable schedule. Evidence provided in the ADS-B schedule indicates that it substantially met three of the nine best practices and partially, minimally, or did not meet the other six. For example, although the ADS-B schedule provided evidence of periodic updating, it did not capture all of the effort currently called for in the approved baseline for the entire ADS-B program and, therefore, was not a fully integrated schedule. Without fully integrating government activities with contractor activities, and thereby capturing all key activities, the schedule cannot reliably estimate the program’s completion. In addition, the ADS-B schedule we reviewed did not identify critical paths or include a schedule risk analysis, which uses statistical techniques to predict a level of confidence in meeting a program’s completion date; did not logically sequence all activities and establish their durations; and had excessive float on a majority of current and planned activities.
According to program officials, a number of the issues our analysis identified were, in part, the result of the schedule’s limited time frame, which covered only a defined transitional period (October 2010 through April 2011) during which responsibility for about a third of the effort passed from FAA to its prime contractor. Officials also stated that although their schedule contains critical activities, it has not had a traditional critical path since the contractor began managing the deployment of deliverables. FAA uses contract options to order the scope, sequence, and requirements for key milestones; within those options, the contractor has the authority to implement the sequence of more discrete activities in the order it deems most appropriate. FAA program officials plan to rectify this problem, noting that with negotiations now completed, they will in the near future identify a critical path spanning all program milestones.

Because the CATMT program did not prepare an FAA schedule and instead relied on its contractor’s schedule, we reviewed the contractor schedule, which we found to be unreliable. Our analysis found that the contractor’s CATMT schedule substantially or fully met four of the nine best practices: capturing all activities, assigning resources, establishing durations, and updating the schedule. For example, the CATMT contractor schedule pertains to the current phase of the program, which is being implemented in software releases, or phases. However, there was no overarching, government-owned FAA schedule that accounts for all software releases for the entire program and would thus delineate the relation of current software release tasks to the upper-level milestones for the overall CATMT program. The CATMT schedule included detailed resource information, and the program office provided evidence that resources are tracked in detailed labor-hour spreadsheets. We also found that 90 percent of the activities were of short duration and that the program office regularly reviews the schedule, both of which are in line with best practices.

On the other hand, five of the nine best practices were partially, minimally, or not met. Specifically, the CATMT schedule lacked evidence indicating that it established a critical path, accurately identified float between activities, was integrated vertically and horizontally, sequenced all activities, or was subjected to a schedule risk analysis. Regarding the critical path, our analysis determined that the CATMT schedule does not identify a critical path for the entire program. Instead, the program is being accomplished in multiple 6-month spirals; thus, there is only a critical path for each software release, not for the program as a whole. Without a valid program-wide critical path, FAA management cannot determine which tasks, if they slip, will have the most detrimental effects on the project finish date. We also found that 68 percent of the remaining activities to be completed had unreasonably high float exceeding 1,000 days, meaning that those activities could slip about 5 work years without affecting the overall project finish date, a highly unlikely scenario. The accurate identification of the critical path and of float are inextricably linked: if the schedule is missing activities or activities are not correctly linked, float estimates will be miscalculated, resulting in an invalid critical path.
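To make that relationship concrete, here is a minimal critical-path and float computation in Python. The activities and durations are invented, and the sketch shows the standard method in general, not FAA’s or its contractor’s scheduling tools.

```python
# Minimal critical-path-method sketch (hypothetical activities, not an FAA
# schedule). The forward pass computes earliest dates, the backward pass
# computes latest dates, and float is their difference; zero-float activities
# form the critical path. Unlinked activity E shows how missing logic
# inflates float.

activities = {      # name: (duration_in_days, predecessors)
    "A": (10, []),
    "B": (20, ["A"]),
    "C": (5,  ["A"]),
    "D": (15, ["B", "C"]),
    "E": (8,  []),  # no logic links at all: its float becomes meaningless
}

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for name, (dur, preds) in activities.items():  # dict order is topological here
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

project_finish = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS).
lf, ls = {}, {}
for name in reversed(list(activities)):
    successors = [s for s, (_, preds) in activities.items() if name in preds]
    lf[name] = min((ls[s] for s in successors), default=project_finish)
    ls[name] = lf[name] - activities[name][0]

for name in activities:
    total_float = ls[name] - es[name]
    status = "critical" if total_float == 0 else f"float = {total_float} days"
    print(f"{name}: ES={es[name]:>2}, EF={ef[name]:>2}, {status}")
# A, B, and D form the 45-day critical path; unlinked E shows 37 days of
# float, analogous to the inflated float in the schedules we reviewed.
```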
Without a schedule that can produce a true critical path, the program office will be positioned neither to provide reliable timeline estimates nor to identify when problems or changes may occur and determine the impact they may have on subsequent work. CATMT program officials acknowledged that the schedule did not include a program-wide critical path but noted that a critical path exists for individual segments of the program. They also noted that a schedule risk analysis was not performed because it was not a contractual deliverable.

Because the SWIM schedule did not fully or substantially meet any of GAO’s nine scheduling best practices, we found it to be unreliable. The SWIM program differs from the others in that it is an aggregation of NextGen acquisition programs, each developing an aspect of the SWIM information-sharing capability. Because SWIM program managers rely on schedule information from a number of other programs, SWIM schedule integration is particularly important. However, our analysis found that the SWIM schedule was not, by any measure, fully integrated because it provided only a synopsis of the individual system-implementing program schedules and, thus, did not fully represent the work required to complete the overall SWIM program. This resulted in unrealistic float calculations and invalid critical path calculations. In addition, the many missing activities not only degraded the schedule logic and the accuracy of durations but also made the accurate allocation of resources and the comprehensive integration of schedule activities, both horizontally and vertically, impossible. We also noted that FAA made no effort to identify a program-wide critical path. Program officials said that because the system-implementing program schedules each have their own critical path, involve disparate capabilities, and are independent of one another, their individual critical paths are not accessible through the SWIM schedule software and therefore are not used for overall SWIM program management. We believe that the SWIM program itself should have its own critical path that includes, at a minimum, acceptance of major deliverables from the system-implementing program schedules. Without a program-wide critical path, management does not have a clear picture of the underlying project tasks that must be performed to achieve the overall program’s target completion date. Finally, although no risk analysis was conducted on this schedule, we found that this best practice was minimally met because a risk analysis was conducted on a separate but related schedule, and the SWIM program office considered risk to some extent.

As with the other three programs, we found the WAAS program schedule prepared by FAA to be unreliable because it did not fully or substantially meet any of GAO’s nine scheduling best practices; however, we also reviewed the contractor’s schedule for the same segment and found that it fully or substantially met six best practices. For example, FAA’s WAAS program schedule did not fully sequence activities in the order in which they are to be carried out. More specifically, the WAAS program schedule showed that nearly half of the remaining activities were missing sequencing logic, causing us to question the calculated dates of activities. Logic is necessary for a schedule to show program managers when activities are expected to start and finish; when logic is missing, activity dates cannot adjust correctly to changes in activities.
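The consequence of a missing link is easy to see in a small hypothetical example (ours, not drawn from the WAAS schedule): stretching a linked task’s duration pushes its successor’s dates, while the same change to an unlinked task is invisible downstream.

```python
# Hypothetical miniature of a duration-extension test (not the WAAS schedule):
# with a logic link, stretching a task pushes its successor's dates; without
# one, downstream dates silently stay put.

def forward_pass(schedule):
    """Earliest finish for each activity given (duration, predecessors)."""
    es, ef = {}, {}
    for name, (dur, preds) in schedule.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    return ef

linked   = {"design": (30, []), "build": (60, ["design"])}
unlinked = {"design": (30, []), "build": (60, [])}  # missing logic link

for label, sched in (("linked", linked), ("unlinked", unlinked)):
    before = forward_pass(sched)["build"]
    sched["design"] = (1500, sched["design"][1])    # artificially extend duration
    after = forward_pass(sched)["build"]
    print(f"{label}: build finishes day {before} -> day {after}")
# linked:   build finishes day 90 -> day 1560  (dates adjust correctly)
# unlinked: build finishes day 60 -> day 60    (the change is invisible)
```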
To test the ability of the schedule to update its dates dynamically in response to changes, we artificially extended the duration of one activity to 1,500 days, which changed that activity’s finish date. However, the duration extension had no effect on successor activities because the activity is not tied to any successors. Extending the duration to 1,500 days also pushed the project’s planned finish date from September 22, 2016, to June 29, 2017; however, because the logic links are not in place, we questioned whether the projected finish date under this scenario is reliable. Moreover, the WAAS program schedule had too many artificial constraints, which were driving the start and finish dates for more than 70 percent of the remaining activities. Constraints are usually substitutes for logic and can mean that the schedule is not well planned or feasible. Constraints also greatly reduce the ability of the program to take advantage of possible time savings. Further, our analysis found that the schedule did not fully capture or assign resources to all government and contractor activities; it also did not accurately allocate resources or consistently establish the duration of activities. In addition, while WAAS program officials told us that the schedule was integrated vertically and horizontally, we did not find evidence of such integration.

Furthermore, we found that the WAAS program office’s schedule did not identify a critical path for the entire program. As noted earlier, critical path and float determinations are closely related. Our analysis of the WAAS program office schedule found that more than half of the remaining activities had float of more than 1,000 working days, which we believe to be unreasonably high. Without proper determination of float, management cannot reallocate resources from some tasks to others without adversely affecting the overall completion date. Although program officials said that they maintained a risk register listing the potential risks that could affect the schedule and adjusted the schedule for these risks, we did not find evidence that the program office had conducted a risk analysis of its schedule.

While the schedule prepared by the contractor did not fully or substantially meet three of the scheduling best practices, it fully or substantially met six: capturing, sequencing, assigning resources to, and establishing the duration of all activities; establishing the critical path; and identifying reasonable float between activities. For example, our analysis found that all activity durations were consistently estimated in days and adhered to a standard 5-day workweek that accounts for holidays, and no activities were scheduled to begin on a weekend. Officials from the contractor said duration estimates for the schedule are based on historical information from past performance, comparable releases, lessons learned, similar work, and other data requirements. In addition, our analysis traced several critical paths in the schedule. Though we found minor interruptions in the various critical paths, the schedule’s logic, reasonable durations, and low total float estimates allow the calculation of a valid critical path. As noted, FAA did not perform a complete schedule risk analysis for any of the four programs we reviewed and, thus, cannot estimate these programs’ completion dates with confidence.
A schedule risk analysis, which is one of our best practices for program scheduling, uses statistical techniques to predict a level of confidence in meeting a program’s completion date. The objective of the analysis is to develop a probability distribution of possible completion dates that reflects the project and its identified risks. This analysis can help program managers both understand the most important risks to the program and focus on mitigating those risks. Other federal agencies, including the Department of Defense and the National Aeronautics and Space Administration, require schedule risk analyses for major acquisitions; the Department of Veterans Affairs, in response to a GAO recommendation, requires schedule risk analyses for major construction projects.

We conducted a schedule risk analysis using the WAAS contractor’s schedule; the other three schedules did not have the information required to conduct such an analysis. The contractor’s risk register showed four potential risks to the project. We then conducted interviews with FAA program and contractor staff and asked them to discuss other potential risks to the project, including how each risk would affect the project’s timeline and the likelihood of its occurring. Using this information, we identified an additional 16 risks, for a total of 20. The fact that our interviews identified a relatively large number of new risks could be an indication that the contractor did not systematically analyze the full range of risks when developing the program’s risk register. We then consolidated the 20 risks into 14 broader risks and tested how each would affect the duration of specific activities in the schedule. We then ran a Monte Carlo simulation, which consisted of the computer-generated results of 3,000 estimates of the future schedule based on the activities in the schedule, the chance that some of the activities would be affected by some risks, and the predicted effect of those risks on the duration of each activity.

We then analyzed the potential impact of the risks on the program schedule. Since risks can affect the schedule in various ways—for example, risks can have a large impact on the durations of the activities they affect, or they can introduce critical paths that differ from the baseline critical path—we analyzed the marginal impact of each of the risks we identified to determine which would have the greatest effect on the overall schedule. We found the following three key risks to the program, only the first of which (limited WAAS program office resources) was originally identified by the contractor: limited WAAS program office resources, such as staffing; delays in software yet to be released and additional changes to software already released and in use; and a potentially optimistic schedule completion date.

Our schedule risk analysis showed that the completion of the segment of the WAAS program covered by the schedule could slip by as much as 2 months. Specifically, the analysis showed that there is less than a 5 percent probability that the program segment will be completed by September 6, 2012, the current baselined date for completion. However, it appears that the segment will be completed close to that deadline: we found a 50 percent probability that the program segment will be completed by October 23, 2012 (about 1.5 months after the current estimated date for completion) and an 80 percent probability that it will be completed by November 13, 2012 (about 2.25 months after the current estimated date for completion).
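A schedule risk analysis of this kind can be prototyped compactly. The Python sketch below is ours, with invented activities, risk probabilities, and impact ranges; it is not the model we ran against the WAAS contractor schedule. It draws 3,000 simulated futures and reads off completion times at the 5, 50, and 80 percent confidence levels, mirroring the form of the results reported above.

```python
# Illustrative Monte Carlo schedule risk sketch (all numbers invented):
# sample risk-adjusted durations 3,000 times, roll each sample through a
# forward pass, and report percentile completion times.

import random
random.seed(0)  # reproducible runs

# name: (nominal_duration_days, predecessors)
schedule = {"code": (40, []), "test": (30, ["code"]), "deploy": (10, ["test"])}

# (probability_of_occurring, affected_activity, (min_extra_days, max_extra_days))
risks = [
    (0.5, "code",   (5, 30)),  # e.g., additional changes to released software
    (0.3, "test",   (5, 20)),  # e.g., limited program office staffing
    (0.2, "deploy", (2, 10)),  # e.g., site readiness issues
]

def finish(durations):
    """Forward pass: project finish given per-activity durations."""
    ef = {}
    for name, (_, preds) in schedule.items():
        ef[name] = max((ef[p] for p in preds), default=0) + durations[name]
    return max(ef.values())

results = []
for _ in range(3000):
    durations = {name: dur for name, (dur, _) in schedule.items()}
    for prob, activity, (lo, hi) in risks:
        if random.random() < prob:          # does this risk occur in this future?
            durations[activity] += random.uniform(lo, hi)
    results.append(finish(durations))

results.sort()
for pct in (5, 50, 80):
    days = results[int(len(results) * pct / 100)]
    print(f"{pct}% confidence: done within {days:.0f} days (nominal is 80)")
```

The spread between the nominal finish and the 80 percent figure is, in effect, the contingency that the identified risks imply.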
Although we did not conduct a schedule risk analysis for other FAA programs, the results of our analysis provide examples of the types of risks that major acquisition programs face and the impact those risks can have on meeting acquisition program milestones, especially given the interrelation and interdependencies among NextGen acquisitions discussed earlier. More information on our schedule risk analysis can be found in appendix V.

FAA has begun developing an integrated master schedule for the entire NextGen initiative that would, in part, capture related NextGen program schedules, governance activities, and other performance and financial data to provide real-time monitoring of the overall NextGen effort. However, the unreliability of the schedules for the four programs we reviewed, which are integral to the NextGen initiative, puts this high-level master schedule at risk. A reliable integrated master schedule would enable FAA to determine how delays in one program affect other programs and the overall NextGen implementation timeline. While it is encouraging that FAA is beginning to develop an integrated NextGen master schedule, the effort could be hampered by the lack of schedule integration at the program level, as well as by the failure of individual program schedules to meet best practices. For example, since FAA does not perform schedule risk analyses on individual programs, it cannot predict with confidence whether any of the programs will be completed on time. Therefore, the integrated master schedule for NextGen would be built on schedules that may not reflect accurate program completion dates. Similarly, none of the four schedules we reviewed, which were for segments of entire programs, reflected how tasks for the segment affect milestones for the entire program. Without integrated schedules at the program level, an integrated master schedule at the NextGen initiative level would be problematic.

In response to our review of the extent to which the four selected acquisition programs met best practices for cost estimates and schedules, FAA provided information on steps it is taking to improve its processes for both cost estimates and schedules and noted that some of the cost estimates and schedules we reviewed were developed before the improvements were in place. FAA stated that strengthening its cost estimation process is part of the seven key acquisition processes it has developed: program management, contractor management, requirements, risk management, measurement and analysis, verification and validation, and quality assurance. FAA stated that it has updated its Guidelines for FAA Cost Estimating to be consistent with the GAO Cost Guide, filling in gaps that it had identified during a comparison of its practices with those contained in the Cost Guide. As of November 2011, 11 of the 12 best practices were addressed in the guidelines. According to FAA officials, the remaining best practice—creating independent cost estimates—is unlikely to be implemented at FAA in the foreseeable future because FAA believes the resources required to create independent estimates are prohibitive in the current budget environment. FAA has more than tripled the number of cost estimators in the Investment Planning and Analysis organization, many of whom work with the acquisition program offices to provide guidance on preparing estimates.
Additionally, as part of FAA’s effort to improve acquisition certification and training, the agency is preparing to launch a cost estimating certification program. FAA believes that the certification program, coupled with a competency-based training program, will enhance the skills of FAA cost estimators and improve their consistency. In describing its efforts to improve schedules, FAA stated that it views the development and maintenance of integrated schedules as an inherent and critical part of its seven key acquisition functions. FAA noted that its standard process for acquisition schedules includes toolkits that require programs to develop integrated program schedules addressing all nine of GAO’s best practices. FAA stated that the current procedures for developing best practices were not fully in place when the four programs we reviewed began the implementation phase.

FAA has made improvements in its management of air traffic control modernization acquisitions: most of the 30 programs we reviewed are currently within their original cost estimates, and half are on schedule. FAA is also taking steps to address past issues and ensure that cost estimates and schedules are more accurate in the future, including incorporating best practices into its acquisitions guidance and policies. Nevertheless, our review of FAA’s acquisitions found that the agency has yet to fully implement several GAO-identified best practices or follow others. Following best practices is particularly important for FAA, which must manage the large, complex, and interdependent acquisitions associated with NextGen. Imprecise cost estimates can result in Congress appropriating millions of dollars for projects based on estimates that prove to be inaccurate, and program schedule delays can increase costs and affect the implementation of interdependent programs. In such cases, FAA will be forced to reduce the scope of the programs to stay within the original estimates, or Congress will need to appropriate unanticipated funds to complete the programs. Delays and cost increases in individual programs could have a cascading effect on other programs and ultimately affect FAA’s timelines and goals for NextGen implementation.

Our analysis of the cost estimates and schedules for the four programs we reviewed indicates that FAA needs to further develop requirements for critical cost estimation and scheduling procedures. Independent cost estimates can improve the accuracy and credibility of cost estimates and better ensure that programs will be completed within budget. A schedule risk analysis can help FAA determine the likelihood that a program will be completed on time. FAA stated that it has no immediate plans to conduct independent cost estimates due to current budgetary constraints. We recognize that conducting independent cost estimates and schedule risk analyses takes both financial resources and time and that it may be appropriate to limit one or both of these analyses to instances where a program is particularly costly, complex, or on a compressed schedule. However, conducting independent cost estimates, schedule risk analyses, and the other analyses called for in our best practices can not only help minimize the risk of cost overruns and schedule delays but also provide FAA, congressional decision makers, and other stakeholders with important information about these critical acquisitions. It is also important that FAA develop master schedules at the individual acquisition program level.
FAA’s lack of a fully integrated master schedule for the programs we reviewed hampers its ability to provide accurate information on the schedules for these programs. This information will be needed as FAA simultaneously works to develop an integrated master schedule for the overall NextGen initiative. The use of an integrated master schedule can assist FAA in monitoring a program, identifying problems that could affect later stages of the program’s implementation, improving the accuracy of cost estimates and schedules for individual programs, and improving the accuracy of the information FAA is compiling to monitor the costs and schedules for the NextGen initiative.

FAA has incorporated into its acquisition guidelines 11 of our 12 steps that are associated with the characteristics of a high-quality and reliable cost estimate. However, our analysis of the four major programs indicates that FAA has not adequately integrated all of the steps into its cost estimation processes for these programs, and thus the estimates are not reliable. Similarly, although FAA addresses our nine scheduling best practices in its acquisition guidelines, our analysis of the schedules for the four programs indicates that the schedules do not adequately follow these best practices and are not reliable. Although the cost estimates and schedules for some of the four programs were developed prior to FAA’s revision of its acquisition guidelines, our work shows that FAA needs to assess its major acquisition programs to understand whether its guidelines and other best practices are, in fact, being followed. Such an assessment would then allow FAA to better ensure that best practices for cost estimates and schedules are being applied.

To improve cost estimates and schedules for NextGen and other major air traffic control acquisition programs, GAO recommends that the Secretary of Transportation direct FAA to take the following three actions when appropriate for major acquisition programs, based on a program’s cost, schedule, complexity, and risk: (1) conduct independent cost estimates and schedule risk analyses for major acquisition programs; (2) require a fully integrated master schedule for each major acquisition program, including those that are components of NextGen (an integrated master schedule should horizontally and vertically link all program activities and milestones, including government and contractor schedules and program segments); and (3) conduct an assessment of major acquisition programs to ensure that they meet all of the established best practices for cost estimates and schedules contained in GAO guidance. Given constrained budgets, FAA should determine which programs should be subject to these recommendations, such as those that are particularly costly, complex, or on a compressed schedule.

We provided a draft of this report to the Department of Transportation for review and comment. DOT and FAA responded by email and did not state whether they agreed or disagreed with our recommendations. DOT provided comments on the results of our analysis of the cost estimates and schedules for the four programs we reviewed in depth. In response to our finding that the ADS-B, CATMT, SWIM, and WAAS estimates lacked credibility largely because FAA did not obtain an independent cost estimate for any of the programs and provided little evidence of sensitivity or risk analyses, FAA stated that it is not convinced that an independent organization would reduce the uncertainty of cost estimates.
FAA noted that it does not have an independent organization such as the Department of the Navy’s Center for Cost Analysis. However, FAA stated that the Finance Organization within ATO assessed the ADS-B program office’s Basis of Estimate as part of the JRC Decision and that this level of independence, combined with specific entry and exit criteria, allowed the program offices to manage these acquisitions so that costs were controlled, risks mitigated, and technical parameters achieved, while adhering to the planned milestone schedule. We agree that the Finance Organization’s assessment of the two cost estimates provided some degree of independence and may have improved the accuracy of the ADS-B estimates, but it is not clear that such an independent review would guarantee similar results for other programs. As we stated in the report, such an independent cost review is less rigorous than an independent cost estimate. According to our cost guide, an independent cost estimate is often more accurate because the estimating team is further removed from the program office and less prone to accept overly optimistic assumptions or be burdened by organizational bias. DOT also provided technical clarifications, which we incorporated into the report as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Transportation, and the Acting Administrator of FAA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. In response to a congressional request, we examined the Federal Aviation Administration’s (FAA) ability to modernize, upgrade, and replace the National Airspace System’s (NAS) facilities and equipment to meet projected increases in traffic volumes, enhance the system’s safety, and increase the efficiency of the air traffic control (ATC) system—a principal component of the NAS. FAA’s ATC acquisitions are critical to maintaining the NAS and transitioning to the Next Generation Air Transportation System (NextGen) over the next 10 years. Given that some key legacy and NextGen acquisitions have experienced schedule delays and cost overruns, which may jeopardize the timely implementation of NextGen, we (1) determined whether the planned costs and schedules of current FAA ATC acquisition programs have changed since they were first submitted to Congress; (2) examined the reasons for any changes in planned costs and schedules; and (3) assessed the extent to which select ATC programs adhered to best practices for determining acquisition costs and schedules. To describe any changes in the costs and schedules of the current 30 FAA capital ATC acquisitions, we gathered and analyzed agency data on the estimated costs and schedules of these acquisitions. We drew upon past work in which we undertook detailed reviews of the status of ATC and other acquisition programs and obtained updated documentation as necessary from FAA. We interviewed FAA officials to obtain information on FAA’s acquisition process and summarized the status of all acquisitions, including FAA’s original and current cost estimates and completion dates.
For baselined acquisitions, we compared estimated costs when they were submitted to Congress for approval against their current estimates, and we analyzed planned and actual schedules. To determine the reasons for changes in cost estimates and schedules, we interviewed FAA officials and FAA contractors and reviewed acquisition documentation. We analyzed information on cost increases and delays to determine whether systematic issues exist that affect other FAA acquisitions. To determine the extent to which select ATC programs adhered to best practices for determining acquisition costs and schedules, we conducted an in-depth review of 4 of the 30 acquisition programs: the Automatic Dependent Surveillance-Broadcast (ADS-B) system, the Collaborative Air Traffic Management Technologies (CATMT) system, the System Wide Information Management (SWIM) system, and the Wide Area Augmentation System (WAAS). We selected these four acquisitions based on the following criteria: (1) the acquisition has an approved baseline, (2) the acquisition is at a point in the acquisition process where risks can be identified, and (3) the acquisition is key to NextGen and legacy systems. In addition to conducting interviews, we collected documentation, and we analyzed and summarized the views and information collected. We also identified best practices that FAA could adopt or strengthen to improve its acquisition cost estimating and scheduling, and we assessed whether the acquisitions followed the cost and schedule best practices outlined in our Cost Estimating and Assessment Guide (GAO-09-3SP). We also performed a schedule risk analysis of the WAAS program to determine the likelihood of the project finishing on schedule. Our analysis of the cost estimates was based on the four characteristics of a high-quality and reliable cost estimate: well-documented, comprehensive, accurate, and credible.

Well-documented: The cost estimate should be thoroughly documented so that a reviewer can reach a conclusion about whether the cost estimate is reasonable. Therefore, a good cost estimate—while taking the form of a single number—is supported by detailed documentation that describes how it was derived and how the expected funding will be spent in order to achieve a given objective. For example, the documentation should capture in writing such things as the source data used and their significance, the calculations performed and their results, and the rationale for choosing a particular estimating method or reference. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to and verified against their sources. Finally, the cost estimate should be reviewed and accepted by management to ensure there is a high level of confidence in the estimate and the estimating process.

Comprehensive: The cost estimate should include both government and contractor costs of the project over its full life cycle, from inception through design, development, deployment, operation, and maintenance to retirement of the project. The cost estimate should be structured in sufficient detail to ensure that cost elements are neither omitted nor double counted, and it should document all cost-influencing ground rules and assumptions.

Accurate: The cost estimate should provide results that are unbiased, and it should not be overly conservative or optimistic. Estimates are accurate when they are based on an assessment of most likely costs, adjusted properly for inflation, and contain few, if any, minor mistakes. In addition, the estimate should be updated regularly to reflect material changes in the project, such as when schedules or other assumptions change, so that it always reflects the project’s current status.
Among other things, the estimate should be grounded in documented assumptions and a historical record of cost estimating and actual experiences on other comparable projects.

Credible: The cost estimate should discuss any limitations of the analysis because of uncertainty or biases surrounding data or assumptions. Major assumptions should be varied, and outcomes recomputed, to determine how sensitive results are to changes in the assumptions. Risk and uncertainty analysis should be performed to determine the level of risk associated with the estimate. Furthermore, the estimate’s results should be cross-checked, and an independent cost estimate conducted by a group outside the acquiring organization should be developed to determine whether other estimating methods produce similar results.
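To make the sensitivity analysis just described concrete, the following is a minimal sketch: one major assumption (here, an annual inflation rate) is varied and the outcome recomputed. The program cost, period of performance, and rates are hypothetical and are not drawn from any of the four programs.

```python
# Minimal sensitivity-analysis sketch: vary one major assumption (inflation)
# and recompute the estimate to see how sensitive the result is to it.
# All figures are hypothetical.
base_cost = 1_200.0   # program estimate, millions of current-year dollars
years = 5             # assumed period of performance

for inflation in (0.02, 0.03, 0.05):   # low / baseline / high assumptions
    then_year = base_cost * (1 + inflation) ** years
    print(f"inflation {inflation:.0%}: then-year estimate = ${then_year:,.0f} million")
```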
After reviewing documentation submitted by FAA and information obtained during interviews, we determined the extent to which the cost estimates for the four projects we reviewed met these characteristics of cost-estimating best practices. Our review of project schedules was based on research that identified a range of best practices associated with effective schedule estimating. In addition, we obtained the consulting services of David Hulett, Ph.D., to assist in our risk analysis of the WAAS project schedule. We also conducted multiple interviews with project managers, contractors, and schedulers to determine the extent to which current project schedules met the best practices criteria. These nine practices are:

Capturing all activities: The schedule should reflect all activities (steps, events, outcomes, and other factors) as defined in the project’s work breakdown structure, including activities to be performed by both the government and its contractors.

Sequencing all activities: The schedule should be planned so that it can meet project-critical dates. To meet this objective, activities need to be logically sequenced in the order that they are to be carried out. In particular, activities that must finish prior to the start of other activities (i.e., predecessor activities) and activities that cannot begin until other activities are completed (i.e., successor activities) should be identified. Identifying interdependencies among activities that collectively lead to the accomplishment of events or milestones can be used as a basis for guiding work and measuring progress.

Assigning resources to all activities: The schedule should realistically reflect what resources (i.e., labor, material, and overhead) are needed to do the work, whether all required resources will be available when they are needed, and whether any funding or time constraints exist.

Establishing the duration of all activities: The schedule should reflect how long each activity will take to execute. In determining the duration of each activity, the same rationale, data, and assumptions used for cost estimating should be used for preparing the schedule. Furthermore, these durations should be as short as possible and should have specific start and end dates. An excessively long period needed to execute an activity should prompt further decomposition of the activity so that shorter execution durations result.

Integrating schedule activities horizontally and vertically: The schedule should be horizontally integrated, meaning that it should link the products and outcomes associated with already sequenced activities. These links are commonly referred to as “hand-offs” and serve to verify that activities are arranged in the right order to achieve aggregated products or outcomes. The schedule should also be vertically integrated, meaning that traceability exists among varying levels of activities and supporting tasks and subtasks. Such mapping or alignment among levels can enable different groups to work to the same master schedule.

Establishing the critical path for all activities: With the use of scheduling software, the critical path—the longest-duration path through the sequenced list of activities—should be identified. The establishment of a project’s critical path is necessary for examining the effects of delays in any activity along this path. Potential problems that may occur on or near the critical path should also be identified and reflected in the scheduling of the time for high-risk activities (see the next practice, “Identifying reasonable float”).

Identifying reasonable float: The schedule should identify float—the time that a predecessor activity can slip before the delay affects successor activities—so that schedule flexibility can be determined. As a general rule, activities along the critical path typically have the least amount of float.

Conducting a schedule risk analysis: A schedule risk analysis uses a good critical path method schedule and data about project schedule risks, as well as Monte Carlo simulation techniques, to predict the level of confidence in meeting a project’s completion date, the amount of time contingency needed for a given level of confidence, and the identification of high-priority risks. This analysis should focus not only on critical path activities but also on other schedule paths that may become critical. A schedule/cost risk assessment recognizes the interrelationship between schedule and cost and captures the risk that schedule durations and cost estimates may vary for a variety of reasons, including limited data, optimistic estimating, technical challenges, lack of qualified personnel, and other external factors. As a result, the baseline schedule should include a buffer, or reserve of extra time, for contingencies, calculated by performing a schedule risk analysis. As a general rule, the reserve should be held by the project manager and applied as needed to those activities that take longer than scheduled because of the identified risks. Reserves of time should not be apportioned in advance to any specific activity, since the risks that will actually occur and the magnitude of their impact are not known in advance.

Updating the schedule using logic and durations to determine the dates: The schedule should use logic and durations in order to reflect realistic start and completion dates for project activities. The schedule should be continually monitored to determine when forecasted completion dates differ from the planned dates; this information can be used to determine whether schedule variances will affect downstream work. Maintaining the integrity of the schedule logic is not only necessary to reflect the project’s true status but is also required before conducting a schedule risk analysis. The schedule should avoid logic overrides and artificial constraint dates that are chosen to create a certain result on paper. Individuals trained in critical path method scheduling should be responsible for updating the schedule.
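Two of these practices, establishing the critical path and identifying float, can be illustrated with a minimal sketch using the standard forward and backward passes of the critical path method. The activity network, names, and durations below are hypothetical; this is our illustration, not FAA's scheduling tooling.

```python
# Minimal critical path method sketch: a forward pass computes earliest
# start/finish dates, a backward pass computes latest start/finish dates,
# and total float is the difference. Zero-float activities form the
# critical path. Activities and durations are hypothetical.

activities = {            # activity: (duration in days, predecessors)
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

# Forward pass (dict preserves insertion order; predecessors listed first).
es, ef = {}, {}
for act, (dur, preds) in activities.items():
    es[act] = max((ef[p] for p in preds), default=0)
    ef[act] = es[act] + dur

project_finish = max(ef.values())

# Backward pass, visiting activities in reverse order.
ls, lf = {}, {}
for act in reversed(list(activities)):
    dur, _ = activities[act]
    succs = [s for s, (_, ps) in activities.items() if act in ps]
    lf[act] = min((ls[s] for s in succs), default=project_finish)
    ls[act] = lf[act] - dur

for act in activities:
    total_float = ls[act] - es[act]
    marker = "  <- on the critical path" if total_float == 0 else ""
    print(f"{act}: float = {total_float} day(s){marker}")
```

In this toy network the path A-B-D has zero float and drives the 12-day finish, while activity C can slip 3 days before it affects any successor.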
Based on our work, we determined the extent to which estimates and schedules for the four projects we selected met each best practices criterion:

Not Met—project officials provided no evidence that satisfies any portion of the criterion.

Minimally Met—project officials provided evidence that satisfies a small portion of the criterion.

Partially Met—project officials provided evidence that satisfies about half of the criterion.

Substantially Met—project officials provided evidence that satisfies a large portion of the criterion.

Met—project officials provided evidence that satisfies the entire criterion.

We conducted this performance audit from August 2010 to February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix contains detailed information for 30 individual air traffic control programs. Each overview presents information and data that were provided by FAA. The overviews provide a description of the program and the cost and schedule status. The overviews are based on program office reported information as of August 2011. In most cases, we did not validate the data provided, but we reviewed the data and performed various checks to determine that they were sufficiently reliable for our purposes. This appendix provides the results of our analysis of the extent to which the processes and methodologies used to develop and maintain the four FAA cost estimates meet the characteristics of high-quality cost estimates. These characteristics incorporate the 12 steps that are consistently applied by cost-estimating organizations throughout the federal government and industry, that are considered best practices for developing cost estimates, and that are listed in table 2 of this report. The following tables provide the detailed results of our analysis of the program cost estimates for Automatic Dependent Surveillance-Broadcast (ADS-B), Collaborative Air Traffic Management Technologies (CATMT), System Wide Information Management (SWIM), and Wide Area Augmentation System (WAAS). “Not met” means the program provided no evidence that satisfies any portion of the criterion. “Minimally met” means the program provided evidence that satisfies a small portion of the criterion. “Partially met” means the program provided evidence that satisfies about half of the criterion. “Substantially met” means the program provided evidence that satisfies a large portion of the criterion. “Fully met” means the program provided evidence that completely satisfies the criterion. This appendix provides the results of our analysis of the extent to which the processes and methodologies used to develop and maintain four FAA integrated master schedules meet nine best practices associated with effective schedule estimating. The following tables provide the detailed results of our analyses of the schedules for the Automatic Dependent Surveillance-Broadcast (ADS-B), Collaborative Air Traffic Management Technologies (CATMT), System Wide Information Management (SWIM), and Wide Area Augmentation System (WAAS) programs compared to the nine best practices. “Not met” means the program provided no evidence that satisfies any portion of the criterion.
“Minimally met” means the program provided evidence that satisfies a small portion of the criterion. “Partially met” means the program provided evidence that satisfies about half of the criterion. “Substantially met” means the program provided evidence that satisfies a large portion of the criterion. “Fully met” means the program provided evidence that satisfies the entire criterion. A best practice that the WAAS contractor schedule did not meet is conducting a schedule risk analysis, which is not required by FAA’s schedule specifications. FAA officials told us that they do not conduct schedule risk analyses. In August and September 2011, we performed our own schedule risk analysis on the latest version of the WAAS contractor schedule available to us at the time of the analysis. A schedule risk analysis uses statistical techniques to predict a level of confidence in meeting a program’s completion date. This analysis focuses on critical path activities as well as on near-critical and other activities, since any activity may potentially affect the program’s completion date. The objective of the simulation is to develop a probability distribution of possible completion dates that reflects the program and its quantified risks. From the cumulative probability distribution, the organization can match a date to its degree of risk tolerance. For instance, an organization might want to adopt a program completion date that provides a 70 percent probability that the program will finish on or before that date, leaving a 30 percent probability that it will extend beyond, or overrun, that date, given the schedule and the risks. The organization can thus adopt a plan consistent with its desired level of confidence in the overall integrated schedule. This analysis can give valuable insight into what-if drills and quantify the effects of program changes. In developing a schedule risk analysis, probability distributions for each activity’s duration have to be established. Furthermore, risk in all activities must be evaluated and included in the analysis. Some managers focus only on the critical path, but because we cannot be certain how long activities will take, we cannot know the true critical path. Consequently, it would be a mistake to focus only on the software-calculated critical path (those activities that, if delayed, will negatively affect the overall project completion date) when some off-critical-path activity might become critical if a risk were to occur. Typically, three-point estimates—that is, best, most likely, and worst-case estimates—are used to develop the probability distributions for the duration of workflow activities. Once the distributions have been established, a Monte Carlo simulation uses random numbers to select specific durations from each activity probability distribution and calculates a new critical path and dates, including major milestone and program completion dates. The Monte Carlo simulation continues this random selection thousands of times, creating a new program duration estimate and critical path each time. The resulting frequency distribution displays the range of program completion dates along with the probabilities that these dates will occur.
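A minimal sketch of this kind of simulation follows, for a toy schedule with two merging paths. Durations are drawn from triangular best/most likely/worst distributions, and each iteration's completion is the later of the two paths. The network and all durations are hypothetical; this illustrates the general technique, not the consultant's WAAS model.

```python
# Minimal Monte Carlo schedule risk sketch: sample each activity's duration
# from a triangular three-point distribution, take the later of two merging
# paths as the iteration's completion, and read percentiles off the sorted
# results. All activities and durations are hypothetical.
import random

ITERATIONS = 3_000  # the WAAS analysis described here also used 3,000 iterations

def sample_days(best, likely, worst):
    # random.triangular takes (low, high, mode), so the mode goes last.
    return random.triangular(best, worst, likely)

finishes = []
for _ in range(ITERATIONS):
    path_a = sample_days(20, 25, 40) + sample_days(10, 12, 20)  # e.g., build + test
    path_b = sample_days(15, 22, 45)                            # e.g., parallel integration
    finishes.append(max(path_a, path_b))  # merge point: the latest path governs

finishes.sort()
for pct in (5, 50, 80):
    idx = int(ITERATIONS * pct / 100) - 1
    print(f"{pct}th percentile completion: day {finishes[idx]:.1f}")
```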
Table 17 provides a range of dates and the probability of the project’s completion on those dates or earlier, based on the 3,000 iterations chosen at random during our Monte Carlo simulation. For example, according to our schedule risk analysis, there is a less than 5 percent chance that the project will be finished on or before September 13, 2012. Likewise, there is an 80 percent chance that the project will be finished on or before November 13, 2012. Because completion on any date is uncertain, it is more realistic to show a range of possible completion dates than to focus on a single date. There is no international best practice standard for deciding which percentile to use for prudent scheduling. The chosen percentile depends on the riskiness and maturity of the project. For some projects, we emphasize the 80th percentile as a conservative promise-of-completion date. While the 80th percentile may appear overly conservative, it is a useful promise-of-completion date if a number of new but currently unknown risks (i.e., “unknown unknowns”) are anticipated. The 50th percentile date may expose the project to overruns. In the case of the WAAS contractor schedule, our analysis concluded that management should realistically expect cutover, or completion, between October 23, 2012, and November 13, 2012, the 50th and 80th percentiles, respectively. The artificial must-finish date constraint of September 6, 2012, built into the schedule is unlikely to be met. Our analysis shows that, with the current schedule and without risk mitigation, the probability of completion by September 6, 2012, is less than 5 percent. There are two reasons why the planned completion date is not likely to be met, according to the results of our schedule risk analysis. First, most risks are threats only. Only two opportunities were identified during the analysis: (1) the estimating error of the schedule may be between -10 percent and 15 percent, and (2) there is a 65 percent chance that the 11-day formal shadow test will not be needed. Second, there are parallel paths within the structure of the schedule that lead to merge points. If several paths converge to one milestone, the latest merging path determines the date. This “merge bias” cannot accelerate schedule dates and usually adds structural risk to the schedule. The contractor supplied six different risks that are currently identified in the project’s risk register. Using these risks as a basis for discussion, we interviewed 16 experts familiar with the project, including prime contractor officials, FAA officials, and technical FAA consultants, to identify any other risks. Each interviewee was asked four questions addressing four related points. First, the interviewee was asked to estimate the probability that an identified risk would occur on the project in such a way that some activity durations would be affected. The estimated probability was translated into the percentage of iterations chosen at random during the simulation. For example, if the expert estimates that weather will have a 10 percent chance of affecting some activities, then, on average, the weather risk will occur in 10 percent of the Monte Carlo iterations. Second, if the interviewee believed the identified risk was likely, the interviewee was asked to identify which activities’ durations would be affected. For example, activities related to steel erection or concrete pouring may be affected if the weather risk is realized and bad weather occurs. Third, after the interviewee identified affected activities, the interviewee was asked to provide a three-point estimate of the risk’s effect on duration—low, most likely, and high. Estimates were provided as percentages, which were applied to the activity durations in the Monte Carlo simulation if the risk occurred. For example, if bad weather occurs, the duration of a 10-day steel erection activity may be affected by a minimum of 110 percent, a most likely of 150 percent, or a maximum of 200 percent. These percentages translate into activity durations of 11 days under the low-risk scenario, 15 days under the most likely scenario, and 20 days under the maximum-risk scenario. If the risk is not realized, there is no change to the activity’s original estimated duration.
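A minimal sketch of how a single risk driver of this kind might enter the simulation, using the weather example above (a 10 percent probability of occurrence and a 110/150/200 percent three-point impact on a 10-day activity). The helper function and its name are ours, for illustration only, and are not part of the actual analysis.

```python
# Minimal risk-driver sketch: in each iteration the risk occurs with the
# estimated probability; if it occurs, the activity duration is multiplied
# by a factor sampled from the three-point (triangular) impact estimate.
import random

def risked_duration(base_days, prob, low_mult, likely_mult, high_mult):
    """Return one Monte Carlo iteration's duration for an activity."""
    if random.random() < prob:  # the risk occurs in roughly prob of iterations
        # random.triangular takes (low, high, mode).
        return base_days * random.triangular(low_mult, high_mult, likely_mult)
    return base_days            # risk not realized: original duration stands

# Weather risk on a 10-day steel erection activity: 10% chance, 110/150/200%.
samples = [risked_duration(10, 0.10, 1.10, 1.50, 2.00) for _ in range(3_000)]
print(f"mean duration with weather risk: {sum(samples) / len(samples):.2f} days")
```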
Finally, the interviewee was asked to identify any risks not yet accounted for. We began the interviews with 6 risks and, through the interview process, identified 16 more. During data analysis, some risks were consolidated with others or eliminated because of limited data. In all, 14 risks were identified and incorporated into the Monte Carlo simulation: 9 risk drivers, 4 existence risks, and 1 schedule duration risk. The final risk drivers used in the schedule risk analysis are as follows:

Some software Release 3B problem reports require testing with a live geostationary satellite.

The software simulation environment may not be able to test all requirements.

Potential difficulty repairing the Fullerton contractor’s lab safety computers.

Software delays in Release 3A and additional changes to Release 2B are likely to delay the Release 3B schedule.

Parallel implementation of Releases 2B, 2C, 3A, and 3B creates competition for FAA resources.

Release 3B problem report testing is more complex and subject to uncertainty.

FAA WAAS personnel move around, and inexperienced staff must be trained on WAAS.

The workshare strategy between FAA and its prime contractor affects the schedule through the learning curve for on-the-job training.

Changes to the simulation tools may be more difficult or more time-consuming to implement than anticipated.

The final existence risks are as follows:

FAA performs a formal software Release 3B shadow test.

A late problem report may come from FAA.

The FAA program office is resource-limited.

The Tijuana facility may not be ready because software requirements are misunderstood.

Inserting existence risk activities does not affect dates within the baseline schedule because the activities initially have zero duration. The activities have duration only if they happen to occur during an iteration of the simulation. The final uncertainty risk is that the schedule, without a buffer, may be optimistic. Most risks were identified by multiple respondents during the interviews. During data analysis, the interview data were combined and analyzed to create ranges and probabilities for each of the 14 risks. Because risk factors are multiplicative, several risks occurring on the same activity may overestimate the true risk. That is, by default in the Monte Carlo simulation, risks occur in a series, one after another, so that an activity that has several risks may be unrealistically extended if all risks occur. In reality, an activity may recover from two or more risks simultaneously, so that the actual risk is not multiplicative. Therefore, to avoid overestimation of risk, risks can explicitly be defined as occurring in parallel rather than in series. Risks that occur in a series will occur one after the other and add (or subtract) their respective effects on duration to the affected activity. If risks occur in parallel, on the other hand, only the maximum effect of all risks will affect the duration.
For example, if the risk of complexity adds 3 days to the duration of software development and the risk of staff shortages adds 4 days, then development will extend 7 days if the risks are defined in series. However, the duration will extend only 4 days if the risks are defined as parallel. Defining risks as parallel thus helps temper any overestimation of risk caused by multiplying risk factors. We defined one risk in series: software delays in Release 3A and additional changes to Release 2B are likely to delay the Release 3B schedule. All other risks were assumed to be parallel. Most risks were assigned directly to existing activities in the schedule. However, some risks required adjustments to the schedule. These adjustments involved replacing lags with activities and inserting existence risk activities.

Lags: During our initial analysis of the contractor’s schedule, we identified 25 remaining activities with lags and 21 remaining activities with leads (negative lags). While lags represent the passing of time between activities, they are often misused to put activities on a specific date or to insert a buffer for risk. Those numbers were reduced to 20 activities with lags and 6 activities with leads in the “Rev 1” schedule, which the prime contractor altered for our use in the schedule risk analysis. We replaced the 33-day lag between the R3B Start Cutover and R3B End Cutover activities with an actual activity, ID 258 “Cutover Task.” Replacing lags with actual activities does not affect dates within the baseline schedule because the activities have the same duration as the lags.

Existence risk: We identified some risks that would add an indeterminate amount of time to the overall schedule if they were realized. For example, if the Tijuana facility is not ready because software requirements are misunderstood, it could add 4 to 26 days to the schedule in the form of additional facility work.

Prioritizing risks and risk mitigation: Risks can affect the schedule in several ways: they can have a high probability of occurring, they can have a large percentage impact on the durations of the activities they affect, or they can apply to risk-critical paths, which may differ from the baseline deterministic critical path. Beyond applying the 14 risks to the schedule, we were interested in identifying the marginal impact of each risk. That is, we were interested in identifying which risks have the largest impact on the schedule, because these were the risks that should be targeted first for mitigation. To find the marginal impact of a risk on the total project risk at a certain percentile, the Monte Carlo simulation was performed with the risk removed. The difference between the finish dates of the simulation with all the risks and the simulation with the missing risk yielded the marginal impact of the risk. Table 18 gives the priority of risks at the 80th percentile and the marginal impact of each risk. The marginal impact directly translates to potential calendar days saved if the risk is mitigated. Once risks are prioritized at the percentile desired by management, a risk mitigation workshop can be implemented to deal with the high-priority risks in order. The prioritized list of risks will form the basis of the workshop, and risk mitigation plans can be analyzed using the risk model to determine how much time might be saved. Project managers cannot expect to completely mitigate any one risk, and it is not reasonable to expect to mitigate all risks. In addition, risk mitigation will add to the project budget. However, some opportunities may be available to partially mitigate risks.
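A minimal sketch tying together the series-versus-parallel combination of risk effects and the marginal-impact calculation just described. The 3-day and 4-day effects are the example values used above; the probabilities, the 30-day base duration, and the helper names are hypothetical simplifications (effects are modeled as added days rather than percentage factors).

```python
# Minimal sketch: combining concurrent risk effects in series (sum) versus
# parallel (maximum), then estimating a risk's marginal impact by re-running
# the simulation with that risk removed. All inputs are hypothetical.
import random

def extension(effects_days, parallel):
    """Combine the duration effects of the risks that occurred in one iteration."""
    if not effects_days:
        return 0
    return max(effects_days) if parallel else sum(effects_days)

# Series: complexity (3 days) then staff shortage (4 days) -> 7 extra days.
print(extension([3, 4], parallel=False))  # 7
# Parallel: only the largest effect governs -> 4 extra days.
print(extension([3, 4], parallel=True))   # 4

def p80_finish(base_days, risks, skip=None, iters=3_000):
    """80th-percentile duration, optionally with one risk removed (mitigated)."""
    results = []
    for _ in range(iters):
        effects = [days for name, (prob, days) in risks.items()
                   if name != skip and random.random() < prob]
        results.append(base_days + extension(effects, parallel=True))
    return sorted(results)[int(iters * 0.8) - 1]

risks = {"complexity": (0.5, 3), "staffing": (0.4, 4)}
with_all = p80_finish(30, risks)
# Marginal impact: calendar days saved at the 80th percentile if one risk is mitigated.
print(f"mitigating staffing saves ~{with_all - p80_finish(30, risks, skip='staffing')} day(s)")
```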
In addition to the contact named above, individuals making key contributions to this report include Edward Laughlin and Karen Richey (Assistant Directors), Lindsey Bach, David Brown, Pamela Davidson, Tisha Derricotte, Kevin Egan, James Geibel, Bert Japikse, Delwen Jones, Jason Lee, Dominic Nadarski, Josh Ormond, and Brian Welsh.
The Federal Aviation Administration (FAA), partnering with other federal agencies and the aviation industry, is implementing the Next Generation Air Transportation System (NextGen), a new satellite-based air traffic management system that will replace the current radar-based system and is expected to enhance the safety and capacity of the air transport system by 2025. Concurrently, FAA continues to maintain and upgrade existing air traffic control (ATC) systems that will also be needed for NextGen. This involves acquiring and implementing new software and hardware. GAO was asked to determine (1) how, if at all, the costs and schedules of FAA ATC acquisition programs, including those related to NextGen, have changed since they were first submitted to Congress; (2) the reasons for any such changes; and (3) the extent to which selected ATC programs adhere to cost and schedule best practices. To do its work, GAO reviewed 30 programs and conducted cost and schedule analyses on four programs that had an approved baseline and were NextGen related. GAO reviewed acquisition documents and interviewed FAA officials. In a review of 30 major ATC acquisition programs, all of which will contribute to the transition to NextGen, GAO found that costs for 11 of the 30 programs have increased from their initial estimates by a total of $4.2 billion and that 15 programs experienced delays. The 11 acquisitions that experienced cost increases account for over 60 percent of FAA’s total acquisition costs ($11 billion of $17.7 billion) for the 30 programs. Schedule delays for the 15 acquisitions, 10 of which also had cost increases, ranged from 2 months to more than 14 years and averaged 48 months. Cost increases and schedule delays occurred due to several factors, many of which have been longstanding challenges for FAA. Specifically, these have involved (1) additional or unanticipated system requirements; (2) insufficient stakeholder involvement (such as controllers’ input) throughout system development; (3) underestimating the complexity of software development; and (4) unanticipated events, including funding shortfalls or work stoppages. These challenges, if they persist, will impede the implementation of NextGen, especially in light of the interdependencies among many acquisition programs, where cost increases or delays in one program can affect the costs and schedules of other programs. For the four programs GAO selected to analyze in depth, FAA is not consistently following the characteristics of high-quality cost estimates and the scheduling best practices that GAO previously identified. Regarding cost estimates, GAO found that although all four of the programs generally provided well-documented and comprehensive estimates, which are two of the four characteristics, no program fully met the other two characteristics. Specifically, no program estimate was credible, because each lacked an independent cost estimate, which provides a check against the program office’s estimate, and three programs lacked a risk or uncertainty analysis. The estimates also lacked accuracy because they were not updated regularly or based on comparable programs. Regarding scheduling practices, most programs did not substantially or fully meet the majority of the nine best practices GAO previously identified, including developing a fully integrated master schedule of all program activities and performing a schedule risk analysis.
For example, without a schedule risk analysis, FAA is unable to predict, with any degree of confidence, whether the estimated completion dates are realistic. FAA is implementing new processes and organizational changes to better manage acquisitions. However, by not consistently following the characteristics of high-quality cost estimates and scheduling best practices, FAA cannot provide reasonable assurance to Congress and other stakeholders that NextGen and other ATC programs will avoid additional cost increases or schedule delays. To better estimate the cost and completion dates for major acquisitions, FAA should, among other things, require cost and schedule risk analyses, independent cost estimates, and integrated master schedules. FAA did not comment on whether it agreed with the recommendations.
The Marine Corps was established on November 10, 1775, to provide security to naval vessels and boarding parties and to conduct limited land engagements in support of naval operations. In fiscal year 2012, the Marine Corps reported that it had about 198,000 active duty marines, 39,000 reservists, and 22,000 civilian employees. At any given time, approximately 30,000 marines are deployed in operations supporting the nation’s defense or military operations other than war. The Commandant of the Marine Corps has overall responsibility for Marine Corps operations, including the operating forces and supporting bases, air stations, and installations. To support its core mission, the Marine Corps received $28.7 billion in General Fund appropriations for fiscal year 2012—or 16.6 percent of the Department of the Navy’s appropriations. Figure 1 shows the amounts of the Marine Corps’ appropriations, including allocations of funds from appropriations shared with the Navy. The Marine Corps’ efforts to achieve audit readiness for its budgetary data were conducted within DOD’s overall high-risk environment. GAO’s High-Risk Series includes DOD risks related to weaknesses in financial management operations, business transformation, and business system modernization. DOD has acknowledged that long-standing weaknesses in its internal controls, business systems, and processes have prevented it from demonstrating that its financial statements are reliable, including information on budgeted spending reported in its SBR. Our February 2015 High-Risk Series updates on DOD financial management, business transformation, and systems modernization reported that the department had made limited progress in resolving long-standing weaknesses in these areas. DOD has undertaken several financial management improvement initiatives over the years to address weaknesses in business systems, processes, and controls through its FIAR strategy, semiannual FIAR Plan Status Reports, and financial management reform methodology contained in the FIAR Guidance. DOD also spends billions of dollars annually to maintain key business processes and operations and acquire modern systems that are fundamental to achieving its business transformation goals, including systems that support key functions, such as personnel, financial management, health care, contract management, acquisition, supply chain, and logistics. However, progress in making system and process improvements has been slow, and weaknesses in these areas have adversely affected the efficiency and effectiveness of DOD operations and hindered DOD’s ability to achieve financial audit readiness. While the department has made some progress toward demonstrating leadership commitment and developing capacity and action plans in all three areas, DOD continues to face challenges in monitoring corrective actions and demonstrating progress. In August 2013, we reported that DOD risk management policies associated with preparing auditable financial statements through the FIAR Plan are not in accordance with widely recognized guiding principles for effective risk management. For example, DOD has not addressed key risks associated with its component agencies’ reliance on service providers for significant aspects of their financial operations and their inability to maintain documentation to support transactions. 
In addition, DOD has continued to identify a department-wide need for qualified and experienced personnel—not only at working levels, but also in senior leadership positions—as a risk to achieving its financial management improvement and audit readiness goals. Because our related reports include numerous recommendations to DOD for addressing these and other financial management and audit readiness weaknesses, we are not making additional recommendations related to these matters in this report. The Marine Corps initially asserted that it was ready to undergo an audit of its fiscal year 2009 General Fund SBR on September 15, 2008. However, after reviewing the status of the Marine Corps audit readiness efforts, on April 10, 2009, the DOD OIG reported that the Marine Corps’ assertion of audit readiness was not accurate and that the documentation supporting its assertion was not complete. Although the Marine Corps made progress toward audit readiness during fiscal year 2009, the DOD OIG reported that a number of issues led auditors to conclude that an audit of the Marine Corps’ fiscal year 2009 SBR would not have positive results. For example, the OIG stated that after 3 months of extensive effort by the Marine Corps, adequate supporting documentation was received for only 74 percent of the sampled budgetary transactions. The DOD OIG reported that unless the issues were resolved, the risk of a disclaimer of opinion would be high. The DOD OIG also reported that the Marine Corps had identified remediation activities that needed to be accomplished before an audit of its SBR was undertaken. The DOD OIG suggested that the Marine Corps consider requesting an audit of its fiscal year 2010 SBR. The OIG subsequently contracted for assistance from an audit firm in performing an audit of the Marine Corps’ fiscal year 2010 SBR. Because the Marine Corps asserted SBR audit readiness at the beginning of fiscal year 2010, it was not subject to DOD’s May 2010 FIAR Guidance, which required each DOD component to review its processes and controls to identify needed corrective actions and develop a financial improvement plan with roles, responsibilities, and milestone dates for completing actions on assessable units as part of a component-level, overall financial improvement and audit readiness plan. In September 2011, we reported that the DOD OIG issued a disclaimer of opinion on the Marine Corps’ fiscal year 2010 SBR because the Marine Corps could not provide documentary support for transactions in a timely manner, and support for transactions was missing or incomplete. We also reported that the Marine Corps experienced difficulty identifying and providing complete populations of transactions that the auditors could confirm and use as a basis for substantive testing. In addition, the DOD OIG reported that the Marine Corps did not have adequate processes, systems, and controls over accounting for and reporting on the use of budgetary resources. Further, the Marine Corps could not provide evidence that reconciliations for key accounts and processes, such as the reconciliation (or matching) of payments (outlays) to bulk (estimated) obligations for shipments of household goods recorded in its Military Personnel appropriation account, were being performed. The OIG reported that Marine Corps management had not asserted that all corrective actions from eight previously identified material weaknesses had been completed. 
These weaknesses included, among others, deficiencies in financial management systems and deficiencies in controls over Fund Balance with Treasury and unobligated balances. During its fiscal year 2011 SBR audit effort, the Marine Corps again experienced difficulty in identifying complete populations and providing supporting documentation for samples of transactions selected by the auditors for testing. In November 2011, the DOD OIG issued a disclaimer of opinion on the Marine Corps’ fiscal year 2011 SBR, essentially for the same reasons as the fiscal year 2010 disclaimer. However, based on discussions with DOD Comptroller, Navy, and Marine Corps officials after the audit report was issued, the OIG decided to give the Marine Corps additional time to provide audit documentation that had not been obtained during the original time frame of the audit. Consequently, on December 29, 2011, the OIG extended the audit of the Marine Corps’ fiscal year 2011 SBR to March 31, 2012. Despite the extended testing period, the Marine Corps was still unable to provide the timely and relevant supporting documentation necessary for completing audit procedures to determine whether its fiscal year 2011 SBR was presented fairly. As a result, the DOD OIG’s November 2011 disclaimer of opinion on the Marine Corps’ fiscal year 2011 SBR was not amended. For fiscal year 2012, the DOD OIG continued as the auditor with responsibility for issuing the audit opinion and contracted with an audit firm for assistance in performing an audit of the Marine Corps’ budgetary activity reported on a current year General Fund schedule, beginning with fiscal year 2012 appropriations. The Marine Corps’ fiscal year 2012 General Fund Schedule is an interim, DOD component-level special report intended to provide a building block toward an SBR audit through audits of consecutive fiscal year schedules of budgetary activity. The schedule of budgetary activity, like the SBR, is designed to provide information on budgeted spending authority as outlined in the President’s Budget, including budgetary resources, the availability of budgetary resources, and how obligated resources have been used. The SBR and the schedule of budgetary activity aggregate account-level information reported in the Standard Form (SF)-133, Report on Budget Execution and Budgetary Resources, and summarize budgetary data reported in the Program and Financing schedules in the subsequent President’s Budget. Both the SBR and the schedule of budgetary activity consist of four separate, but related, sections that provide information about budgetary resources, the status of budgetary resources, changes in obligated balances, and outlays for major budgetary accounts. However, instead of covering the full range of SBR activity on current and expired appropriations that have not been canceled, the first-year Schedule of Budgetary Activity covers only activity on current fiscal year appropriations. Subsequent fiscal year Schedules of Budgetary Activity would include activity in subsequent years’ appropriations, building toward an SBR. For example, in the second year, the fiscal year 2013 Schedule of Budgetary Activity would include fiscal year 2013 budgetary activity related to fiscal year 2012 and 2013 appropriations.

Budgetary Resources. This section of a first-year schedule of budgetary activity shows the total budgetary resources made available to the agency for obligation during the current fiscal year only. It consists of new budget authority, reimbursements, and other income.
The first-year schedule of budgetary activity does not include unobligated amounts from prior periods, commonly referred to as beginning balances. In contrast, the SBR includes unobligated amounts available from prior reporting periods; transfers available from prior year balances; and adjustments, such as recoveries of prior year obligations. In addition, the SBR includes all other information provided in this section of the schedule of budgetary activity.

Status of Budgetary Resources. This section of the schedule of budgetary activity and the SBR displays the status of budgetary resources at the end of the period and consists of obligations incurred and the unobligated balances at the end of the period that are available for future use. For the schedule of budgetary activity and the SBR, the total for this section must agree with the total for the Budgetary Resources section described above, as this section describes the status of total budgetary resources. In addition to the current year activity, the SBR includes unobligated balances that are unavailable except to adjust or liquidate obligations chargeable to prior period appropriations.

Change in Obligated Balance. This section of the schedule of budgetary activity consists of obligations incurred in the current year, less current year outlays. In addition to current year activity, the SBR would also include unpaid obligations brought forward from prior years and recoveries of prior year unpaid obligations.

Outlays. This section shows the relationship between obligations and outlays (also referred to as disbursements or expenditures) and discloses the payments made to liquidate obligations. Obligations are usually liquidated by means of cash payments (outlays), such as currency, checks, or electronic fund transfers. This section reconciles outlays with obligations incurred and the change in obligated balances during the year. The content of this section is the same for the SBR and the schedule of budgetary activity.
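As a simplified arithmetic sketch of the reconciliation the Outlays section performs for a first-year schedule (our illustration, not Treasury's full crosswalk, and ignoring items such as recoveries): amounts obligated but not yet paid remain in the obligated balance, so outlays equal obligations incurred less the change in unpaid obligations. All figures below are hypothetical.

```python
# Hypothetical first-year reconciliation of outlays to obligations incurred.
obligations_incurred = 28_700    # orders placed against current-year authority ($ millions)
unpaid_obligations_start = 0     # first-year schedule: no balances brought forward
unpaid_obligations_end = 6_200   # obligated but not yet outlaid at year end ($ millions)

change_in_obligated_balance = unpaid_obligations_end - unpaid_obligations_start
outlays = obligations_incurred - change_in_obligated_balance
print(f"outlays = {obligations_incurred:,} - {change_in_obligated_balance:,} "
      f"= {outlays:,} ($ millions)")
```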
The Office of Management and Budget (OMB) requires federal government financial statements, including the SBR, to be presented in accordance with GAAP for the federal government. The Federal Accounting Standards Advisory Board (FASAB) establishes GAAP for federal governmental entities. Federal government entities also are required to follow the U.S. Standard General Ledger (USSGL) Chart of Accounts, established by the Department of the Treasury (Treasury) for budgetary and proprietary accounting. Budgetary accounts related to the SBR and schedule of budgetary activity are used to recognize and track budget approval and execution, whereas proprietary accounts are used to recognize and track assets and liabilities reported on the Balance Sheet and revenue and expenses reported on the Statement of Net Cost. The USSGL accounts with the most significance to the Marine Corps’ General Fund Schedule are those related to budget authority, including Appropriations and Collections; Obligations, for orders of goods and services; and Outlays, or cash payments for goods and services that have been delivered (received and accepted by the agency). Figure 2 shows the flow of budgetary resources from receipt of appropriations and collections through apportionment and allotment of funds, obligation of funds for orders of goods and services, and receipt and acceptance of goods and services to cash outlay or payment for the items received. Audits provide essential accountability and transparency over government programs. The purpose of a financial statement audit is to provide financial statement users with an opinion by the auditor on whether the financial statements are presented fairly, in all material respects, in accordance with an applicable financial reporting framework, which would include GAAP for the reporting entity. An audit conducted in accordance with GAGAS enables the auditor to form that opinion, which enhances the degree of confidence that intended users can place on the financial statements. OMB requires that audits of federal financial statements be performed in accordance with GAGAS and OMB Bulletin No. 07-04. For the federal government, OMB issues financial reporting requirements that are incorporated into GAAP and audit requirements for audits of federal financial statements that supplement GAGAS. OMB guidance is particularly important because of the unique requirements related to the preparation of the SBR and the consolidation of the federal government’s financial statements. GAGAS provide a framework for performing high-quality audits with competence, integrity, objectivity, and independence to provide accountability and to help improve government operations and services. For financial audits, GAGAS incorporate the American Institute of Certified Public Accountants (AICPA) fieldwork and reporting standards and the related Statements on Auditing Standards (SAS), unless specifically excluded or modified by GAGAS. The SAS are codified into audit sections, referred to as AUs. For this report, we generally refer to GAGAS and the specific, underlying AICPA standards, where appropriate. We also refer to the Financial Audit Manual, which is jointly approved and issued by GAO and federal agency inspectors general, for applicable audit guidance. The Financial Audit Manual presents a methodology for performing financial statement audits of federal entities in accordance with professional standards. As the basis for the auditor’s opinion, GAGAS require the auditor to obtain reasonable assurance about whether the financial statements as a whole, or an element of the financial statements being audited in a Special Report, such as the Marine Corps’ Fiscal Year 2012 General Fund Schedule, are free from material misstatement, whether due to fraud or error. Reasonable assurance is a high, but not absolute, level of assurance that is reached when the auditor has obtained sufficient, appropriate audit evidence to reduce audit risk (that is, the risk that the auditor expresses an inappropriate opinion when the financial statements are materially misstated) to an acceptably low level. In general, misstatements, including omissions, are considered to be material if, individually or in the aggregate, they could reasonably be expected to influence the economic decisions that users make based on the financial statements. Judgments about materiality are made in light of surrounding circumstances and involve both qualitative and quantitative considerations. These judgments are affected by the auditor’s perception of the financial information needs of users of the financial statements, by the size or nature of a misstatement, or both. The auditor has no responsibility to obtain reasonable assurance that misstatements that are not material to the statements as a whole, whether caused by fraud or error, are detected. Management is responsible for the fair presentation of financial statements that reflect the nature and operations of the entity.
When undergoing an audit, management represents that the financial statements are fairly presented in conformity with GAAP. By doing so, management implicitly and explicitly makes assertions regarding the recognition, measurement, presentation, and disclosure of the information in the financial statements and related disclosures as a whole. In accordance with auditing standards, the auditor should assess the risk of material misstatement at the financial statement and relevant assertion levels and design and perform audit procedures to reduce the risk of material misstatement to an acceptably low level. Auditing standards state that financial statement assertions used by the auditor about classes of transactions and events for the period under audit fall into the following categories:

Occurrence. Transactions and events that have been recorded have occurred and pertain to the entity.

Completeness. All transactions that should have been recorded were recorded.

Accuracy. Amounts and other data relating to recorded transactions and events have been recorded appropriately.

Cutoff. Transactions and events have been recorded in the correct accounting period.

Classification. Transactions and events have been recorded in the proper accounts. For the schedule of budgetary activity, this includes ensuring that transactions are recorded to the proper appropriation or fund.

In addition, federal agency management is responsible for establishing and maintaining internal controls to achieve the objectives of effective and efficient operations, reliable financial reporting, and compliance with laws and regulations under the law commonly known as the Federal Managers’ Financial Integrity Act (FMFIA). FMFIA and OMB Circular No. A-123 require the head of each executive agency to annually report to the President and the Congress assurance statements, including assurance regarding the effectiveness of internal controls over financial reporting and, for designated large federal agencies like DOD, whether financial management systems conform to government-wide requirements mandated by the Federal Financial Management Improvement Act of 1996 (FFMIA). In conducting a financial audit, the auditor develops the audit plan; assesses internal controls; performs testing; forms conclusions based on the audit evidence obtained; and, based on that evidence, issues an opinion or a disclaimer. These four areas of work are referred to as the four phases of an audit. In the planning phase, the auditor obtains an understanding of the audited entity’s operating environment, including business processes and the related systems and controls; reviews financial activity related to significant financial statement line items and accounts; assesses the risk of material misstatement; and develops an audit strategy. Planning continues throughout the audit as decisions are made about the risk of material misstatement and whether to perform additional procedures. During the internal control phase, the auditor identifies and tests key internal controls and information technology system controls as a basis for determining the extent to which the auditor will be able to rely on controls in conducting the audit. Based on the information obtained during the planning and internal control phases, the auditor determines the nature, extent, and timing of substantive testing.
Depending on the extent to which controls can be relied on for assurance of fair presentation of the financial statements, the auditor will perform more or less substantive testing. In the testing phase, the auditor performs substantive testing of detail support for transactions and may also perform analytical procedures. In accordance with AU Section 318, these substantive procedures are performed to detect material misstatements at the relevant assertion level and include tests of classes of transactions, account balances, and disclosures. During the reporting phase, the auditor reviews the body of evidence obtained, reviews the conclusions reached about that evidence, and determines the materiality of uncorrected misstatements and untested amounts as a basis for forming an opinion. Depending on issues identified during the audit, the auditor may decide to perform additional procedures to support a conclusion on the audit results. Our review of the audit documentation supporting the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule identified key areas where the audit procedures performed were not sufficient under professional auditing standards and where, consequently, sufficient, appropriate evidence was not obtained to support the reported audit opinion. Specifically, the audit documentation does not provide evidence that the auditors had (1) performed sufficient procedures to determine the completeness of budgetary transactions reported on the Marine Corps’ Fiscal Year 2012 General Fund Schedule, (2) performed sufficient procedures to determine the reliability of certain evidence used to support transactions included on the Marine Corps’ Schedule, (3) performed sufficient procedures to determine whether budget activity was recorded in the proper period and whether shipment obligations were properly recorded, and (4) properly considered and evaluated the audit evidence in concluding and reporting on the audit results. On March 23, 2015, the DOD OIG announced the withdrawal of its Auditor’s Report on the Marine Corps’ Fiscal Year 2012 General Fund Schedule. In a memorandum to DOD and Marine Corps leadership, the OIG’s Deputy Inspector General for Auditing stated that subsequently discovered facts identified during the audit of the Marine Corps’ Fiscal Year 2014 General Fund Schedule caused the OIG to question the completeness of the information on which the OIG based its opinion. More specifically, the OIG reported that (1) suspense accounts, which the U.S. Treasury maintains and which are used to temporarily hold transactions that could not be posted to a valid appropriation, contained Marine Corps transactions; (2) it believed that this condition existed in fiscal year 2012; and (3) it was unable to determine whether such transactions were material in relation to the Marine Corps’ Fiscal Year 2012 General Fund Schedule. Marine Corps transactions recorded to suspense accounts would not have been recorded in the Marine Corps’ Fiscal Year 2012 Schedule. At that time, the OIG indicated that once additional information had been gathered and analyzed, the fiscal year 2012 audit opinion would be revised in light of its analysis and reissued.
In commenting on our report, the OIG stated that it would consider all relevant information, including the findings and recommendations in our report, the findings of the four ongoing audits of suspense accounts, and a report from the OIG's Quality and Standards Office before deciding whether to reissue an opinion on the Marine Corps' Fiscal Year 2012 General Fund Schedule. Auditing standards require, among other things, that the auditor (1) assess the risk of material misstatement at the relevant assertion level and (2) perform substantive procedures for all relevant assertions related to material classes of transactions, account balances, and disclosures. Auditing standards further state that existence and completeness are always relevant assertions. Testing for completeness may be performed in a number of ways, including the following:
- Tests of detail transactions. When testing detail transactions for the completeness assertion, the auditor should select from audit evidence indicating that an item should be included in the relevant financial statement amount and should investigate whether the item is so included. For example, the auditor would select from data sources outside or independent of the amounts being tested.
- Reconciliations. In performing a reconciliation, the auditor reconciles two populations and tests reconciling items to determine whether the two populations are consistent. For example, a reconciliation would provide evidence that the transactions recorded in one population, in the aggregate, were also recorded in the other population.
We noted several areas where there is a high risk of material misstatement related to the completeness of outlays and obligations reported on the Marine Corps' Fiscal Year 2012 General Fund Schedule, for which the auditor either did not perform any testing procedures or did not perform sufficient procedures to determine whether there were material misstatements. Specifically, there is a high risk of material misstatement that nonpayroll transactions recorded in feeder systems may not be reported in the Marine Corps' general ledger system—the Standard Accounting, Budgeting, and Reporting System (SABRS)—and that transactions recorded in the current year may be improperly recorded to appropriations not included in the Marine Corps' Fiscal Year 2012 General Fund Schedule. Figure 3 shows the business system data flow from the feeder systems through the Defense Cash and Accountability System (DCAS) to the Marine Corps' SABRS general ledger system and through the Defense Departmental Reporting Systems (DDRS) to financial statements and the schedules of budgetary activity. The Marine Corps has reported that over 90 percent of the financial transactions in SABRS originate in feeder systems and that it has 25 primary feeder systems. Typical tests for completeness might include (1) tracing samples of transactions from significant feeder systems to ensure that the transactions were recorded in SABRS, (2) reconciling feeder system data to transactions in SABRS, and (3) confirming that rejected feeder system transactions were properly identified, isolated, and corrected in a timely manner. The audit documentation shows that the audit team reconciled the transfer of fiscal year 2012 civilian and military payroll data from the related payroll systems to SABRS and concluded that the military and civilian payroll populations in SABRS were sufficiently complete.
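To make the reconciliation approach concrete, the following is a minimal Python sketch, with hypothetical record layouts and document numbers, of reconciling a feeder-system extract to general ledger postings. It illustrates the technique described above; it is not the audit team's actual procedure.

```python
# Illustrative completeness reconciliation: hypothetical data, not actual audit records.
# Compares a feeder-system extract to general ledger postings by document number
# and accumulates unmatched items (potential completeness exceptions) for follow-up.

feeder_extract = {          # document_number -> amount, as sent by the feeder system
    "DOC001": 1500.00,
    "DOC002": 980.25,
    "DOC003": 412.10,
}
general_ledger = {          # document_number -> amount, as posted in the general ledger
    "DOC001": 1500.00,
    "DOC003": 400.00,       # amount difference: a reconciling item to investigate
}

missing_from_gl = sorted(set(feeder_extract) - set(general_ledger))
amount_differences = {
    doc: (feeder_extract[doc], general_ledger[doc])
    for doc in set(feeder_extract) & set(general_ledger)
    if abs(feeder_extract[doc] - general_ledger[doc]) > 0.005
}

print("Feeder items not posted to the general ledger:", missing_from_gl)
print("Items posted at a different amount:", amount_differences)
# Each unmatched or differing item must be investigated and either resolved
# or accumulated as a potential misstatement.
```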
However, the audit documentation does not include audit procedures to test the completeness of fiscal year 2012 nonpayroll feeder system data recorded in SABRS. We believe that the risk of material misstatement in the Marine Corps' Fiscal Year 2012 General Fund Schedule related to the transfer of transactions from nonpayroll feeder systems is high based on the following conditions:
- Nonpayroll feeder system transactions were material, accounting for about half of the Marine Corps' reported fiscal year 2012 budgetary activity.
- We identified examples of feeder system transactions that were not included in SABRS or were not included in SABRS on a timely basis.
- The Marine Corps did not have adequate processes for determining whether all transactions in the nonpayroll feeder systems were included in SABRS.
- There were reported internal control weaknesses that prevented the Marine Corps from reasonably assuring that all transactions in nonpayroll feeder systems were recorded in SABRS. For example, 11 of the Marine Corps' open recommendations were related to weaknesses in controls over transfers of feeder system data to SABRS. These open recommendations, discussed later in this report, addressed actions to (1) assure the completeness of populations of transactions and account balances, (2) test interface controls between various feeder systems and the Marine Corps' SABRS general ledger system, and (3) perform reconciliations of feeder system data to SABRS.
- There were a significant number of rejected transactions. For example, the audit documentation related to the Marine Corps' corrective actions on data transfers to SABRS included examples of daily reports of rejected feeder system transactions covering the months of April through July of 2012, each of which listed thousands of transactions that were rejected by SABRS. Our analysis of the rejected transactions determined that 70 percent of these transactions related to significant Marine Corps nonpayroll feeder systems involved with supply order and shipment transactions. In addition, the Marine Corps did not have a formal policy and control procedure for correcting errors that occur during data interface processing.
- DOD's November 2013 FIAR Status Report, issued 1 month prior to the OIG's audit report on the Marine Corps' Fiscal Year 2012 General Fund Schedule, showed that most Statement on Standards for Attestation Engagements (SSAE) No. 16 examinations of the effectiveness of controls over key DOD business feeder systems had not been completed, raising questions about the completeness and integrity of the processes and underlying data residing in these systems.
With regard to the completeness of transaction data in SABRS, members of the audit team told us that they had performed certain other tests. First, the team indicated, and the audit documentation showed, that it traced data from SABRS through the financial reporting process to the Marine Corps' Fiscal Year 2012 General Fund Schedule. As described in the audit documentation, this procedure would help confirm that data were not lost in processing from the general ledger to the Marine Corps' Fiscal Year 2012 General Fund Schedule. However, it does not provide evidence concerning the completeness of the data residing in SABRS, most of which originate in business systems outside of SABRS.
Second, members of the audit team told us that they traced the Marine Corps' SABRS general ledger system transaction data to transactions included in the Marine Corps' Fund Balance with Treasury reconciliation process and did not identify any missing transactions. However, these procedures would not be effective for testing the completeness of transactions recorded in SABRS because they begin with items that are already recorded in SABRS. Further, the audit documentation does not include evidence of a complete comparison of fiscal year 2012 SABRS transaction activity to fiscal year 2012 Fund Balance with Treasury reconciliations. For example, the audit documentation did not include a review of Marine Corps transactions submitted to Treasury by other federal agencies and other DOD components, such as the Army and U.S. Transportation Command, to determine whether they were properly recorded in SABRS. One reason that feeder system transactions may not be recorded in SABRS relates to rejected transactions. According to Defense Finance and Accounting Service (DFAS) officials, transactions originating in feeder and other systems that cannot be posted to a valid appropriation are rejected and temporarily held by the Marine Corps for research and, if not resolved within the month, are recorded to suspense accounts until they are investigated, resolved, and correctly recorded. Any Marine Corps budgetary transactions that were included in suspense accounts at the end of fiscal year 2012 were not included in the Marine Corps' Fiscal Year 2012 General Fund Schedule. As noted earlier, in late March 2015, the OIG withdrew its opinion on the Marine Corps' Fiscal Year 2012 General Fund Schedule because it was unable to determine whether transactions recorded in suspense accounts maintained by Treasury that were not included in the Marine Corps' Schedule were material to the Schedule. As discussed later in this report, the Marine Corps had not yet addressed all information technology system recommendations from its fiscal years 2010 and 2011 SBR audits related to control weaknesses over data transfers between feeder systems and SABRS. One such recommendation relates to the lack of a formal policy and control procedure for correcting errors that occur during data interface processing, in which transactions flow from feeder systems through DCAS and ultimately into SABRS. Such a policy would help assure that identified errors and rejected transactions are reviewed by management, resolved, and resubmitted for processing. The audit documentation on the Marine Corps' corrective actions for this recommendation shows that the Marine Corps took action to develop a report to monitor system rejects. However, the audit documentation used to support closing this recommendation does not include evidence that the auditors had performed procedures to (1) determine the causes of the rejected transactions as a basis for determining whether appropriate corrective actions had been designed and implemented or (2) confirm that rejected feeder system transactions were properly resolved. If audit procedures to confirm the completeness of transfers of nonpayroll feeder system data to SABRS are not sufficient, there may be undetected material amounts of transactions that are not properly included in the Marine Corps' Fiscal Year 2012 General Fund Schedule.
Further, (1) populations used for substantive testing throughout the audit may not be complete, (2) sample sizes may not be appropriate, and (3) statistical tests may not be reliable for concluding on the results of the audit. Another risk related to completeness is the risk that transactions recorded in fiscal year 2012 to prior year appropriations, which are excluded from the Marine Corps' Fiscal Year 2012 General Fund Schedule, should have been charged to fiscal year 2012 appropriations included in the Schedule. The Marine Corps' Fiscal Year 2012 General Fund Schedule is represented to include only budgetary transactions recorded to fiscal year 2012 current appropriations. Typical tests for completeness of the general ledger with respect to such transactions would include examining appropriate evidence that samples of fiscal year 2012 budgetary transactions charged to prior year appropriations were properly charged to those prior year appropriations. The audit documentation and discussions with the audit team did not disclose any testing of fiscal year 2012 activity recorded to fiscal year 2011 and prior appropriations to determine whether such transactions should have been recorded to fiscal year 2012 appropriations. However, we believe the risk of material misstatement related to such transactions is high, for two reasons: the Marine Corps' numerous reported weaknesses in controls over accounting and financial reporting, and the magnitude of fiscal year 2012 Marine Corps outlays that were recorded to prior fiscal year appropriations. For example, Treasury's Combined Statement of Receipts, Outlays, and Balances, Fiscal Year 2012 includes data on federal agency fiscal year 2012 outlays that were recorded to prior fiscal year appropriation accounts. Our review of the reported Marine Corps fiscal year 2012 outlay activity determined that over $3.8 billion in such outlay activity was recorded to fiscal year 2011 appropriations. Despite these reported conditions, there was no evidence in the audit documentation that the OIG assessed the risk of material misstatement associated with fiscal year 2012 appropriation activity being improperly recorded to a prior fiscal year appropriation account, or that the OIG performed completeness tests with respect to fiscal year 2012 appropriation transactions that may have been improperly recorded to prior year appropriations. In response to our concern, the OIG stated that the scope of its audit covered only fiscal year 2012 current activity and that, as such, fiscal year 2011 and prior activity was outside the scope of the audit. However, absent testing to identify fiscal year 2012 transactions improperly recorded to fiscal year 2011 and prior appropriations, there may be material budgetary transactions that were improperly excluded from the Marine Corps' Fiscal Year 2012 General Fund Schedule. Testing of detail transactions is a basic audit test designed to determine whether the recorded transactions are supported by sufficient, appropriate evidence. It involves comparing recorded information to supporting documents to determine whether the transaction is valid (authorized and approved) and is recorded in the proper period, to the proper appropriation, and at the proper amount.
For example, if the sampled transaction is an outlay for an item purchased, the auditor would review documents, such as the original purchase order, invoice, receiving report, and payment voucher, to substantiate the validity and amount of the sampled transaction. In some instances, the auditor may be unable to obtain sufficient, appropriate evidence to support a selected transaction. In such cases, the auditor should perform alternative procedures to determine whether the transaction was properly supported. For example, the auditor may confirm the details of the transaction with a third party. If the auditor is unable to obtain sufficient, appropriate evidence from alternative procedures, such items are generally treated as misstatements and are accumulated to determine whether such unsupported amounts are material in the aggregate. In examining evidence supporting a transaction, the auditor should consider the reliability of the information to be used as audit evidence, such as electronic documents, including consideration of controls over their preparation and maintenance where relevant. Such consideration would normally include any information that raises doubts about the reliability of the evidence. Also, when the auditor uses information produced by the entity to perform audit procedures, the auditor should obtain audit evidence about the accuracy and completeness of the information, for example, by performing procedures to determine whether the related controls over the data are effective. Auditing standards also state that the reliability of audit evidence is influenced by its source and nature and is dependent on the individual circumstances under which it is obtained. Even when audit evidence is obtained from sources external to the entity, circumstances may exist that could affect the reliability of the information obtained. For example, audit evidence obtained from an independent external source may not be reliable if the source is not knowledgeable. This means that regardless of the source of the information, if the auditor has doubts about the reliability of information to be used as audit evidence or is aware of problems with the reliability of the data, the auditor should determine what modifications or additions to audit procedures are necessary to resolve the issues. The audit documentation shows that the auditors had requested appropriate transaction documents from the Marine Corps, including orders, receiving reports, and invoices. However, the audit documentation also shows that when the Marine Corps was unable to provide the requested documents for a selected transaction, the auditors relied on data generated by other DOD agencies that provided the goods or services as evidence to support the transaction. Yet the auditors did not document their consideration of the reliability of the evidence provided by these other DOD agencies, even though there was information that should have raised doubts about its reliability. In addition, the auditors relied on support produced from certain Marine Corps systems without obtaining sufficient evidence about the accuracy and completeness of the information. The following examples describe well-known, documented issues related to certain DOD systems that, in our view, raise significant doubts about the reliability of data from the processes and systems that the OIG relied on in its transaction testing:
- DOD reported the Defense Logistics Agency's (DLA) Military Standard Requisitioning and Issue Procedures (MILSTRIP) as a department-wide material weakness in its fiscal year 2012 agency financial report, stating that the department could not effectively account for transactions and balances in the MILSTRIP orders process. DOD's reported target date for completing corrective actions was 2014.
- U.S. Transportation Command had not yet asserted audit readiness and had not undergone an SSAE No. 16 examination as of the end of fiscal year 2012. Further, U.S. Transportation Command uses the Defense Enterprise Accounting and Management System (DEAMS) as its official billing system, and DEAMS had not yet undergone testing of its financial reporting controls. The DOD OIG had previously reported that DEAMS managers did not maintain an adequate general ledger chart of accounts and did not take the steps needed to ensure that DEAMS had the capability to record and track transaction data. As a result, instead of properly recording transactions, such as budget authority, obligations, collections, and disbursements (outlays), at the time of the related events, DEAMS managers relied on DFAS to record journal vouchers (adjusting entries) in DDRS and used other offline electronic processes, such as spreadsheets, to record accounting entries. According to the DOD OIG, because funds control accounting was not being managed in DEAMS, budget execution reports and SBRs were developed using budgetary status data that could not be traced to actual transaction data within the official accounting system. These weaknesses increase the risk of accounting, billing, and financial reporting errors.
- In disclaiming an opinion on DOD's department-wide financial statements for fiscal year 2012, the OIG reported that DOD financial management and business feeder systems were unable to adequately support material amounts on the financial statements as of September 30, 2012. The OIG also reported that financial systems did not comply with FFMIA. DOD continued to report that the vast majority of the information needed to prepare the department's financial statements originates in feeder systems that input data into its financial systems and that the effectiveness of controls over most feeder systems had not been tested to determine whether information in such systems is reliable.
These data integrity issues should have raised significant doubts about the reliability of the information used as evidence to support some of the Marine Corps' transactions and should have triggered an assessment of the evidence to determine whether it was sufficiently reliable to support the selected transactions. In addition, the auditors should have obtained evidence of the accuracy and completeness of audit evidence produced by Marine Corps systems that they relied on for audit testing. If the evidence is not sufficiently reliable, the related amounts recorded in the Marine Corps' Fiscal Year 2012 General Fund Schedule should be considered misstatements and evaluated to determine whether such inadequately supported transactions are material. Our review of the audit documentation for sample outlay transactions that the auditors indicated were properly supported by sufficient, appropriate evidence identified numerous instances where the auditors relied on data from certain Marine Corps and other DOD agency business systems and processes with data reliability issues.
We were unable to determine the full extent of transactions supported by such evidence because the support for transaction samples that passed the auditors' tests (i.e., were not identified as exceptions) was not always readily available. However, our review of the audit documentation identified the following examples of outlay transactions selected for substantive detail testing that were supported solely by data generated from these DOD business systems and processes:
- When the Marine Corps could not provide original support for sample military supply order transactions, the audit firm relied on data from feeder systems and business processes with data reliability issues. These systems included the Marine Corps' Supported Activities Supply System (SASSY) and other Defense agency business systems, including systems involved with DLA's MILSTRIP business process. Our review of the OIG's audit documentation for 257 outlay sample items that were retested by the OIG as part of its oversight of the audit firm's substantive tests of Marine Corps outlays found that at least 42 of the 257 supply order outlay sample items (16 percent) shown as tested without exception were supported solely by data generated directly from such DOD systems and processes. As discussed later, OIG management accepted the same type of feeder system data as sole support for 13 DLA MILSTRIP transactions and 1 U.S. Transportation Command shipment transaction in control tests for proper cutoff of fiscal year 2012 outlays.
- Audit documentation on the results of substantive testing of 94 outlay sample items related to U.S. Transportation Command shipments of military supplies and equipment and shipments of household goods showed that 72 of the 94 shipment outlay sample items (77 percent) were supported solely by consolidated Interfund billings generated by U.S. Transportation Command systems. Interfund billings are transfers of funds between federal agency appropriations that are processed through Treasury's Intergovernmental Payment and Collection (IPAC) system. Marine Corps Interfund billings included coded accounting lines for multiple transactions, generally without any of the original supporting documentation for the individual transactions. According to the audit documentation, the auditors concluded that 71 of the 72 outlay sample items were tested without exception. The one exception was a sample item for a fiscal year 2011 shipment that the auditors believed was outside the scope of the Marine Corps' fiscal year 2012 audit and thus was recorded as an exception.
Figure 4 shows examples of source documents used in DLA's MILSTRIP and U.S. Transportation Command's shipment processes compared with the types of DOD system-generated data that the auditors relied on when the Marine Corps could not locate and provide the original transaction documentation. The data reliability issues related to these systems should have been identified in the auditors' assessment of the risk of material misstatement, and appropriate audit procedures should have been performed to assess the reliability of such evidence and to determine the accuracy and completeness of evidence produced by Marine Corps systems. Absent sufficient procedures to assess the reliability of such information, there is insufficient evidence to support the accuracy and completeness of transactions that are based solely on this evidence.
The OIG’s audit documentation did not contain evidence of sufficient procedures for fiscal year 2012 cutoff testing and testing of shipment obligations. As noted previously, cutoff is one of the financial statement assertions that the auditor considers during a financial statement audit. The cutoff assertion relates to whether transactions and events have been recorded in the correct accounting period. Cutoff includes consideration of two aspects. The first aspect, which relates to the existence or occurrence assertion, is that all transactions recorded in the current period relate to the current period. The second aspect, which relates to the completeness assertion, is that all transactions that should have been recorded in the current period have been recorded in the current period and are properly included in the financial statements. Although the OIG performed certain cutoff testing, our review of the audit documentation and discussions with the OIG determined that certain risks of material misstatement related to cutoff were not identified and addressed. The length of the cutoff period tested was not based on a complete assessment of the risk of material misstatement. Further, the auditors did not consider the lengthy transaction cycle for certain transactions that pose a higher risk of transactions being recorded to the wrong fiscal year appropriation. Specifically: No cutoff testing procedures were performed related to the risk that fiscal year 2012 transactions may have been recorded improperly as fiscal year 2011 activity. Given the lag time in properly recording certain types of transactions, risk exists that fiscal year 2012 transactions that were recorded after the cutoff period, or that certain types of transactions recorded during the end-of-year cutoff period, could be improperly charged to fiscal year 2013 appropriations. Because of these risks and uncorrected Marine Corps accounting and financial reporting weaknesses, the risk of material misstatement was high and additional procedures should have been performed to determine whether budgetary activity related to fiscal year 2012 appropriations was recorded in the proper period. Further, because such additional cutoff procedures were not performed, there may be material transactions related to fiscal year 2012 appropriations that were not properly recorded in the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The objective of cutoff testing is to obtain evidence about whether transactions were recorded in the proper accounting period. Cutoff tests, intended to test for completeness, determine whether transactions recorded prior to the fiscal year or after the end of the fiscal year should have been included in the year being audited. As previously discussed, the Marine Corps’ Fiscal Year 2012 General Fund Schedule was intended to cover current year activity on fiscal year 2012 appropriations. Typical cutoff tests for completeness include testing transactions recorded before the beginning of the reporting period and after the end of the reporting period to determine whether there are material amounts of transactions that should have been recorded in the current reporting period. Obtaining sufficient evidence of proper cutoff may also necessitate that the auditor perform other procedures. 
For example, if there is a risk that transactions may be recorded after the cutoff testing period or the audit completion date, such procedures may include examining open purchase orders, unpaid invoices, and contracts as of a date near the audit completion date, or estimating amounts that should be recorded in the current year based on appropriate evidence. To assess the risk of material misstatement related to cutoff and determine the scope of cutoff testing with respect to budgetary activity, auditors would generally determine the length of transaction cycles (from when a transaction occurs to when the transaction is properly recorded) for significant business processes. Certain business processes have short cycle times; for example, the transaction cycle for payroll is typically fairly short. For business processes with long cycle times, such as certain types of shipment transactions, obligations made in the last quarter of a fiscal year may not be recorded until the first month or first quarter of the next fiscal year, or until the outlay is made, which could be several months into the next fiscal year. In such instances, obligations and outlays may not be recorded to the proper accounting period, particularly if subsequent adjustments are not recorded timely. Accordingly, as shown in figure 5, depending on the entity's transaction cycle times and level of assessed risk of material misstatement, the auditor would plan cutoff testing that considers the length of significant transaction cycles with regard to the beginning and end of the accounting period audited. Auditing standards provide detailed guidance on obtaining an understanding of the entity and its environment to (1) assess the risks of material misstatement at the financial statement and relevant assertion levels and identify risks by classes of transactions, account balances, and disclosures in the financial statements; (2) relate the risks to what could go wrong at the relevant assertion level; and (3) consider the significance and likelihood of material misstatement for each identified risk in order to design appropriate substantive tests. As noted above, the auditor should assess the risk of material misstatement related to relevant assertions. In this case, the OIG identified proper cutoff as a risk of material misstatement, and we agree with that identification. Our assessment included consideration of the following factors that we believe result in a high risk of material misstatement related to cutoff:
- There are identified examples of transactions being recorded in the wrong period. DOD reports of Antideficiency Act violations provided to GAO identified numerous DOD-wide instances of transactions that were recorded to the wrong period. DOD has also reported a violation of the act related to a late U.S. Transportation Command shipment billing that was recorded in the subsequent fiscal year; when the need for an adjustment was identified, funds allocated for shipments in the previous fiscal year had been exhausted. In addition, we previously reported that the Marine Corps' use of bulk estimated obligations for shipments of household goods related to permanent change-of-station moves, which generally take 2 or more fiscal years to fully liquidate (i.e., for the final payment or outlays to be made), poses a risk of Antideficiency Act violations if the estimated obligations are too low and outlays exceed the bulk obligation. The Army, the Navy, and the Air Force have each reported violations related to the use of bulk estimated obligations.
- For certain types of transactions, such as certain U.S. Transportation Command billings, obligations sometimes may not be recorded until the outlay is made, which can be from a few days or weeks to several months or several years after the obligation should have been recorded.
- For certain types of transactions, there may be an extended period between when the transaction occurred and when the transaction is recorded. For example, U.S. Transportation Command shipment billings that cover multiple fiscal years are initially charged to current fiscal year appropriations and, for shipments related to prior year obligations, may not be analyzed and properly charged to those prior year appropriations until several months after the end of the fiscal year.
- There are reported internal control weaknesses related to reasonably assuring that all transactions are recorded in the proper period, particularly with regard to liquidations of estimated bulk obligations related to permanent change-of-station moves and U.S. Transportation Command billings.
AU Section 326 states that the auditor should obtain audit evidence to draw reasonable conclusions on which to base the audit opinion, including performing procedures to detect material misstatements at the relevant assertion level. As part of these procedures, the auditor must assess the risk of material misstatement at the financial statement and relevant assertion levels. AU Section 339 requires documentation of significant findings and issues, actions taken to address them, and conclusions reached. Although the above risks were known at the time of the audit, and the audit documentation includes a discussion of these risks, the documentation does not include evidence that the auditors appropriately considered these risks as a basis for designing and performing sufficient audit procedures to address them. For example, the audit documentation does not contain evidence that DOD OIG auditors performed procedures to assess the risk of proper cutoff and determine the nature, extent, and timing of substantive testing related to (1) the length of transaction cycles for significant volumes of transactions and (2) certain significant general ledger accounts. Our review of the audit documentation determined that the OIG performed cutoff testing only of transactions recorded in October 2012 (the first month of fiscal year 2013), based on the assumption noted in the audit documentation that there was a low risk that material amounts recorded in periods subsequent to October could relate to fiscal year 2012. However, there was no documented basis for this judgment. During our discussions with OIG auditors, they told us that based on their experience and auditor judgment, they considered this risk to be low. The auditors did not document their understanding of the length of transaction cycles for significant categories of transactions or the pattern and volume of those transactions at fiscal year-end. In addition, the audit documentation noted the process whereby U.S. Transportation Command submits summary Interfund billings through IPAC to the Marine Corps that are initially charged to the Marine Corps' fiscal year 2012 appropriations, as well as the Marine Corps' subsequent analysis to determine the allocations of the underlying transactions to the appropriate fiscal year appropriations.
However, the audit documentation did not include evidence that the auditors performed any procedures to (1) test the accuracy of the Marine Corps' allocation of fiscal year 2012 shipment billings to previous fiscal year appropriations or (2) confirm that the related adjustments were recorded to ensure that the portion of the outlays that pertained to previous fiscal year appropriations (and, in some cases, to other military services) was excluded from the outlays reported on the Marine Corps' Fiscal Year 2012 General Fund Schedule. Our analysis of U.S. Transportation Command billings and discussions with the auditors and the Marine Corps determined that the OIG was aware that the Marine Corps was performing analysis of approximately $21 million of fiscal year 2012 shipment billings in January 2013—4 months after the end of fiscal year 2012—to determine the extent of adjustments needed to record the related outlay transactions to fiscal year 2012 and prior appropriations. Further, for an audit of budgetary transactions, auditors should test for proper classification to assure that transactions are recorded to the proper fiscal year appropriation or fund account. The OIG told us that testing performed for the audit of the Marine Corps' Fiscal Year 2013 General Fund Schedule would identify any fiscal year 2013 transactions that should have been recorded to fiscal year 2012 and that if any cutoff errors were identified during the fiscal year 2013 audit, it would then determine whether a restatement of the Marine Corps' Fiscal Year 2012 General Fund Schedule was needed. However, audit evidence obtained in the current year audit should be sufficient to support the auditor's opinion. The OIG auditors also stated that a normal audit reporting schedule in the federal environment requires issuance of the financial statements and the associated opinion 45 days after the fiscal year ends and that this does not allow time for more testing. However, the OIG was not required to meet this reporting time frame for its audit of the Marine Corps' Fiscal Year 2012 General Fund Schedule and had already significantly exceeded it. Given that this was a first-year audit of a 1-year schedule of budgetary activity, additional testing could either have confirmed that a 30-day window was appropriate, and thus set a baseline, or have shown that further efforts were needed by the Marine Corps to address processing delays so that the future 45-day reporting cycle could be met without increasing audit risk. The audit documentation shows that the OIG tested only the transactions recorded in October 2012 that the Marine Corps applied against fiscal year 2012 appropriations to determine whether they should have been recorded to fiscal year 2012. The OIG did not test transactions recorded in October that were recorded against fiscal year 2013 appropriations to determine whether those transactions were properly recorded. As a result, there is a risk that transactions posted to fiscal year 2013 appropriations should instead have been recorded, during fiscal year 2012, to fiscal year 2012 appropriations. For example, U.S. Transportation Command shipment billings initially recorded to fiscal year 2013 may not have been adjusted and may affect fiscal year 2012 appropriations. Figure 6 shows a high-level illustration of the scope issue posed by outlays that actually related to multiple fiscal year appropriations being recorded to fiscal year 2012 appropriations.
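To illustrate the kind of transaction cycle analysis that could support the choice of a cutoff testing window, the following minimal Python sketch, using entirely hypothetical transaction dates, computes posting lags by business process. A window sized only to short-cycle processes, such as payroll, would miss long-cycle items such as multi-month shipment billings.

```python
# Illustrative only: hypothetical dates, not Marine Corps data. Computes the lag
# between when a transaction occurred and when it was posted, by business process,
# to inform how long a cutoff testing window should be.

from datetime import date
from collections import defaultdict

transactions = [
    # (process, event_date, posting_date)
    ("payroll",  date(2012, 9, 28), date(2012, 9, 30)),
    ("payroll",  date(2012, 9, 14), date(2012, 9, 15)),
    ("shipment", date(2012, 8, 20), date(2012, 12, 18)),  # long-cycle billing
    ("shipment", date(2012, 9, 5),  date(2013, 1, 22)),
]

lags = defaultdict(list)
for process, event, posted in transactions:
    lags[process].append((posted - event).days)

for process, days in lags.items():
    print(f"{process}: max posting lag {max(days)} days")
# A 30-day cutoff window (October only) would capture the payroll items here
# but not the shipment items, which post months after the event.
```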
The audit documentation on substantive testing results for shipment outlays showed that the auditors concluded that all shipment outlays made during fiscal year 2012 were accurately recorded to fiscal year 2012 appropriations, even though the sample shipment outlay transaction documents generally identified allocations that needed to be made to various previous fiscal year appropriations. Support for some sampled shipment outlay transactions initially recorded to fiscal year 2012 appropriations included receiving reports dated in August 2011 and September 2011, indicating that they pertained to fiscal year 2011 or earlier appropriations. Further, the documentation on testing results did not include auditor comments referring to additional procedures performed to ensure that necessary adjustments had been identified by the Marine Corps and recorded by the close of the fiscal year 2012 accounting period. The audit documentation also shows that the auditors did not perform any substantive cutoff testing for two general ledger accounts: the obligation account for delivered orders and the outlay account. The OIG told us that it did not test for proper cutoff of the obligation account for delivered orders because any errors identified would result in an adjustment to the obligation account for undelivered orders and would have no net effect on the Marine Corps' Fiscal Year 2012 General Fund Schedule, because the two obligation accounts are both reported on the "Obligations Incurred" line item of the Schedule. However, the OIG's testing of the obligation account for delivered orders during fiscal year 2012 substantive testing identified 11 errors for which the corresponding adjustments were recorded to other general ledger accounts and were reported on different line items of the Marine Corps' Fiscal Year 2012 Schedule. Thus, without testing obligations related to delivered orders for proper cutoff, there may be misstatements related to delivered orders that would not be detected by the audit. With regard to cutoff testing of outlay transactions, the audit documentation showed that after the OIG's tests of internal controls over proper cutoff for outlay transactions resulted in an unacceptably high error rate, the OIG requested that the Marine Corps provide documentation for a sample of 334 outlay transactions for substantive testing of end-of-period cutoff. According to OIG auditors, the Marine Corps responded that it was not able to provide support for this large substantive sample because it was, at that time, responding to requests for support on sampled transactions related to the audit of its Fiscal Year 2013 General Fund Schedule. As a result, the OIG attempted to rely on its initial tests of the Marine Corps' internal controls over proper cutoff and extended the time frame for completing its control tests to attempt to resolve the initial exceptions. The audit documentation included statements that the Marine Corps provided additional documentation and that the OIG determined that the documentation was sufficient to resolve all 21 transactions that were initially tested with exception (errors). Our review of Marine Corps documentation identified available support for 18 of the 21 transactions, and we determined that the support was sufficient to resolve only 6 of them. Given that we were unable to find adequate support for 12 of those 18 transactions, we believe that controls were not effective.
Further, even when control tests are effective, they do not eliminate the need for substantive testing. Shipment obligations pertain to shipments of military supplies and equipment and of household goods related to permanent change-of-station relocations and related personnel mobilization and permanent change-of-station travel. The Marine Corps reported that it had $529.5 million in fiscal year 2012 shipment obligations. Depending on the type of shipment, the time between obligation and outlay varies. Obligations for shipments of household goods for military members and civilians who are deployed or relocated include amounts for storage costs and reshipment of the items when the personnel return. These obligations, which are funded by Military Personnel appropriations, typically liquidate over a period of 2 or more years. Obligations for shipments of military supplies and equipment, funded by Operation and Maintenance appropriations, and obligations for shipments funded by Procurement appropriations generally liquidate within several days or a few weeks. The audit documentation showed that the OIG had identified several audit risks associated with shipment transactions. For example, the OIG had determined that the Marine Corps (1) did not have sufficient documentation available to support its multiple obligation processes for shipment transactions and (2) was unable to match the liquidations (outlays) with corresponding obligations. The audit documentation also showed that the OIG had attempted to perform substantive testing of the Marine Corps' shipment obligations; however, the Marine Corps was unable to provide support for $231.5 million of its reported $529.5 million in fiscal year 2012 shipment obligations. The audit documentation attributed the lack of supporting documentation to the Marine Corps' practice of recording (1) bulk estimated obligations for U.S. Transportation Command shipments and (2) obligations for commercial shipments either at the same time as or after the associated payments were made. Further, because the Marine Corps was unable to match outlays for specific shipments to its bulk estimated obligations, the auditors could not determine the reliability of obligated balances through detail testing of transactions. Given the identified issues related to the reliability of recorded transportation obligations, the Marine Corps developed a model to estimate the unliquidated obligations as of the end of the fiscal year. The model was based on historical outlay patterns, using outlay data for fiscal years 2008 through 2012. To illustrate, if historically 75 percent of the outlays relating to an appropriation were expended by the end of the first year, the model would estimate that the remaining 25 percent would be unliquidated obligations for the appropriation. The reliability of the model depends on several factors, including the reliability of the outlay data used in the model, the appropriateness of the assumptions used, and the consideration of factors that may affect historical patterns, such as the different outlay patterns for different types of shipments. The audit documentation stated that the OIG relied on the auditing standards in testing the Marine Corps' estimated liquidations of shipment obligations. In auditing estimates, auditing standards state that the auditor's objective is to obtain sufficient, appropriate evidence to provide reasonable assurance that the accounting estimates are reasonable in the circumstances.
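To make the estimation approach concrete, the following minimal Python sketch, with entirely hypothetical obligation amounts and liquidation rates, computes an estimated unliquidated obligation balance from historical burn rates in the manner the model description above suggests. It is an illustration, not the Marine Corps' actual model.

```python
# Illustrative burn-rate model: hypothetical figures, not the Marine Corps' model.
# Estimates year-end unliquidated obligations as obligations times the share that
# historically remained unliquidated at the end of the first year.

# Hypothetical first-year liquidation (burn) rates by appropriation, e.g., derived
# from fiscal year 2008-2012 outlay data.
historical_burn_rate = {
    "Military Personnel": 0.60,          # household goods liquidate over 2+ years
    "Operation and Maintenance": 0.95,   # supply shipments liquidate in days/weeks
}

# Hypothetical fiscal year 2012 shipment obligations by appropriation (dollars).
obligations = {
    "Military Personnel": 200_000_000,
    "Operation and Maintenance": 329_500_000,
}

for approp, obligated in obligations.items():
    unliquidated = obligated * (1 - historical_burn_rate[approp])
    print(f"{approp}: estimated unliquidated balance ${unliquidated:,.0f}")

# The estimate is only as good as the inputs: incomplete obligation populations or
# unreliable outlay data (both issues reported in the audit) bias the result.
```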
In assessing the reasonableness of the estimate, auditing standards state that the auditor normally concentrates on key factors and assumptions, including sensitivity to variations, deviations from historical patterns, susceptibility to misstatement and bias, and the entity's historical experience with the reliability of prior year estimates. The auditing standards also identify procedures that the auditor may consider when reviewing and testing the process used to develop management's estimates, including controls over the process and the relevance, reliability, and sufficiency of the historical data used in the estimate. The audit documentation showed that the OIG performed some review and analysis of the Marine Corps' model for estimating obligated balances related to shipments and made minor adjustments to the model. However, the audit documentation did not contain evidence that the OIG sufficiently performed certain other procedures in AU Section 342 that we believe are important, related to (1) identifying whether there were controls over the preparation of the Marine Corps' accounting estimates and testing such controls and (2) considering whether the sources of data and factors that management used in forming the assumptions were relevant, reliable, and sufficient for the purpose of the estimates, based on information gathered in other audit tests. For example, the audit documentation did not contain evidence that the audit team validated the factors management used to form the accounting estimate or performed procedures to test controls over the preparation of management's estimates. The audit documentation stated that the auditors performed procedures to assure that the sources and data used in the estimating methodology were relevant, reliable, and sufficient. However, the documentation did not include evidence of sufficient audit procedures performed to provide assurance of the reliability of the outlay transaction data used for determining obligation liquidation rates (referred to as historical burn rates) as a basis for estimating the Marine Corps' obligated balance for shipment transactions at the end of fiscal year 2012. The following examples summarize our concerns with respect to the sources and reliability of the data the OIG used to validate the Marine Corps' model for estimating obligated balances related to shipments at the end of fiscal year 2012:
- The audit documentation showed that the OIG could not validate the completeness of the population of the Marine Corps' reported shipment obligations as a basis for estimating the balance of shipment obligations at the end of fiscal year 2012 because (1) about $213 million related to bulk estimated obligations for which specific supporting documentation was not available and (2) about $19 million related to obligations that were based on billing and payment amounts, for which it was not possible to determine additional obligation amounts for shipments that had been made but had not yet been billed. Further, the audit documentation stated that the Marine Corps was unable to match liquidations (outlays) to reported obligations.
- The audit team did not perform procedures to confirm the reliability of U.S. Transportation Command system-generated Interfund billing data reported through IPAC, even though the DOD OIG had previously reported issues with the reliability of budgetary transactions reported by DEAMS and the OIG was aware that controls over other U.S. Transportation Command systems had not been tested.
- The OIG performed limited internal control tests over shipment outlays for a 5-year period covering fiscal years 2008 through 2012. The audit documentation showed that the audit procedures relied on (1) Marine Corps fiscal year 2008 and 2009 outlay data that had not been audited; (2) fiscal year 2010 and 2011 outlay data included in SBRs on which the OIG had disclaimed an opinion; and (3) fiscal year 2012 outlay data that were tested by comparing SABRS shipment outlay transactions to the dates and amounts on disbursement vouchers, rather than to original transaction support, and for which the auditors concluded that there were no errors. However, these disbursement vouchers are used to record shipment outlay transactions in SABRS and thus do not provide independent assurance of the accuracy of the outlay transactions. The audit documentation for internal control tests on outlays for each fiscal year used in the model consistently noted that the auditors were unable to determine the completeness of the shipment outlay populations used for testing. Further, the issues discussed in this report, such as those related to completeness and cutoff, may affect assurance of the reliability of the outlay data used in the model.
- The audit documentation did not show that the OIG sufficiently considered the effect that different types of shipment transactions liquidating at different rates might have on estimated obligation balances, because the OIG could not determine the populations for the various shipment processes. Members of the audit team told us that they generalized their tests and did not separately test liquidations for different types of shipments.
Based on its audit of the Marine Corps' accounting estimate of its fiscal year-end 2012 balance of shipment obligations, the OIG determined that the Marine Corps' reported balance of obligations at the end of fiscal year 2012 was overstated, and the audit documentation indicated that the OIG proposed a downward adjustment of $53.7 million, which the Marine Corps recorded. However, the reliability of the estimated fiscal year-end obligated balance reported in the Marine Corps' Fiscal Year 2012 General Fund Schedule is uncertain because of (1) the lack of assurance over the completeness and reliability of the shipment obligation and outlay data used to estimate the ending balance of obligations and (2) the application of a generalized liquidation rate to shipments that had significant differences in liquidation periods. As a result, obligations related to shipments reported in the Marine Corps' Fiscal Year 2012 General Fund Schedule may not be complete and reliable. As discussed later in this report, because of the significance of U.S. Transportation Command activity to DOD-wide audit readiness, in September 2013 the department initiated a DOD-wide Transportation Financial Auditability working group to document and test transportation processes, systems, and controls. The OIG is aware of this initiative. Accordingly, the OIG should have appropriately considered the risk associated with the Marine Corps' shipment outlay transactions and performed sufficient procedures to assure the reliability of shipment outlay amounts reported in the Marine Corps' Fiscal Year 2012 General Fund Schedule.
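The concern about applying a generalized liquidation rate, noted above, can be illustrated with a short Python sketch using hypothetical figures: when shipment types with very different liquidation profiles are blended into one rate derived from a historical mix that differs from the current mix, the estimated year-end obligated balance can shift materially.

```python
# Illustrative comparison with hypothetical figures only: applying one blended
# historical burn rate versus type-specific rates when shipment types have very
# different liquidation profiles and the current mix differs from the historical mix.

first_year_liquidation = {"household_goods": 0.50, "supplies": 0.98}

# Current fiscal year obligations by shipment type (hypothetical dollars).
obligations = {"household_goods": 150_000_000, "supplies": 379_500_000}

# Blended rate derived from a historical outlay mix that was 20/80.
historical_mix = {"household_goods": 0.20, "supplies": 0.80}
blended_rate = sum(share * first_year_liquidation[t] for t, share in historical_mix.items())

total = sum(obligations.values())
generalized_estimate = total * (1 - blended_rate)
type_specific_estimate = sum(
    amt * (1 - first_year_liquidation[t]) for t, amt in obligations.items()
)

print(f"Generalized estimate of unliquidated balance:   ${generalized_estimate:,.0f}")
print(f"Type-specific estimate of unliquidated balance: ${type_specific_estimate:,.0f}")
# With these inputs the two approaches differ by roughly $21 million, showing why
# differing liquidation periods matter to the reliability of the estimate.
```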
The OIG’s conclusion on the results of the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule did not consider all known misstatements and untested amounts; explain the basis for certain significant assumptions and auditor judgments; or properly resolve disagreements among the audit team, statisticians, and OIG management. As discussed in the auditing standards, in evaluating whether the financial statements are presented fairly, in all material respects, in conformity with GAAP, the auditor must consider the effects, both individually and in the aggregate, of misstatements (both known and likely) that are not corrected by the entity. At the conclusion of the audit, the auditor accumulates identified misstatements and considers whether such misstatements are material to the entity’s financial statements. In addition to quantitative measures, the auditor is also required to consider qualitative factors when assessing the materiality of misstatements. Auditing standards further state that as the aggregate misstatement identified in testing approaches materiality, the risk that the financial statements could be materially misstated also increases; consequently, the auditor should consider the effect of undetected misstatements in concluding on whether the financial statements are fairly stated. As previously discussed, in concluding on the audit, the auditor makes judgments about materiality in light of surrounding circumstances and qualitative and quantitative considerations. These judgments are affected by the auditor’s perception of the financial information needs of users of the financial statements by the size or nature of a misstatement, or both. As a basis for quantitative considerations on the results of testing, the auditor establishes a materiality level, or the maximum level of misstatement the auditor is willing to accept in concluding on the audit without the amount of misstatement being misleading to the users of the financial information. Federal government auditors generally set materiality for reporting on audit results at 3 percent of the materiality base. The materiality base is the element of the financial statement(s) that the auditor judges as most significant to the primary users of the statements. For the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule, the OIG used the reported Obligations Incurred line item amount of $27.5 billion as the materiality base. Accordingly, the OIG set materiality at 3 percent of the materiality base for the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule, which was $826 million. The audit documentation showed that the OIG calculated the level of identified misstatement related to errors and untested amounts identified in its audit as approximately $773 million. Based on this evaluation, the auditors concluded that the aggregate of identified misstatements and untested amounts was not material to the Marine Corps’ Fiscal Year 2012 General Fund Schedule. Our review of the audit documentation found that the OIG’s analysis of its test results omitted certain known errors and untested amounts. Specifically, the OIG’s audit calculation of identified misstatements omitted $18.3 million in contract progress payment errors identified in tests of obligations and another $17.5 million related to insufficient documentation to conclude on tests of contract outlays—a total of $35.8 million. 
The audit documentation showed that the audit team had initially determined that it could not conclude on the accuracy of sampled contract outlay transactions for which there was no support that the goods and services paid for were received. Accordingly, the OIG audit team counted the related transaction amounts as untested and planned to include them in the calculation of identified misstatements. The audit documentation showed that OIG management made an assumption that the unsupported outlay transactions could be adjusted and reported as advance payments to avoid counting the amounts as untested. The audit documentation stated that because outlays and advances are reported on the same line item of the General Fund Schedule, the adjustment would have no net effect on the Schedule. However, advances typically require authorization in law or in contract, and without documentation of such authorization, the amounts should have been considered untested. Had the auditors included these omitted errors and untested amounts, identified misstatements would have totaled over $808 million. The OIG's handling of differences of opinion between the audit team and OIG management is discussed further below. Additionally, the audit documentation did not include evidence that the OIG considered potential undetected misstatements in concluding on the fair presentation of the Marine Corps' Fiscal Year 2012 General Fund Schedule, or that the OIG considered qualitative factors in concluding on the effect of identified and potential undetected misstatements. As noted above, the OIG's identified misstatements and untested amounts were quantitatively near the calculated materiality. Based on the issues discussed above and other issues discussed previously in this report—including those related to (1) completeness of transactions reported in the Marine Corps' Fiscal Year 2012 General Fund Schedule, (2) transaction cutoff, (3) estimation of obligations, and (4) reliance on information in other DOD systems—additional misstatements may exist that might have been identified had additional audit procedures been performed. Such further misstatements, when aggregated with identified misstatements, could be material. Consequently, sufficient, appropriate evidence was not obtained to support the conclusion that the Marine Corps' Fiscal Year 2012 General Fund Schedule is presented fairly. The OIG's Audit Handbook describes the roles and responsibilities of its Quantitative Methods Division (QMD) in providing technical support for DOD audits. QMD's roles in support of financial audits include technical assistance in determining the appropriate population as a basis for ensuring defensible results, guidance on statistical sampling methods, design of the sampling plan, and analysis of sample results. The OIG's Audit Handbook states that the QMD analyst will attend project debriefs and exit conferences and answer any questions about the quantitative (statistical) sampling approach and the uses and limits of the quantitative results. In addition, the QMD analyst will help the audit team correctly present quantitative results in the audit report and will certify the defensibility of the significant quantitative methods used in the audit report. However, the audit documentation showed that QMD did not sign off as certifying the auditors' projections of sample results because of concerns about the auditors mixing two methods for making statistical estimates.
Instead, QMD added a note to the certification form, stating that it expressed no opinion as to the application (i.e., projection) of results with respect to the evaluation of sample results against materiality. QMD officials told us that the reason they did not sign off on the auditors’ materiality assessment was that they were not included in the materiality assessment process and did not know the basis for the auditor judgments made. QMD officials explained that this was unusual and stated that they are generally included in auditor assessments of materiality. Auditing standards recognize that auditors must use professional judgment in concluding on an audit. Auditors also are required to document significant decisions in their audit documentation. The audit documentation for the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule showed inconsistencies and conflicting conclusions between the audit team and OIG management regarding the scope of audit testing and the OIG’s conclusions on the results of audit testing, including testing for cutoff, shipment obligations and outlays, and acceptance of unaudited system-generated data for substantive testing of transactions. These conflicting conclusions indicate that significant auditor judgments had been made regarding the audit results and audit conclusion, but the audit documentation did not include a reconciliation or explanation for the conflicting statements. Further, these undocumented auditor judgments related to decisions made by OIG management that overturned the audit team’s test results and conclusions. The following examples illustrate this issue. The audit team’s conclusions on cutoff testing stated that because the Marine Corps did not have controls for assuring that obligations were recorded in the proper period, the team was unable to gain assurance of the completeness of populations used for this testing and, as a result, was unable to conclude on the completeness of the Obligations Incurred and the Outlays line items or the fair presentation of the Obligations Incurred line item in the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The audit documentation did not contain any further audit procedures that were performed or auditor explanations that indicated that this issue had been resolved. Further, because the Obligations Incurred line item, reported at nearly $27.5 billion, represents all but about $1.9 billion of the Marine Corps’ fiscal year 2012 budgetary resources, the inability to conclude on the fair presentation of this line item would mean that the extent of fair presentation of the Marine Corps’ Fiscal Year 2012 General Fund Schedule also could not be determined. Moreover, the audit documentation did not show OIG management’s basis for determining that cutoff testing was sufficient. The audit documentation related to the OIG’s application of the Marine Corps’ model for estimating the year-end balance of shipment obligations included at least six individual workpapers in which the audit team had concluded that it was unable to gain assurance as to the completeness of populations used for testing historical (fiscal year 2008 through 2011) shipment liquidation transactions (outlays). In concluding on the testing for this category of transactions, the audit team stated that this issue posed a scope limitation.
However, as previously discussed, the OIG ultimately relied on historical liquidations data for determining a “burn rate” (liquidation or outlay rate) for fiscal year 2012 as a basis for assessing the reasonableness of reported fiscal year-end 2012 obligated balances. We found no documentation of the basis for the OIG management decision that the limited procedures performed were reliable for use in estimating year-end obligated balances. This is a significant issue because shipment obligations reported by the Marine Corps as totaling over $529 million represent two-thirds of the materiality threshold used by the OIG to conclude on the audit. As previously discussed, identified misstatements and untested amounts were quantitatively near the calculated materiality without considering this amount. In addition, our review of the audit documentation identified numerous e-mail communications during the months of November and December 2013, shortly before the audit report was issued, that indicate there was a disagreement between the audit team and OIG management regarding whether there was sufficient, appropriate audit evidence to support an unqualified (“clean”) audit opinion. The e-mails showed that the audit team did not believe it had the evidentiary support for the clean opinion and was asking for OIG management guidance regarding the basis for issuing an unqualified opinion. The e-mails also showed that OIG management instructed the audit team that a decision was made that the Marine Corps had “earned” an unqualified opinion and that the audit documentation needed to be updated to support the clean opinion. The audit documentation did not include an explanation of the basis for the OIG management judgment related to the opinion. Consequently, the audit documentation showed a gap between the audit team’s conclusions relating to a disclaimer and the clean opinion that was reported by the OIG in December 2013. Audit quality control standards (designated QC by the AICPA) state that audit organizations should establish policies and procedures for addressing and resolving differences of opinion within the engagement team; with those consulted; and, when applicable, between the engagement partner and the engagement quality control reviewer. Such policies and procedures should enable a member of the engagement team to document his or her disagreement with the conclusions reached after appropriate consultation. Such policies and procedures should require that (1) conclusions reached be documented and implemented and (2) the audit report not be released until the matter is resolved. Our review of the OIG’s Audit Handbook and the DOD Audit Manual, and discussions with the OIG Audit Policy and Oversight officials, determined that the OIG does not have policies and procedures for resolving disagreements between the audit team and OIG management. The OIG issued 177 recommendations to address deficiencies in internal controls over Marine Corps’ accounting and financial reporting and information technology system general operating controls as a result of its audits of the Marine Corps’ fiscal year 2010 and 2011 SBRs and the Marine Corps’ Fiscal Year 2012 General Fund Schedule. Based on our review of OIG documentation, 130 (73 percent) of these recommendations had not been fully addressed by the end of the fiscal year 2012 Marine Corps audit.
This includes 22 recommendations that we determined the OIG closed prior to verifying and documenting that implementation of the recommended controls was complete and fully addressed the recommendations. In addition, we made three recommendations to the Marine Corps in our September 2011 report on the Marine Corps’ fiscal year 2010 SBR audit results, all of which remained open as of March 2015. The Marine Corps has improved its remediation plan and strengthened its monitoring process and is taking a more risk-based approach to corrective actions. However, significant uncorrected control weaknesses continue to impair the Marine Corps’ ability to produce consistent, reliable, and sustainable financial information for day-to-day decision making on its missions and operations. The lack of reliable financial information and systems, processes, and controls also impedes the Marine Corps’ ability to achieve sustainable, cost-effective audit efforts. Our review of the OIG’s documentation on the status of actions to address its recommendations to Marine Corps management resulting from its fiscal years 2010 through 2012 audits of the Marine Corps budgetary activity showed that as of the end of its audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule, 130 of the 177 recommendations issued had not been fully addressed. The 130 open recommendations included 16 recommendations that were issued from August 2012 through February 2013 to address deficiencies identified in the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The majority of the 130 open recommendations related to the Marine Corps’ fiscal year 2010 first-year SBR audit. In addition to presenting impediments to the Marine Corps’ financial management operations, the weaknesses that gave rise to these recommendations also impede the Marine Corps’ ability to respond to audits and the auditors’ ability to rely on the Marine Corps’ internal controls in planning and conducting audits. This results in the auditors having to perform labor-intensive substantive tests of larger samples of transactions that consume more time and resources than would be required if the Marine Corps’ internal controls were effective. While it is important for the Marine Corps to address these recommendations in a timely manner, Marine Corps officials told us that progress has been limited because Remediation Team staff are used to support financial audits and the Marine Corps has experienced difficulty hiring additional qualified staff. Table 1 summarizes our analysis of the OIG’s documentation on the status of Marine Corps actions taken to address the OIG recommendations from the fiscal year 2010 through 2012 audits. OIG managers told us that their policy is to evaluate corrective actions on OIG and GAO recommendations and close them as appropriate. However, as noted in table 1, we determined that 22 recommendations that the OIG had closed should have remained open. Our analysis of the audit documentation on the Marine Corps’ corrective actions determined that support was not sufficient for closing these recommendations for the following reasons. Four recommendations called for development of written policy and procedures and the implementation of the related control procedures. The documentation on the Marine Corps’ actions supported only the development of the written policy and procedures. There was no documented evidence that the policy and procedures as designed had been effectively implemented.
Six recommendations related to the completeness and accuracy of data transfers from DOD business systems to the Marine Corps’ SABRS general ledger system were closed without any evidence of procedures being performed to confirm that the data transferred to SABRS were complete. Four recommendations were closed because the auditors’ substantive testing did not identify any related exceptions, even though there was no documentary evidence that the Marine Corps had designed and implemented corrective actions. The remaining eight recommendations were closed without sufficient documentation that actions were completed and verified as effective. The auditors told us that they planned to test implementation of several controls in a subsequent audit. However, absent evidence that the new controls had been effectively implemented, closing these recommendations creates a risk that needed corrective actions may not be completed and that the related weaknesses will continue to exist. Our review of the Marine Corps’ open recommendations identified numerous uncorrected financial reporting and information system control weaknesses that, if effectively resolved, would significantly improve the Marine Corps’ ability to achieve reliable financial reporting and more efficient audit efforts. The following examples summarize recommendations related to significant weaknesses that had not yet been corrected and thus continue to impair the Marine Corps’ ability to generate reliable financial management information on an ongoing basis for decision making and to achieve and sustain auditable budgetary information. Sixteen of the Marine Corps’ open recommendations related to weaknesses in controls for assuring completeness, including transfers of feeder system data to its SABRS general ledger system and timely recording of transactions. These open recommendations addressed actions to (1) assure completeness of populations of transactions and account balances, (2) test interface controls between various feeder systems and the Marine Corps’ SABRS general ledger system, and (3) perform reconciliations of feeder system data to SABRS. Thirty-five open recommendations related to weaknesses in controls over the reliability of feeder system data, including systems security, access controls, and data processing controls. Open recommendations related to data reliability include recommendations to (1) implement periodic review of input processing and edit checks that could produce exception reports; (2) ensure timely, accurate recording of transactions; and (3) strengthen information system data integrity and access controls. Forty-three open recommendations related to weaknesses in controls for assuring proper support for obligations and outlays. These weaknesses affect the support for MILSTRIP, shipment, and contract transactions. Open recommendations related to the reliability of reported obligations and outlays include actions to (1) ensure proper recording of obligation and outlay transactions; (2) reconcile shipment outlays to obligation transactions; (3) periodically review accrued delivered orders and identify amounts that should be deobligated; (4) review support for existing bulk estimated obligation documents and adjust the beginning obligated balance, as appropriate; (5) ensure supporting documentation traces to and supports amounts recorded in SABRS; and (6) improve monitoring controls over IPAC transactions.
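To make concrete the kind of automated monitoring these recommendations call for, the following is a minimal, hypothetical sketch of a periodic review that flags dormant obligations for possible deobligation, in the spirit of item (3) in the last list above. The record layout and data are invented, not SABRS’s actual schema; the 120-day inactivity threshold mirrors the definition of “stale” obligations noted later in this report.

```python
# Hypothetical sketch: flag obligations that still carry an unliquidated
# balance but have had no activity for more than 120 days, so fund managers
# can review them for deobligation. Field names and data are invented.
from datetime import date, timedelta

STALE_AFTER = timedelta(days=120)

obligations = [
    {"doc_no": "M0001", "unliquidated": 125_000.00, "last_activity": date(2012, 3, 1)},
    {"doc_no": "M0002", "unliquidated": 0.00, "last_activity": date(2012, 9, 5)},
    {"doc_no": "M0003", "unliquidated": 48_500.00, "last_activity": date(2012, 8, 20)},
]

def stale_obligations(records, as_of):
    """Return records with money still obligated but no recent activity."""
    return [r for r in records
            if r["unliquidated"] > 0 and as_of - r["last_activity"] > STALE_AFTER]

for r in stale_obligations(obligations, as_of=date(2012, 9, 30)):
    print(f"{r['doc_no']}: ${r['unliquidated']:,.2f} unliquidated, review for deobligation")
# -> M0001: $125,000.00 unliquidated, review for deobligation
```

Comparable scripts run over paired obligation and outlay files could support the shipment reconciliations in item (2), provided the underlying data are complete and reliable, which is precisely what the open completeness and data reliability recommendations address.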
In addition to achieving improvements in the overall integrity and reliability of its financial operations and information, the Marine Corps would benefit from resolving these significant control weaknesses in two ways. First, strengthened processes and controls would provide a basis for the auditors to reduce sample sizes. Second, strengthening controls for assuring the reliability of feeder system data would reduce the effort needed to locate original support for transactions, easing both the Marine Corps’ burden in responding to requests for large samples and the auditors’ burden in performing labor-intensive substantive tests that consume more time and resources than would be required if the Marine Corps’ internal controls were effective. Further, developing audit support agreements with other DOD components that support the Marine Corps’ mission by providing services and supplies as well as the related obligation and outlay data would help the Marine Corps respond to its financial audits. For example, such agreements could assist the Marine Corps in documenting mission-related processes, systems, and controls and taking appropriate actions to address any weaknesses identified in such efforts. The overall benefit from these efforts would be financial management improvement. In August 2014, we followed up with Marine Corps officials to discuss their progress on addressing open recommendations from the Marine Corps’ fiscal years 2010 through 2012 audits. Of the 75 open accounting and financial reporting recommendations, our analysis showed that in February 2014, the auditors closed 48 recommendations and consolidated and reopened 22 of them as new recommendations associated with performance of the audit of the Marine Corps’ Fiscal Year 2013 General Fund Schedule. The officials told us that the purpose of this effort was to clarify finding and recommendation language to help the Marine Corps identify underlying control weaknesses and develop appropriate corrective actions to resolve the causes of the weaknesses. In reissuing the consolidated recommendations, the auditors grouped findings with similar causes and remediation steps into an overall recommendation. However, our analysis determined that the other 27 recommendations were closed by the auditors. Documentation that the Marine Corps provided us in August 2014 stated that the weaknesses remained, including those related to 6 recommendations for correcting weaknesses associated with use of bulk estimated obligations; 10 recommendations for timely fund manager reviews, including review of “stale” obligations (obligations without activity for more than 120 days) to see if they are needed or should be deobligated; and 6 recommendations related to timely correction of DDRS financial reporting errors and monthly management reviews of all journal vouchers for proper recording. The auditors did not consolidate or close any of the previously issued information technology system recommendations during this period. The auditors told us that their decision to close these 27 recommendations was based on the results of substantive testing performed for the audit of the Marine Corps’ Fiscal Year 2013 General Fund Schedule. The auditors explained that nothing related to the previously identified weaknesses came to their attention during their substantive testing for the Marine Corps’ fiscal year 2013 audit.
However, the absence of identified misstatements alone is not sufficient for determining whether internal control weaknesses have been remediated. Regardless of whether the number of recommendations to address control weaknesses has been reduced, for example, because the auditors consolidated them, timely and effective actions to resolve underlying causes of control deficiencies related to (1) completeness of data transferred from DOD feeder systems to the Marine Corps’ SABRS general ledger system, (2) reliability of financial data and information generated by DOD feeder systems, and (3) ensuring availability of supporting documentation for obligations and outlays will be critical to achieving sustainable financial management improvement and financial audit efforts. Our September 2011 report on the Marine Corps’ fiscal year 2010 SBR audit results included three recommendations to the Marine Corps. While the Marine Corps has made progress in addressing our recommendations, all three recommendations remain open. The Marine Corps has not yet fully addressed our recommendations that it (1) use the results of its fiscal year 2010 and 2011 SBR audits to develop a comprehensive, risk-based plan for designing and implementing corrective actions that provide sustainable solutions for SBR auditor recommendations; (2) review Marine Corps SBR remediation actions under way and confirm that the actions are fully responsive to the auditor recommendations; and (3) develop and implement timely and effective agreements for audit support with the appropriate DOD components in accordance with the FIAR Guidance where remediation actions require a coordinated effort. The Marine Corps has established the Risk and Compliance Branch to support its audit readiness efforts. The Marine Corps also assigned new leadership to its Remediation Team and moved the team under the Risk and Compliance Branch to provide more focus on remediation of identified weaknesses. The Remediation Team is responsible for coordinating, monitoring, and validating the design and effectiveness of corrective actions to address audit recommendations and findings from management and internal reviews. DOD stated that the Marine Corps disagreed with our recommendation to develop a comprehensive, risk-based corrective action plan, stating that it was too prescriptive with regard to identifying roles and responsibilities and including performance indicators to measure performance against action plan objectives. However, under its new Risk and Compliance Branch, the Marine Corps subsequently developed a detailed remediation process that includes elements of a comprehensive, risk-based plan as called for in our recommendation. For example, according to the Marine Corps, it now identifies weaknesses associated with audit findings that the auditors grouped by categories and works with process owners and stakeholders to understand the causes of the weaknesses and develop corrective action plans that will be effective in resolving them. Our review of the Marine Corps’ new remediation process found that Marine Corps officials also had assigned a high, medium, or low priority to each recommendation based on risk; however, they had not yet developed written criteria or guidance for determining how to apply these priorities in order to focus corrective actions on the most significant areas of weakness. 
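Written criteria or guidance of the kind noted above could be as simple as a documented scoring rule. The sketch below is purely hypothetical; the factors, weights, and thresholds are assumptions for illustration, not Marine Corps or DOD policy.

```python
# Hypothetical illustration of documented criteria for assigning risk
# priorities to audit recommendations. Factors, weights, and thresholds are
# invented for illustration; they are not Marine Corps or DOD policy.
def priority(dollar_exposure_millions: float,
             fiar_dealbreaker: bool,
             repeat_finding: bool) -> str:
    """Score a recommendation and map the score to a documented priority."""
    score = 0
    if dollar_exposure_millions >= 100:
        score += 2
    elif dollar_exposure_millions >= 10:
        score += 1
    if fiar_dealbreaker:       # e.g., completeness of populations
        score += 2
    if repeat_finding:         # weakness reported in a prior audit
        score += 1
    return "high" if score >= 3 else "medium" if score == 2 else "low"

print(priority(529.5, fiar_dealbreaker=True, repeat_finding=True))   # high
print(priority(5.0, fiar_dealbreaker=False, repeat_finding=False))   # low
```

Whatever factors are chosen, documenting them would allow the Remediation Team to show that corrective actions are focused on the most significant areas of weakness, which is the gap we observed.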
In response to our second recommendation, as part of the new remediation process, the Marine Corps also incorporated an independent stakeholder review and monitoring role with responsibility for ensuring that corrective actions fully address auditor recommendations as well as any recommendations resulting from internal management reviews. However, the Marine Corps has not yet provided documentation of the stakeholder reviews to demonstrate that this action is fully implemented and operating as intended. With regard to our third recommendation, Marine Corps officials told us that they have initiated efforts to develop agreements for audit readiness support with the appropriate DOD components. For example, they have a draft audit support agreement with DLA that covers audit support related to DLA-performed business processes that generate financial information that the Marine Corps will rely on for financial statement reporting and audit purposes. These DLA business processes include (1) receiving and accepting goods, (2) storing material, (3) issuing and distributing material, (4) disposing of material, and (5) updating accountability records. Marine Corps officials told us that where audit support depends on DOD-wide systems, processes, and controls related to MILSTRIP and U.S. Transportation Command shipments, they believe the DOD Comptroller and FIAR Directorate should take the lead in developing the service-level agreements. Our review of the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule identified major areas where key Marine Corps business processes depended on other DOD agencies’ business processes and feeder systems that had data reliability issues and that transferred financial data and information to the Marine Corps’ general ledger system. Because other DOD components also rely on many of those same DOD agencies’ business processes and feeder systems, these issues will likely present DOD-wide challenges related to (1) ensuring the completeness of populations used for transaction testing and the proper cutoff of transactions for the accounting period, (2) determining the reliability of feeder system data transferred to the general ledger system, and (3) determining the reliability of reported obligations and outlays. These DOD-wide challenges have been known for many years. Since December 2011, DOD’s FIAR Guidance has included these challenges in a list of “dealbreakers” that, if not effectively resolved, would pose a significant challenge to achieving financial management improvement as well as audit readiness. To the extent that these challenges are not resolved, they will pose serious obstacles to the military services, which are currently undergoing first-time audits of their fiscal year 2015 General Fund schedules of budgetary activity, and could also pose obstacles to DOD’s efforts to achieve audit readiness on a full set of financial statements for fiscal year 2018. In May 2014, we reported that DOD had an inventory of 2,329 business systems, including 286 financial management systems; 702 logistics systems; 730 human resources management systems (including payroll systems); and numerous acquisition, logistics, and other business systems. The vast majority of the department’s financial transactions originate in these business systems, which then feed financial transaction data—including data for military and civilian payroll, supplies and procurements, travel, work orders, and shipments—to DOD general ledger systems.
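Because so many transactions originate outside the general ledger, a basic completeness control is to reconcile what each feeder system transmitted against what the ledger recorded, both in control totals and document by document. The following minimal sketch is hypothetical; the document numbers, amounts, and layout are invented and do not reflect actual DOD system schemas.

```python
# Hypothetical feeder-system-to-general-ledger completeness reconciliation.
# All identifiers and amounts are invented for illustration.
feeder = {   # document number -> amount transmitted by the feeder system
    "TRV-001": 1_200.00,
    "PAY-117": 98_431.55,
    "SHP-042": 15_750.00,
}
ledger = {   # document number -> amount posted in the general ledger
    "TRV-001": 1_200.00,
    "PAY-117": 98_431.55,
}

missing = sorted(set(feeder) - set(ledger))             # transmitted, never posted
mismatched = sorted(d for d in set(feeder) & set(ledger)
                    if feeder[d] != ledger[d])          # posted at a different amount

print(f"Control totals: feeder ${sum(feeder.values()):,.2f} "
      f"vs ledger ${sum(ledger.values()):,.2f}")
print("Unposted documents:", missing or "none")         # -> ['SHP-042']
print("Amount mismatches:", mismatched or "none")       # -> none
```

In practice, a reconciliation of this kind would run on every interface each reporting period, with unmatched items researched and cleared. That is the control environment described as necessary below, and it is what the Marine Corps’ open interface and reconciliation recommendations are intended to establish.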
As identified in our review of the Marine Corps’ fiscal year 2012 audit, performing tests to assure the completeness and reliability of DOD business systems data and performing periodic reconciliations of business system data to general ledger systems are necessary to provide reasonable assurance that military service and defense agency financial statements include all transactions and balances that should have been recorded for the period. This will be a challenge across DOD given the large number of feeder systems and the fact that the controls over most systems have not yet been tested. Without assurance of completeness of populations used for audit testing, auditor sampling and testing results will not provide the reasonable assurance necessary for concluding on an audit and forming an opinion. DOD’s FIAR Guidance continues to identify the inability to provide assurance of complete populations (i.e., reconcile the general ledger to transaction detail, including feeder system detail) as an audit readiness dealbreaker. As a subset of completeness, proper fiscal year-end cutoff of transaction activity and assurance that appropriation data are recorded to the proper fiscal year are essential to ensuring that the financial statements and the schedules of budgetary activity include all data for the accounting period audited. As previously discussed, the population of transactions for shipments of household goods and military items used in the Marine Corps’ fiscal year 2012 audit contained liquidations (outlays) related to one or more previous fiscal year appropriations. Because the Marine Corps was unable to reconcile its fiscal year 2012 bulk estimated obligations to the related outlays, and outlays recorded to fiscal year 2012 included outlays that were properly chargeable to prior fiscal year appropriations, the populations of obligations and outlays provided to the auditors for sampling and testing were not consistent with the reported scope of its Fiscal Year 2012 General Fund Schedule. Since the other military services also use bulk estimated obligations to fund business processes whose transaction cycles cover multiple fiscal years, the inability to segregate outlays by appropriation fiscal year poses a significant risk to the integrity of their schedules of budgetary activity, particularly with regard to first-year schedules. For example, when bulk estimated obligations liquidate over several fiscal years, identifying a population of transactions that relates to a first- or even second-year schedule of budgetary activity is problematic. This issue poses a significant audit readiness challenge for the other military services’ first-time audits of their schedules of budgetary activity, which have been initiated for fiscal year 2015. The Marine Corps’ fiscal year 2012 audit demonstrated the difficulty of performing a fully substantive audit. For example, when the Marine Corps was unable to provide documentary support for certain transactions, it attempted to rely on (1) data and information generated by DLA systems and processes that support MILSTRIP transactions and (2) information generated by U.S. Transportation Command systems and processes for shipments of military items and household goods.
This is directly contrary to the DOD FIAR Guidance on audit dealbreakers related to DOD feeder systems, which states that substantive testing of transactions to supporting documentation cannot overcome ineffective or missing information technology system controls when transaction evidence is electronic and only maintained within a system or the key supporting evidence is system-generated reports. The other military services and some DOD agencies use these same mission support agencies’ business processes and systems to issue and ship military supplies and equipment and ship household goods, and they make payments (outlays) based on billings generated by these agencies’ business feeder systems. To the extent that the other military services are unable to locate original support for tested transactions, there is a risk that if DOD mission support agencies’ systems and processes are not tested to reasonably assure the reliability of transaction data, the other military services and DOD will experience the same problem as the Marine Corps. Accordingly, DOD’s FIAR Guidance recognizes that for large volumes of transactions, it is more effective and efficient to rely on internal controls, including information system controls, rather than planning to rely fully on substantive testing of larger numbers of sampled transactions for which documentary support must be located and provided to the auditors. Since December 2011, DOD’s FIAR Guidance has stated that DOD mission support agencies are responsible for resolving dealbreakers related to their information systems, processes, and controls and for obtaining SSAE No. 16 examinations. However, because of uncorrected accounting, reporting, and information system weaknesses, the Marine Corps has relied primarily on costly, labor-intensive efforts to locate and provide documentary support for substantive tests of transactions. According to DOD’s November 2014 FIAR Plan Status Report, DLA and U.S. Transportation Command are still in the beginning stages of their audit readiness efforts. As a result, the military services and defense agencies have asserted audit readiness for their fiscal year 2015 schedules of budgetary activity without these mission support agencies having undergone SSAE No. 16 examinations. Until these support agencies’ systems, controls, and processes have been tested and are deemed reliable for financial management reporting and audit purposes, the Marine Corps, the other military services, and the defense agencies that rely on these systems and processes may experience the same challenges we identified in the Marine Corps’ fiscal year 2012 audit with regard to providing support for shipment transactions in audits of their fiscal year 2015 General Fund schedules of budgetary activity. The Marine Corps’ fiscal year 2012 audit identified serious issues regarding the reliability of reported obligations and outlays. These issues relate to effective processes and controls for reasonably assuring (1) proper cutoff of beginning- and end-of-period obligations and outlays and (2) consistency of reported shipment obligations and outlays with activity for the accounting period audited. Because the Marine Corps and other military services record shipment obligations and outlays that occurred during each accounting period to current year appropriations, subsequent research and analysis are required to determine the appropriate fiscal year appropriation to be charged and to make necessary adjustments to both obligations and outlays.
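A simplified, hypothetical illustration of this reassignment problem follows; the transactions and amounts are invented.

```python
# Hypothetical illustration: shipment outlays are first recorded against the
# current year appropriation, and later research moves some to the prior year.
outlays = [
    {"doc": "SHP-1", "amount": 10_000, "recorded_fy": 2012, "proper_fy": 2012},
    {"doc": "SHP-2", "amount": 7_500,  "recorded_fy": 2012, "proper_fy": 2011},
    {"doc": "SHP-3", "amount": 4_250,  "recorded_fy": 2012, "proper_fy": 2011},
]

as_recorded = sum(o["amount"] for o in outlays if o["recorded_fy"] == 2012)
after_research = sum(o["amount"] for o in outlays if o["proper_fy"] == 2012)
print(f"FY 2012 outlays as recorded: ${as_recorded:,}")        # $21,750
print(f"FY 2012 outlays after research: ${after_research:,}")  # $10,000
# A first-year fiscal year 2012 schedule built from the as-recorded population
# would include $11,750 of activity properly chargeable to fiscal year 2011.
```

The longer the research lags the billing, the more such reassignments straddle reporting periods, which is the scope problem described next.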
If the billings are made after the end of the accounting period and research to determine the proper appropriations to be charged extends several months into the next accounting period, first- and second-year schedules of budgetary activity may reflect activity outside the scope of the schedule. To address audit readiness concerns related to shipment obligations and outlays, in September 2013, the DOD Comptroller and the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics), Office for Transportation Policy, established the DOD-wide Transportation Financial Auditability Working Group to facilitate DOD component audit readiness in the department’s freight (military equipment and supplies and materials) and personal property (household goods) process areas. The Working Group is approaching the transportation audit readiness issues in two phases: (1) developing an obligation methodology with enterprise guidance based on FIAR requirements and input from financial management function representatives and (2) achieving overall improvements in transportation processes, systems, and controls. The DOD Comptroller reviewed and approved an obligation methodology to provide direction on establishing policies and procedures for managing transportation transactions funded with bulk estimated obligations. In July 2014, the Transportation Working Group distributed the obligation methodology to the Army, the Navy, the Marine Corps, the Air Force, U.S. Transportation Command, DLA, and DFAS. The obligation methodology was intended to provide a baseline for DOD components, including the military services, DLA, and U.S. Transportation Command, to develop and refine corrective action plans in preparation for the audits of their fiscal year 2015 schedules of budgetary activity. Overall transportation business function improvements focus on long-standing transportation financial issues across DOD that require in-depth process analysis and development of standard processes and procedures across the department. The first six focus areas relate to management of transportation account code usage from obligation to payment. The remaining focus areas, which cover information systems, bill payment and expenditure processes, and key supporting documentation, will begin in fiscal year 2015. Efforts to improve business processes, establish business rules (i.e., policy), and achieve systems integration are expected to be completed in fiscal year 2019 or 2020, to support sustainment of auditability. While these are important efforts, until DOD components and service agencies implement effective processes and controls to ensure that shipment obligations and outlays are recorded to the proper fiscal years, they will face significant challenges in audits of their schedules of budgetary activity and, ultimately, their SBRs. The unqualified opinion on the Marine Corps’ Fiscal Year 2012 General Fund Schedule initially reported by the DOD OIG was not supported by sufficient audit procedures or sufficient, appropriate audit evidence.
Specifically, the OIG did not (1) perform sufficient procedures to determine the completeness of transactions reported on the Marine Corps’ Fiscal Year 2012 General Fund Schedule, (2) perform sufficient procedures to determine the reliability of certain evidence used to support transactions in the Marine Corps’ Schedule, (3) perform sufficient procedures to determine whether budgetary activity was recorded in the proper period and whether shipment obligations were properly recorded, and (4) properly consider and evaluate the audit evidence in concluding and reporting on the results of the audit. As a result, the OIG did not obtain sufficient, appropriate evidence to support the reported audit opinion. Further, the DOD OIG lacked policies and procedures for resolving disagreements between the audit team and OIG management and for documenting the basis for resolving such disagreements. The OIG withdrew its opinion on the Marine Corps’ Fiscal Year 2012 General Fund Schedule because of issues identified in the audit of the Marine Corps’ Fiscal Year 2014 General Fund Schedule that raised questions concerning the completeness of transactions in the Fiscal Year 2012 Schedule on which its opinion was based. At that time, the OIG indicated that once additional information had been gathered and analyzed, it would revisit the fiscal year 2012 audit opinion in light of that analysis and reissue it. In commenting on our report, the OIG stated that it would consider all relevant information, including the findings and recommendations in our report and the findings of the four ongoing audits of suspense accounts as well as a report from the OIG’s Quality and Standards Office, before deciding whether to reissue an opinion on the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The Marine Corps has made limited progress in resolving uncorrected financial management weaknesses, and inadequate risk management efforts will likely pose continuing challenges to its auditability. Moreover, the concerns identified with the Marine Corps audit also pose significant challenges to DOD-wide audits because the other military services and DOD components rely on many of the same supporting agencies’ business processes and feeder systems to carry out their missions and operations. For example, unless DOD and the military services can provide assurance of (1) completeness of general ledger data and the populations of budgetary transactions used in audit testing, along with proper cutoff and reporting of transactions to the appropriate fiscal year; (2) reliability of financial data generated by DOD agencies’ business processes and systems; and (3) proper recording of obligations and outlays, they will be unable to generate auditable schedules of budgetary activity and, ultimately, auditable sets of financial statements. The ultimate goal of financial audits is to provide accountability over DOD’s vast resources along with reliable information to support management decisions on DOD’s missions and operations. Achieving a clean audit opinion would be a normal outcome of sound financial management systems, processes, and controls.
To improve the quality of DOD’s financial statement audits and ensure that corrective actions to address audit recommendations are fully and effectively implemented prior to their closure, we are making the following three recommendations to the Department of Defense Inspector General:
In addition to analyzing additional information related to the withdrawal of the auditor’s opinion on the Marine Corps’ Fiscal Year 2012 General Fund Schedule, reconsider the conclusions made in the OIG’s initial audit report based on the findings in our report before determining whether the auditor’s opinion should be reissued or revised, or whether additional work should be performed.
Develop and document a quality assurance process for elevating disagreements between the audit team and OIG management to ensure appropriate, objective resolution of the disagreements.
Ensure that Marine Corps corrective actions fully address audit recommendations and document auditor review of the actions taken before closing the related recommendations.
We provided a draft of this report to the DOD OIG, the Marine Corps, and the Office of the DOD Comptroller. We received written comments from each of these entities, which are reprinted in appendixes II through IV, respectively. We summarize and evaluate the OIG’s, Marine Corps’, and Office of the DOD Comptroller’s comments below, and we provide detailed responses to the OIG’s comments following the comment letter in appendix II. We made technical corrections and clarifications in the body of our report, where appropriate. In commenting on our report, the DOD OIG agreed with our three recommendations directed to it but generally disagreed with our findings that the OIG did not perform sufficient procedures, under professional standards, and consequently did not obtain sufficient, appropriate audit evidence to support its audit opinion on the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The OIG stated that it believed its report was supported when it was issued on December 20, 2013. The OIG provided comments on (1) the use of professional judgment, (2) completeness of transactions, (3) reliability of evidence, (4) cutoff testing, (5) reliability of recorded obligations, (6) materiality and audit conclusions, and (7) resolution of differences within the audit team. The OIG also commented on our oversight of the Marine Corps’ fiscal years 2012 through 2014 audits. During our review of the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule, we had numerous discussions with the OIG, beginning at the end of February 2013, regarding the key areas discussed in our report. In drafting our report, we carefully considered the responses to our concerns that the OIG provided during these discussions. Such OIG responses were generally consistent with the OIG’s written comments on our draft report. Accordingly, the OIG’s comments do not raise issues that we had not already considered and appropriately addressed in our work. Further, our findings are consistent with the requirements in professional auditing standards cited in our report. In addition, the OIG referred, in several places, to additional procedures applied in the audits of the Marine Corps’ Fiscal Years 2013 and 2014 General Fund Schedules. The OIG stated that certain audit testing in subsequent audits was expanded to address GAO concerns.
We understand that the results of subsequent, expanded audits may provide additional insights into risks and the extent of any misstatements that may exist in the key areas discussed in our report. However, our findings in this report are focused on the adequacy of audit procedures applied and documented as part of the OIG’s audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The OIG commented that auditing standards recognize that the auditor needs to make professional judgments throughout the audit. We acknowledge that auditing standards recognize the need for professional judgment in conducting an audit. However, auditing standards also include requirements that the auditor needs to fulfill in order to comply with such standards. Auditor requirements in the standards are clearly denoted with the terms “must,” “is required to,” and “should.” Our report includes references to the relevant requirements in auditing standards and the basis for our determination that in certain key audit areas, the OIG did not perform sufficient procedures, under such standards, and consequently did not obtain sufficient, appropriate audit evidence to support its audit opinion on the Marine Corps’ Fiscal Year 2012 General Fund Schedule. Specifically, we found that the OIG did not perform sufficient procedures to determine the completeness of transactions reported on the Marine Corps’ Schedule, perform sufficient procedures to determine the reliability of certain evidence used to support transactions included on the Schedule, perform sufficient procedures to determine whether budgetary activity was recorded in the proper period and shipment obligations were properly recorded, and properly consider and evaluate the audit evidence in concluding and reporting on the results of the audit. As stated in our report, had sufficient audit procedures been performed in key areas of concern that we identified, additional misstatements may have been identified that, when aggregated with already identified misstatements, could be material to the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The OIG stated that in its professional judgment, it reduced the risk of material misstatement related to completeness of outlays and obligations to an acceptable level. In our report, we noted several areas where, in our view, there is a high risk of material misstatement related to completeness of outlays and obligations and provided the supporting reasons (e.g., ineffective processes and controls, material amounts involved, and known prior misstatements). As noted in our report, auditing standards require that the auditor design and perform audit procedures to reduce the risk of material misstatement to an acceptably low level. Also, such standards require that the auditor (1) assess the risk of material misstatement at the relevant assertion level and (2) perform substantive procedures for all relevant assertions related to material classes of transactions, account balances, and disclosures to determine whether there is evidence of any material misstatements. Auditing standards further state that existence and completeness are always relevant assertions.
We found that the OIG did not perform sufficient procedures to determine whether (1) material amounts of fiscal year 2012 obligations and outlays were improperly charged to fiscal year 2011 and prior appropriations and (2) all nonpayroll feeder system transactions (representing about half of the reported fiscal year 2012 budgetary activity) were properly included in the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The OIG also mentioned that the March 23, 2015, withdrawal of its unqualified opinion report on the Marine Corps’ Fiscal Year 2012 General Fund Schedule was not related to the completeness concerns discussed in our report. However, our concerns related to the risk that the Marine Corps’ Fiscal Year 2012 General Fund Schedule did not include all transactions that should have been included, which encompasses the risk that suspense account transactions were not appropriately included in the Schedule. In response to our finding that the OIG did not perform sufficient procedures to determine the reliability of certain evidence used to support transactions in the Marine Corps’ Fiscal Year 2012 General Fund Schedule, the OIG stated that it believes that the audit evidence used to test the Schedule was appropriate and permissible under the auditing standards. As discussed in our report, auditing standards require that in examining evidence supporting a transaction, the auditor should consider the reliability of the information used as audit evidence, such as electronic documents, including consideration of controls over its preparation and maintenance, where relevant. Such consideration would normally include any information that raises doubts about the reliability of the evidence. If the auditor has doubts about the reliability of information to be used as audit evidence or is aware of problems with the reliability of the data, the auditor should determine what modifications or additions to the audit procedures are necessary to resolve the issues. Also, when the auditor uses entity-produced information in performing audit testing or procedures to support audit testing, auditing standards require that the auditor obtain evidence about the accuracy and completeness of the information, for example, by performing procedures to determine whether the related controls over the data are effective. As noted in our report, the auditors did not document their consideration of the reliability of the audit evidence provided by other DOD agencies, although there was evidence that should have raised doubt about the reliability of the audit evidence. In addition, the auditors relied on support produced by certain Marine Corps systems without obtaining sufficient evidence about the accuracy and completeness of this information. The OIG commented that it believed that the cutoff testing performed on outlays was both sufficient and in accordance with auditing standards. While the OIG comments described certain cutoff tests that were performed, the OIG, as discussed in our report, did not (1) sufficiently document its assessment of the risk of material misstatement related to cutoff, (2) perform sufficient cutoff testing procedures with respect to certain risks (e.g., fiscal year 2012 appropriation transactions that may be inappropriately recorded as fiscal year 2011 transactions), and (3) perform sufficient cutoff testing procedures with respect to certain types of transactions (e.g., transactions with known long transaction cycles).
Consequently, there may be misstatements related to cutoff that would not have been detected by the OIG’s audit procedures. As noted above, auditing standards require that the auditor design and perform audit procedures to reduce the risk of material misstatement to an acceptably low level. The OIG also stated that the risk of material misstatement related to cutoff was low, based on the results of the audits of the Marine Corps’ fiscal years 2011, 2012, and 2013 (first quarter) budgetary activity. Further, the OIG stated that additional procedures performed during the fiscal years 2013 and 2014 audits did not indicate there was a high risk of material misstatement. In our view, there was a high risk of material misstatement related to cutoff for the reasons included in our report and the fact that cutoff testing was not performed in prior year audits. While we agree that subsequent audits may provide additional information for understanding the risk of material misstatement related to cutoff, we believe that certain cutoff risks were not adequately addressed during the fiscal year 2012 audit. Also, we do not believe that the documentation adequately addressed the auditor’s assessment of the risk of material misstatement, including any other considerations beyond the information documented in the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The OIG commented that it believes that sufficient audit procedures were performed to determine whether the accounting estimate for transportation shipments was reasonable in the context of the Schedule taken as a whole. As discussed in our report, the audit documentation showed that the OIG had identified several audit risks associated with the Marine Corps’ accounting for shipment transactions. For example, the Marine Corps (1) did not have sufficient documentation available to support its multiple obligation processes for shipment transactions and (2) was unable to match the liquidations (outlays) with corresponding obligations. The audit documentation also showed that the OIG had attempted to perform substantive detail testing of the Marine Corps’ shipment obligations; however, the Marine Corps was unable to provide support for $231.5 million of its reported $529.5 million in fiscal year 2012 shipment obligations. As stated in our report, auditing standards identify procedures that the auditor may consider when reviewing and testing the process used to develop management’s estimates, including controls over the process and the relevance, reliability, and sufficiency of historical data used in the estimate. The OIG commented that it had performed four of the nine procedures enumerated in the auditing standards. In addition, the auditing standards state that the auditor’s objective is to obtain sufficient, appropriate evidence to provide reasonable assurance that the accounting estimates are reasonable in the circumstances. In assessing the reasonableness of an estimate, auditing standards state that the auditor normally concentrates on key factors and assumptions that include sensitivity to variations, deviations from historical patterns, susceptibility to misstatements and bias, and the entity’s historical experience related to the reliability of prior year estimates. 
As stated in our report, the audit documentation did not contain evidence that the OIG sufficiently performed certain other procedures enumerated in the auditing standards that we believe are important related to (1) identifying whether there were controls over the preparation of the accounting estimates and supporting data that may be useful in the evaluation and (2) considering whether sources of data and factors that management used in forming the assumptions were relevant, reliable, and sufficient for the purpose of determining the estimates based on information gathered in other audit tests. The OIG stated that it believes the results of the audit work were properly considered and that it appropriately evaluated the audit evidence in accordance with all applicable auditing standards to conclude and report on the results of the audit of Marine Corps’ Fiscal Year 2012 General Fund Schedule. The OIG stated that its calculation of misstatements related to errors and untested amounts totaled approximately $773 million. The OIG also stated that all known misstatements or known risk factors were appropriately considered. The OIG stated that even if it included the $35.8 million that we reported related to unsupported contract payment transactions, the revised misstatements would total approximately $808.8 million, which is still below the overall materiality threshold of $826 million that the OIG had established for the audit. As discussed in our report, auditing standards state that in evaluating whether the financial statements are presented fairly, in all material respects, in conformity with GAAP, the auditor must consider the effects, both individually and in the aggregate, of misstatements (both known and likely) that are not corrected by the entity. At the conclusion of the audit, the auditor accumulates identified misstatements and considers whether such misstatements are material to the entity’s financial statements. Auditing standards further state that as the aggregate misstatement approaches materiality, the risk that the financial statements could be materially misstated also increases; consequently, the auditor should consider the effect of undetected misstatements, in concluding on whether the financial statements are fairly stated. Because the OIG’s previously noted calculation of misstatements totaling $773 million represents nearly 94 percent of its $826 million materiality threshold for the audit, in accordance with auditing standards, the OIG should have determined an amount for undetected misstatements and included this amount in its materiality calculation for concluding on the results of the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule. However, the OIG did not do so. Further, had sufficient audit procedures been performed in the key areas of concern that we identified, additional misstatements may have been identified that, when aggregated with already identified misstatements, could be material to the Marine Corps’ Fiscal Year 2012 General Fund Schedule. Consequently, in the absence of such additional procedures, we do not believe that the OIG obtained sufficient, appropriate evidence to reduce the risk of material misstatement to an appropriately low level. The OIG agreed with our recommendation that it develop and document a quality assurance process for elevating disagreements between the audit team and OIG management to ensure appropriate, objective resolution of the disagreements. 
The OIG also stated that it was developing a formalized process for elevating such disagreements. The OIG commented that we did not always provide timely input on the results of our oversight of the OIG’s audits of the Marine Corps’ Fiscal Years 2012 and 2013 General Fund Schedules and that the OIG was encouraged by the interaction that took place between GAO and the OIG as part of the audit of the Marine Corps’ Fiscal Year 2014 General Fund Schedule. For the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule, we provided comments to the OIG as we identified issues and concerns about its audit. For example, on May 1, 2013, when the OIG was in the process of concluding on the fiscal year 2012 audit results, we informed the OIG that audit procedures were not performed to test cutoff and that cutoff is a key assertion that must be tested to provide audit evidence related to the completeness of transactions included in financial statements for the period audited. On May 30, 2013, the OIG made a decision to include cutoff as one of the additional areas it planned to test in its audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule. In addition, as the OIG has noted, audit testing was expanded in subsequent audits based on the concerns we identified with the fiscal year 2012 Marine Corps audit. The Marine Corps agreed overall with our discussion of actions needed on the issues related to the audit of its fiscal year 2012 General Fund Schedule. However, the Marine Corps did not agree with certain findings with respect to (1) support for certain audit sample items and (2) progress in addressing audit recommendations. We acknowledge the Marine Corps’ continuing efforts to improve accountability over its financial management systems and internal controls. The Marine Corps stated that although the OIG may have deliberated with it on requiring an additional cutoff sample of 334 outlay transactions, the Marine Corps was not issued the additional samples and was not asked to provide supporting documentation. The discussion in our report is supported by the OIG’s audit documentation and by discussions with the auditors. Our review of the OIG’s audit documentation found that on September 6, 2013, the OIG e-mailed two separate statistical samples for cutoff testing of obligations and outlays to the Marine Corps and requested that the Marine Corps provide the requested supporting documentation by close of business on September 13, 2013. The audit documentation shows that the Marine Corps responded to the obligation sample. However, OIG auditors told us that Marine Corps officials advised them that they could not respond to the request for additional fiscal year 2012 outlay samples because Marine Corps staff were responding to samples for the fiscal year 2013 Marine Corps audit, and sufficient staff were not available to respond to samples from both audits. The Marine Corps acknowledged that much work remains to fully mitigate its internal control weaknesses. However, the Marine Corps commented that it does not agree with our assertion that significant, uncorrected control weaknesses continue to impair the Marine Corps’ ability to produce consistent, reliable, and sustainable financial information for day-to-day decision making on its missions and operations.
The objective of internal control is to provide reasonable assurance of (1) the effectiveness and efficiency of the entity’s operations, (2) the reliability of financial reporting, and (3) compliance with applicable laws and regulations. An operating environment with significant, uncorrected weaknesses in internal controls lacks this assurance. In addition, the Marine Corps’ material weaknesses in internal control, as reported by the OIG, include (1) financial management systems that do not comply with FFMIA requirements related to compliance with GAAP for federal government entities and the USSGL and (2) ineffective financial management oversight with regard to identifying and correcting accounting errors. The existence of such material weaknesses demonstrates that the Marine Corps does not have reasonable assurance of the reliability of its financial management operations. Further, the Marine Corps stated that in addition to the 11 accounting and financial reporting recommendations that were closed by the OIG, it had remediated an additional 17 accounting and financial reporting recommendations and was awaiting validation testing from the OIG or an audit firm. The Marine Corps also stated that based on reinforced coordination with its information technology stakeholders and testing through the completion of the audit of its Fiscal Year 2014 Schedule, 94 of 95 information technology system recommendations were remediated. We have not assessed the corrective actions taken subsequent to the December 20, 2013, issuance of the audit report on the Marine Corps’ Fiscal Year 2012 General Fund Schedule and our update in August 2014. The Office of the DOD Comptroller generally agreed with the findings in our report related to DOD-wide audit readiness implications and summarized efforts that are planned or under way to test controls over business processes and financial-related systems to help ensure the reliability of data used for DOD financial audits. However, the Office of the DOD Comptroller stated that our report does not recognize many of the corrections and improvements made by the Marine Corps or the value of lessons learned from the Marine Corps audits. We acknowledge DOD’s continuing efforts to become audit ready. Our report includes several examples in which the DOD Comptroller and its FIAR Team had developed appropriate audit readiness guidance several years ago to help DOD components and mission support agencies, such as DLA, effectively respond to requirements under professional auditing standards in their audit readiness efforts. Our report also states that certain DOD components, such as the mission support agencies, have not followed the FIAR Guidance regarding audit readiness timelines for supporting DOD components with regard to assuring that their own processes, systems, and controls are effective and can be relied on to support their DOD customers’ audits. To the extent that the other DOD military services and DOD agencies rely on these support agencies, they are likely to experience challenges similar to those the Marine Corps experienced with regard to having reliable information for decision making on their missions and operations and achieving auditability of their budgetary information. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Defense; the DOD Inspector General; the Under Secretary of Defense (Acquisition, Technology and Logistics); the Under Secretary of Defense (Comptroller)/Chief Financial Officer; the Deputy Chief Financial Officer; the Under Secretary of Defense (Personnel and Readiness); the Director of the Defense Finance and Accounting Service; the Director for Financial Improvement and Audit Readiness; the FIAR Governance Board; the Assistant Secretaries (Financial Management and Comptroller) of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Director of the Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9869 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. Our objectives were to (1) determine the extent to which the audit was performed in accordance with professional auditing standards; (2) analyze the status of the Marine Corps’ actions to address identified accounting, financial reporting, and information technology system control weaknesses; and (3) identify any implications to the Department of Defense (DOD) based on the Marine Corps’ Fiscal Year 2012 General Fund Schedule of Budgetary Activity (General Fund Schedule) audit results. To address our first objective, we analyzed auditor documentation, test results, and conclusions to determine the extent to which the work complied with professional auditing standards. As our criteria, we used professional audit standards issued by the American Institute of Certified Public Accountants, which are consistent with generally accepted government auditing standards, and considered additional guidance in the GAO/President’s Council on Integrity and Efficiency Financial Audit Manual. We followed the guidance in Section 650 of the Financial Audit Manual for relying on the work of others. We reviewed the Office of Inspector General (OIG) audit contracts and statements of work, as well as the Marine Corps’ management representation letters, which contain assertions about the reliability of its financial reporting in accordance with generally accepted accounting principles, for the audits of the Marine Corps’ Fiscal Year 2012 General Fund Schedule and its Fiscal Years 2011 and 2010 General Fund Statements of Budgetary Resources. In addition, we reviewed the OIG Marine Corps auditor reports, including the audit opinions and Reports on Internal Control and Compliance with Laws and Regulations, as well as the auditor’s reports to Marine Corps management that included detailed findings and recommendations and the Marine Corps’ responses to those reports. We also reviewed the audit documentation related to planning, executing, concluding, and reporting on the audit. We retested selected auditor sample items for significant classes of transactions, such as civilian and military payroll, unpaid obligations related to undelivered orders and delivered orders, and outlays (payments or liquidations of the orders received), to determine whether we agreed with the auditors’ conclusions on tests of those sample items.
Throughout our audit, we discussed the concerns we identified regarding the conduct of the audit with OIG and independent public accounting firm auditors, including concerns about (1) the completeness of reported budgetary transactions, (2) the reliability of data generated by DOD feeder systems, (3) proper fiscal year cutoff and the reliability of reported shipment obligations, and (4) the auditors’ conclusions on the audit as well as the basis for auditor judgments made during the audit. To analyze the status of the Marine Corps’ actions to address audit recommendations on identified accounting, financial reporting, and information technology system control weaknesses, we used federal internal control standards as our criteria. We assessed the status of the Marine Corps’ corrective actions on recommendations from the Marine Corps’ fiscal years 2010 through 2012 audits. We met with Marine Corps officials to discuss corrective action plans and actions completed and under way as well as their process for monitoring corrective actions. We reviewed auditor support for closed recommendations to determine whether (1) the corrective actions had been appropriately designed to address reported weaknesses and (2) the documentation on closed recommendations confirmed that actions to address them had been completed. To identify any DOD-wide implications of the Marine Corps’ Fiscal Year 2012 General Fund Schedule audit results, we considered our findings with regard to the conduct of the Marine Corps audit and the status of Marine Corps actions to address auditor recommendations, as well as November 2014 Financial Improvement and Audit Readiness (FIAR) Plan Status Report information on the status of DOD military service and DOD mission support agency audit readiness efforts. We gave particular consideration to audit readiness issues we identified with regard to assuring the (1) completeness of populations and proper cutoff, (2) reliability of financial data and information generated by DOD business processes and feeder systems, and (3) reliability of reported obligations and outlays. We considered whether DOD agencies and the other military services relied on many of the same systems, processes, and controls as the Marine Corps and would be likely to experience similar issues in their audits. We conducted this performance audit from July 2012 through July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 1. Material misstatements. The OIG stated that we did not identify any material misstatements that were excluded from its conclusions on the audit. It was not our objective to audit the Marine Corps’ Fiscal Year 2012 General Fund Schedule. Consequently, we did not perform audit tests to determine whether material misstatements occurred. As stated in our report, the OIG did not perform sufficient audit procedures under professional standards and consequently did not obtain sufficient, appropriate evidence to support its opinion on the Marine Corps’ Fiscal Year 2012 General Fund Schedule.
Had sufficient audit procedures been performed in key areas of concern that we identified, additional misstatements may have been identified that, when aggregated with the already identified misstatements, could be material to the Marine Corps’ Fiscal Year 2012 General Fund Schedule. 2. Rejected transactions. The OIG stated that figure 3 in our draft report indicated that rejected transactions were removed from the Standard Accounting, Budgeting and Reporting System (SABRS) with no process to eventually include corrected transactions in SABRS. Because figure 3 depicts feeder system data flow, we revised the arrow related to the flow of rejected transactions to show that, if handled correctly, the rejected transactions would be corrected and entered into SABRS. However, as discussed in our report, the OIG did not perform sufficient procedures to reasonably assure that rejected transactions were properly resolved and entered into SABRS before closing a related audit recommendation. 3. Reconciliation of SABRS to Fund Balance with Treasury. The OIG stated that we expressed concern that it did not complete a full comparison of fiscal year SABRS transaction activity to the Marine Corps’ fiscal year 2012 Fund Balance with Treasury reconciliation. The OIG stated that such a comparison is an acceptable procedure for gaining assurance of completeness but is not a required audit procedure. We referred to such testing as an example of one of the types of audit procedures that may be performed to determine whether recorded transactions are complete. The OIG also stated that it had traced selected transactions to the reconciliation. However, as stated in our report, these procedures would not be effective for testing the completeness of transactions recorded in SABRS because they begin with items that are already recorded in SABRS. 4. Fiscal year 2012 activity recorded to fiscal year 2011 appropriations. The OIG stated that our example of $3.8 billion in Marine Corps fiscal year 2012 outlays that were recorded to fiscal year 2011 appropriations, as reported by the Department of the Treasury, overstated the risk to the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The OIG stated that the Marine Corps fiscal year 2012 outlay activity would include charges to 1-year appropriations as well as multiyear appropriations. We specifically excluded multiyear appropriations in calculating the $3.8 billion amount in our example. We included this example in our report because it illustrates that the amount of such transactions charged to prior year appropriations was material; the $3.8 billion far exceeds the OIG’s $826 million materiality threshold for the audit. As stated in our report, we believe the risk of material misstatement to the Marine Corps’ Fiscal Year 2012 General Fund Schedule related to transactions recorded in fiscal year 2012 to prior year appropriations that should have been charged to fiscal year 2012 appropriations is high, based on numerous reported Marine Corps weaknesses in controls over accounting and financial reporting and the magnitude of fiscal year 2012 Marine Corps outlays that were recorded to prior fiscal year appropriations. Accordingly, testing of such transactions was necessary to determine whether there were any material misstatements. In addition, the OIG stated that the audit of the Marine Corps’ Fiscal Year 2012 Schedule appropriately excluded fiscal year 2012 transactions recorded to fiscal year 2011 because the Schedule only included current year appropriations.
However, the scope of a first-year audit of a schedule of budgetary activity would appropriately include a determination of whether transactions related to current fiscal year appropriations were improperly charged to prior year appropriations and, therefore, improperly excluded from the schedule. 5. Consideration of DOD agencies as third parties. The OIG stated that the auditing standards permit the use of both internal and external evidence and state that evidence from a knowledgeable source that is independent is generally more reliable than evidence obtained only from internal sources. Further, the OIG stated that based on its audit approach, it does not consider information obtained from the Defense Logistics Agency (DLA) and U.S. Transportation Command to be internal evidence. Instead, the OIG considered these DOD agencies to be third parties with respect to the Marine Corps. As stated in our report, in examining evidence supporting a transaction, the auditor should consider the reliability of the information used as audit evidence, such as electronic documents, including consideration of controls over its preparation and maintenance where relevant. Such consideration would normally include any information that raises doubts about the reliability of the evidence. If the auditor has doubts about the reliability of information to be used as audit evidence or is aware of issues with the reliability of the data, the auditor should determine what modifications or additions to the audit procedures are necessary to resolve the issues. Also, as discussed in our report, there were well-known, documented issues that should have raised significant doubts about the reliability of the data from DLA and U.S. Transportation Command systems and processes that the OIG relied on in its transaction testing for the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule. 6. Military Standard Requisitioning and Issue Procedures (MILSTRIP) material weakness. The OIG stated that although it agrees that there are weaknesses surrounding MILSTRIP processes, DOD’s fiscal year 2012 Agency Financial Report does not conclude that the data within the system are unreliable and that the reported weaknesses would not prevent the auditors from using the MILSTRIP information to complete the audit tests. We disagree. As discussed in our report, DOD reported DLA’s MILSTRIP process as a department-wide material weakness, stating that the department could not effectively account for transactions and balances in the MILSTRIP orders process. Because this and other factors should have raised doubts about the reliability of MILSTRIP process data, the OIG should have determined what modifications or additions to the audit procedures were necessary to resolve the issues. 7. Relevance of OIG report on Defense Enterprise Accounting and Management System (DEAMS). The OIG stated that auditing standards do not require a Statement on Standards for Attestation Engagements (SSAE) No. 16 examination of system information in order for the results to be used to corroborate data from another entity and that the Marine Corps did not rely solely on DEAMS for its financial statement reporting. However, the concern raised in our report was that the OIG used information from DEAMS as audit evidence even though DEAMS had known data reliability issues.
As discussed above, if there are doubts about the reliability of information to be used in audit testing, auditors should determine what modifications or additions are needed to the audit procedures to resolve the issues. 8. Relevance of disclaimer on DOD financial statements. The OIG stated that although it issued a disclaimer on DOD’s department-wide financial statements for fiscal year 2012, its audit effort on the department-wide statements did not include any tests of DEAMS or MILSTRIP data that were used to corroborate the Marine Corps transactions. The OIG stated that as a result, there was no direct connection between the results of the DOD department-wide financial statement audit report and the audit of the Marine Corps’ Fiscal Year 2012 General Fund Schedule. As discussed in our report, in disclaiming an opinion on DOD’s department-wide financial statements for fiscal year 2012, the OIG reported that DOD financial management and business feeder systems were unable to adequately support material amounts on the financial statements as of September 30, 2012. The well-known, documented issues related to these systems should have raised significant doubts about the reliability of the data used in testing, and the OIG should have determined what modifications or additions were needed to the audit procedures to resolve the issues. 9. Reallocation of shipment outlays. The OIG stated that it agrees with us that some transactions may be recorded in the wrong period, although the Marine Corps did not report and the OIG did not identify any material instances where the Marine Corps recorded transactions in an improper period. As discussed in our report, the OIG’s audit documentation did not include evidence that the OIG performed any procedures to (1) test the accuracy of the Marine Corps’ allocation of fiscal year 2012 shipment billings to previous fiscal year appropriations or (2) confirm that the related adjustments were recorded to ensure that the portion of the outlays that pertained to previous fiscal year appropriations, and, in some cases, to other military services, was excluded from the outlays reported on the Marine Corps’ Fiscal Year 2012 General Fund Schedule. The OIG also stated that our draft report was misleading regarding the discussion of $21 million of fiscal year 2012 shipment billings the Marine Corps was analyzing in January 2013 to determine the extent of adjustments needed to the Marine Corps’ reported fiscal year 2012 outlays. The OIG stated that our auditors were present during a series of meetings to assess this situation. The meetings the OIG referred to were held in November and December 2014, after the OIG had issued its opinion on the Marine Corps’ Fiscal Year 2012 General Fund Schedule. 10. Cutoff control testing on outlays. The OIG stated that it was able to resolve 7 transactions that its initial testing had determined were exceptions (errors) and that the other 14 transactions were supported by evidence obtained from DLA, an agency external to the Marine Corps. We revisited Marine Corps documentation that was available for 18 of the 21 transactions and determined that the additional support was sufficient for 6 of the 18 transactions. We revised the discussion in our report accordingly. However, because support for the other 12 transactions was not sufficient, we continue to believe that controls over cutoff for outlays were not effective and that the OIG should have performed substantive detail tests of cutoff for outlays.
11. Adjustments to progress payment transactions. The OIG stated that while the Marine Corps may not always properly record certain progress payment transactions, the OIG obtained evidence that an outlay occurred related to a valid obligation. The OIG stated its position that for purposes of the Marine Corps’ Fiscal Year 2012 General Fund Schedule, if support for progress payment outlays could not be obtained, adjusting the outlay transaction to an advance payment would have no net effect on the Marine Corps’ schedule. The OIG stated that it considered such occurrences to be a compliance issue. However, as stated in our report, the audit documentation showed that the audit team had initially determined that it could not conclude on the accuracy of sampled contract outlay transactions for which there was no support that the goods and services paid for were received. More specifically, the audit documentation showed that the audit team could not determine the validity of certain progress payment obligations because the contract information provided to them by the Marine Corps did not contain sufficient detail to make such a determination. Further, the audit documentation showed that the tested contractor invoices were related to progress payments and that the audit team had determined that progress payments should not be recorded as advances. The audit team planned to include the unsupported contract obligations and outlays in its overall calculation of misstatements. The audit documentation also showed that OIG management subsequently made an assumption that the unsupported outlay transactions could be adjusted and reported as advance payments to avoid counting the amounts as untested. As stated in our report, the audit documentation did not include a reconciliation or explanation for such conflicting statements between OIG management and the audit team. 12. Quantitative Methods Division (QMD) certification. The OIG commented that it disagreed with the discussion in our report regarding QMD’s certification of statistical sampling and stated that although QMD expressed some concern with the statistical methods used by the audit firm, QMD confirmed that the statistical projections were calculated accurately and signed the certification. As stated in our report, we reviewed the documentation on QMD’s certification and held discussions with QMD statisticians regarding why they added a note that qualified their certification. Specifically, the note stated that QMD expresses no opinion as to the application of results with respect to the evaluation of the sample results against materiality. QMD officials told us that they qualified their certification because the auditors mixed two methods for making statistical estimates, QMD was not included in the materiality assessment process, and, as a result, they did not know the basis for the auditor judgments that were made. QMD officials also told us that this was unusual and that they are generally included in auditor assessments of materiality to help the auditors interpret sampling results. In addition to the contact named above, Robert F. Dacey (Chief Accountant), Gayle L. Fischer (Assistant Director), Richard Mayfield (Auditor-in-Charge), Michael Bingham, Gloria Cano, Jeremy Choi, Francine DelVecchio, Doreen Eng, Donald D.
Holzinger, Pierre Kamga, Jason Kelly, Jason Kirwan, Richard Larsen, Gregory Marchand (Assistant General Counsel), Quang Nguyen, Brian Paige, Heather Rasmussen, Robert Sharpe, Eric Stalcup, and Ivy Wu made key contributions to this report.
After being identified in August 2009 as the pilot military service for an audit of its Statement of Budgetary Resources (SBR), the Marine Corps received disclaimers of opinion on its fiscal year 2010 and 2011 SBRs. Because of difficulties in locating supporting documents for prior fiscal years, in June 2012, DOD leadership decided that the Marine Corps would prepare and subject to audit a Schedule of Budgetary Activity that would include only current year activity on fiscal year 2012 appropriations. In December 2013, the DOD OIG issued an unqualified opinion on the Schedule. GAO was asked to assess the 2012 audit results. GAO (1) determined the extent to which the OIG's audit met professional standards, (2) analyzed the status of Marine Corps actions on recommendations, and (3) identified any DOD-wide implications from the audit. GAO reviewed auditor documentation, re-performed certain tests, evaluated Marine Corps corrective action plans and statuses, and determined whether other military services and DOD would likely encounter similar issues. GAO met with DOD OIG auditors and Marine Corps and DOD Comptroller officials. GAO found that in certain key audit areas, the DOD OIG did not perform sufficient procedures under professional standards and consequently did not obtain sufficient, appropriate audit evidence to support the audit opinion on the Marine Corps' Fiscal Year 2012 Schedule of Budgetary Activity (Schedule). GAO found that the OIG did not perform sufficient procedures to determine (1) the completeness of transactions reported on the Schedule, (2) the reliability of certain evidence used to support transactions included on the Schedule, and (3) whether budgetary activity was recorded in the proper period and shipment obligations were properly recorded. In addition, the OIG did not properly consider and evaluate the audit evidence in concluding and reporting on the results of the audit. For example, about half of the Marine Corps' reported fiscal year 2012 budgetary activity originated in non-payroll feeder systems. However, the OIG did not perform sufficient procedures to determine the completeness of the data transferred to the general ledger from the non-payroll feeder systems, although the OIG had reported control weaknesses over feeder system transfers in the 2 prior year audits, weaknesses that the Marine Corps had not yet fully addressed. Also, the OIG did not perform sufficient procedures to determine the reliability of data in certain feeder systems that were used as support when the Marine Corps could not locate or provide original support for some of the OIG's sampled transactions. The OIG stated that certain audit testing in subsequent audits was expanded to address GAO's concerns. On March 23, 2015, the OIG withdrew its fiscal year 2012 audit report, stating that facts identified in the audit of the Marine Corps' fiscal year 2014 Schedule raised questions about the completeness of information on which the 2012 opinion was based. The OIG has indicated that once additional information has been gathered and analyzed, it will revisit its fiscal year 2012 audit opinion in light of its analysis and determine whether the report should be reissued. GAO also found that the Marine Corps had made limited progress in addressing auditor recommendations since the audit of its fiscal year 2010 SBR. For example, as of December 2013, the Marine Corps had not completed action on 130 of the 177 OIG recommendations.
In commenting on GAO's report, the Marine Corps noted that it has subsequently remediated numerous recommendations. GAO has not assessed these subsequent corrective actions. GAO identified DOD-wide implications from the Marine Corps audit related to challenges in assuring the (1) completeness of budgetary transactions, (2) reliability of data generated by DOD agencies' business processes and systems, and (3) proper fiscal year recording of obligations and outlays. Actions to address these challenges will help ensure the reliability of DOD component agencies' financial information; however, until such actions are complete, DOD and its component agencies likely will continue to face significant challenges in having reliable budgetary information for decision making on DOD missions and operations and achieving auditability of their budgetary information. GAO makes three recommendations related to the quality of DOD OIG audits. The OIG agreed with GAO's recommendations, but disagreed with many of its findings; the Marine Corps disagreed with certain findings; and the Office of the DOD Comptroller generally agreed with GAO's findings on the DOD-wide audit readiness implications from GAO's work. GAO acknowledges DOD's continuing efforts to become audit ready. GAO maintains that its findings are accurate.
The H-1B program was created by the Immigration Act of 1990, which amended the Immigration and Nationality Act (INA). The H-1B visa category enables U.S. employers to hire temporary workers as needed in specialty occupations, that is, occupations that require theoretical and practical application of a body of highly specialized knowledge. A specialty occupation also requires a bachelor’s or higher degree (or its equivalent) in the specific specialty as a minimum requirement for entry into the occupation in the United States. The Immigration Act of 1990 capped the number of H-1B visas at 65,000 per fiscal year. Since the creation of the H-1B program, the number of H-1B visas permitted each fiscal year has changed several times. Congress passed the American Competitiveness and Workforce Improvement Act of 1998 (ACWIA), which increased the limit to 115,000 for fiscal years 1999 and 2000. In 2000, Congress passed the American Competitiveness in the Twenty-First Century Act (AC-21), which raised the limit to 195,000 for fiscal year 2001 and maintained that level through fiscal years 2002 and 2003. The number of H-1B visas reverted to 65,000 thereafter. Generally, an H-1B visa is valid for 3 years of employment and is renewable for an additional 3 years. Filing an application with Labor’s Employment and Training Administration is the employer’s first step in hiring an H-1B worker, and Labor is responsible for either certifying or denying the employer’s application within 7 days. By law, Labor may review applications only for omissions and obvious inaccuracies. Labor has no authority to verify the authenticity of the information. Employers must include on the application information such as their name, address, rate of pay and work location for the H-1B worker, and employer identification number. All employers are also required to make four attestations on the application as to: 1. Wages: The employer will pay nonimmigrants at least the local prevailing wage or the employer’s actual wage, whichever is higher; pay for nonproductive time caused by a decision made by the employer; and offer nonimmigrants benefits on the same basis as U.S. workers. 2. Working conditions: The employment of H-1B nonimmigrants will not adversely affect the working conditions of U.S. workers similarly employed. 3. Strike, lockout, or work stoppage: No strike or lockout exists in the occupational classification at the place of employment. 4. Notification: The employer has notified employees at the place of employment of the intent to employ H-1B workers. Certain employers are required to make three additional attestations on their application. These additional attestations apply to H-1B employers who: (1) are H-1B dependent, that is, generally those whose workforce comprises 15 percent or more H-1B nonimmigrant employees; or (2) are found by Labor to have either willfully failed to meet H-1B program requirements or misrepresented a material fact in an application during the previous 5 years. These employers are required to additionally attest that: (1) they did not displace a U.S. worker within the period of 90 days before and 90 days after filing a petition for an H-1B worker; (2) they took good-faith steps prior to filing the H-1B application to recruit U.S. workers and that they offered the job to a U.S.
applicant who was as qualified as, or more qualified than, the H-1B worker; and (3) prior to placing the H-1B worker with another employer, they inquired and had no knowledge as to that employer’s action or intent to displace a U.S. worker within the 90 days before and 90 days after the placement of the H-1B worker with that employer. After Labor certifies an application, the employer must submit a petition for each worker it wishes to hire to the Department of Homeland Security’s U.S. Citizenship and Immigration Services (USCIS). On March 1, 2003, Homeland Security took over all functions and authorities of Justice’s Immigration and Naturalization Service under the Homeland Security Act of 2002 and the Homeland Security Reorganization Plan of November 25, 2002. Employers submit to USCIS the application, petition, and supporting documentation along with the appropriate fees. Information on the petition must indicate the wages that will be paid to the H-1B worker, the location of the position, and the worker’s qualifications. Through a process known as adjudication, USCIS reviews the documents for certain criteria, such as whether the petition is accompanied by a certified application from Labor, whether the employer is eligible to apply for H-1B workers, and whether the prospective H-1B worker is qualified for the position. The Wage and Hour Division (WHD) of Labor’s Employment Standards Administration performs investigative and enforcement functions to determine whether an employer has complied with its attestations on the application. An aggrieved individual or entity or certain non-aggrieved parties may file a complaint with Labor that an employer violated a requirement of the H-1B program. To conduct an investigation, the WHD Administrator must have reasonable cause to believe that an employer did not comply with or misrepresented information on its application. Employers who violate any of the attestations on the application are subject to civil money penalties or administrative remedies, such as paying back wages to H-1B workers or debarment, which disqualifies an employer from participating in the H-1B program for a specified period. Employers, the person who filed the complaint, or other interested parties who disagree with the findings of the investigation then have 15 days to appeal by requesting an administrative hearing. The Office of Special Counsel for Immigration Related Unfair Employment Practices (OSC) of the Department of Justice also has some enforcement responsibility. Under statutory authority created by the Immigration Reform and Control Act of 1986, OSC pursues charges of citizenship discrimination brought by U.S. workers who allege that an employer preferred to hire an H-1B worker. Labor’s H-1B authority is limited in scope, but it does not use its full authority to oversee employers’ compliance with program requirements. Labor’s review of employers’ applications to hire H-1B workers overlooks some inaccuracies, such as applications containing invalid employer identification numbers. WHD investigates complaints made against H-1B employers and recently began random investigations of some employers who had previously violated program requirements. Labor uses education as the primary method of promoting employers’ compliance with the H-1B program. Labor reviews applications electronically by subjecting them to data checks, and its web site informs employers that it will certify or deny applications within minutes based on the information entered.
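To illustrate the nature of such automated data checks, the following is a minimal sketch; the field names, the prevailing-wage table, and the list of valid employer identification number prefixes are hypothetical illustrations, not Labor’s actual system logic.

    # Hypothetical sketch of automated screens for "obvious inaccuracies."
    # Field names, wage data, and valid prefixes are illustrative only.
    VALID_EIN_PREFIXES = {"01", "02", "03"}                          # hypothetical
    PREVAILING_WAGE = {("Software Engineer", "Austin, TX"): 65_000}  # hypothetical

    def screen_application(app: dict) -> list[str]:
        problems = []
        if app["ein"][:2] not in VALID_EIN_PREFIXES:
            problems.append("invalid employer identification number prefix")
        prevailing = PREVAILING_WAGE.get((app["occupation"], app["location"]))
        if prevailing is not None and app["wage_rate"] < prevailing:
            problems.append("wage rate below prevailing wage")
        return problems  # an empty list means no obvious inaccuracy was found

    # This example application would be flagged on both screens.
    print(screen_application({"ein": "99-1234567",
                              "occupation": "Software Engineer",
                              "location": "Austin, TX",
                              "wage_rate": 60_000}))

Checks of this kind can flag only what is visible in the submitted data; they cannot verify that the attested information is true, which is consistent with the attestation-based design of the program.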
We found that of the 960,563 applications that Labor electronically reviewed from January 2002 through September 2005, it certified 99.5 percent. Labor’s review of the application is limited by law to identifying omissions or obvious inaccuracies. Labor defines an obvious inaccuracy as occurring when an employer: files an application after being debarred, or disqualified, from participating in the H-1B program; submits an application more than 6 months before the beginning date of the period of employment; identifies multiple occupations on a single application; states a wage rate that is below the Fair Labor Standards Act minimum wage; identifies a wage rate that is below the prevailing wage on the application; or identifies a wage range where the bottom of the range is lower than the prevailing wage on the application. Despite these checks, Labor’s system does not consistently identify all obvious inaccuracies. For example, although the overall percentage was small, we found 3,229 applications that were certified even though the wage rate on the application was lower than the prevailing wage for that occupation in the specific location (see table 1). Additionally, Labor does not identify other errors that may be obvious. Specifically, Labor told us its system reviews an application’s employer identification number to ensure it has the correct number of digits and that the number does not appear on the list of employers who are ineligible to participate in the H-1B program. However, we found 993 certified applications with invalid employer identification number prefixes. Officials told us that in other programs, such as the permanent employment program, Labor matches the application’s employer identification number to a database with valid employer identification numbers. However, Labor does not formally perform this match for H-1B applications because it is an attestation process, not a verification process. Likewise, Labor officials told us they frequently review the application process to determine where improvements can be made, but they rely on a system of data checks rather than a formal quality assurance process because of the factual nature of the form and the number of applications received. Also, officials said that if they conducted a more in-depth review of the applications, they could exceed their legal authority and increase the processing time for applications. Additionally, they said the integrity of the H-1B program is ensured through enforcement and by the fact that there is actual review by staff when the employer submits the paperwork to USCIS. Labor enforces H-1B program requirements primarily by investigating complaints filed against employers by H-1B workers or others. Labor’s Wage and Hour Division received 1,026 complaints from fiscal year 2000 through fiscal year 2005. Labor officials said they investigate the employer’s compliance with all program requirements for all H-1B workers; therefore, an investigation may yield more than one violation. While the number of H-1B complaints and violations increased from fiscal year 2000 through fiscal year 2005, the overall numbers remain small and may have been affected by changes to the program. As shown in table 2, we found that the number of complaints increased from 117 in fiscal year 2000 to 173 in fiscal year 2005, and the number of cases with violations more than doubled, along with a corresponding increase in the number of employer penalties.
In fiscal year 2000, Labor required employers to pay back wages totaling $1.2 million to 206 H-1B workers; by fiscal year 2005, back wage penalties had increased to $5.2 million for 604 workers. The most common type of violation each fiscal year involved a failure to pay H-1B workers the required wage. Labor officials told us it is difficult to attribute changes in complaints and violations to any specific cause because of multiple legislative changes to the program, such as the temporary increase in the number of H-1B workers allowed to enter the country and the additional attestations for certain employers that expired and then were reinstated. Labor’s Wage and Hour Division has recently begun random investigations of employers who have willfully violated H-1B program requirements in the past. Under the INA, as amended, Labor has had the authority to conduct these investigations since 1998, but officials told us the agency had not done so until recently for several reasons. First, these employers frequently go out of business because they are not allowed to participate in the H-1B program for a period of time. Second, there are only a limited number of willful violators, just 50 nationwide in late fiscal year 2005. In addition, we were told that H-1B investigators have heavy caseloads. However, Labor officials said they now have 59 cases that they can investigate, and, in April 2006, Labor directed each of its regional offices to initiate a random investigation of at least one employer prior to the end of fiscal year 2006. Labor uses education as the primary method of promoting employer compliance with the H-1B program. From 2000 through 2005, Labor’s district offices conducted six presentations on H-1B compliance. Labor also holds compliance seminars in response to requests from employer associations and discusses program requirements with companies that do not have pending lawsuits related to the H-1B program. Additionally, Labor posts guidance and fact sheets on its web site. While some of its fact sheets have not been updated since the program was amended by the H-1B Visa Reform Act in 2004, officials said 26 new fact sheets will be posted on the agency’s web site by the end of fiscal year 2006. During investigations of employers, Labor explains the employer’s legal obligations and asks the employer about the changes it plans to make to comply with the law. When an investigation results in an employer’s debarment, Labor publicizes the case through press releases highlighting the consequences of not complying with H-1B program requirements. Labor is also working with the Department of State to provide information cards to H-1B workers when they are issued their visas. These cards inform workers about their employment rights, including those related to required wages and benefits, illegal deductions, working conditions, records, and discrimination. Homeland Security and Justice also use education to promote employer compliance with the H-1B program. Homeland Security publishes informational bulletins and uses its web site to advise the public of any changes to the program regarding filing fees or eligibility resulting from changes in the law. Justice engages in educational activities through public service announcements aimed at employers, workers, and the general public. The agency trains employers and works with other federal agencies to coordinate employer education programs. Justice also uses a telephone intervention hotline to resolve disputes between U.S.
workers and H-1B employers, answers questions submitted via e-mail, issues guidance, and provides information on its web site. Labor, Homeland Security, and Justice all have responsibilities under the H-1B program, but Labor and Homeland Security face challenges sharing information that could help identify possible program violations. In addition to Homeland Security, Labor also shares enforcement responsibilities with Justice, which pursues charges filed by U.S. workers who allege that they were not hired or were displaced because of an H-1B worker. Justice has found discriminatory conduct in relatively few cases. Homeland Security reviews Labor’s certified application as part of the adjudication process; however, it cannot easily verify whether employers have submitted petitions for more workers than originally requested on the application. USCIS’s data system does not match each petition to its corresponding application because the system does not include a field for the unique number Labor assigns each application. As a result, USCIS cannot easily verify how many times the employer has used a given application or which petitions were supported by which application, potentially allowing employers to use the application for more workers than they were certified to hire. USCIS told us that while it has attempted to add Labor’s application case number to its database, it has not been able to do so because of the system’s memory limitations, and it will be several years before a new information technology system is operational. During the process of reviewing employers’ petitions, USCIS may find evidence that the employer is not meeting the requirements of the H-1B program, but current law precludes Labor’s Wage and Hour Division from using this information to initiate an investigation of the employer. Some petitions to extend workers’ H-1B status have been submitted with W-2 forms where the wage on the W-2 was less than the wage the employer indicated it would pay on the original Labor application, according to USCIS staff. If the employer is unable to adequately explain the discrepancy, USCIS may deny the petition but does not have a formal mechanism for reporting these discrepancies to Labor. Moreover, even if USCIS did report these cases, current law precludes WHD from using the information to initiate an investigation. According to Labor officials, the department does not consider Homeland Security to be an aggrieved party; therefore, Labor would not initiate an investigation based on information received from, or a complaint filed by, Homeland Security. Justice pursues charges filed by U.S. workers who allege that an H-1B worker was hired in their place. Such charges may be resolved before an administrative law judge, through an out-of-court settlement, or by dismissal for lack of reasonable cause to believe that a violation occurred. From 2000 through 2005, no cases were heard by an administrative law judge. Most of the 101 investigations started by Justice from 2000 through 2005 were found to be incomplete, withdrawn, untimely, or dismissed, or were investigated without a finding of reasonable cause for a violation. Of the 97 investigations closed, Justice found discriminatory conduct in 6 cases and assessed $7,200 in penalties in 3 of the 6 cases, all in 2003.
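To make the matching gap described above concrete: if each petition record carried Labor’s unique application case number, the cross-check USCIS says it cannot easily perform would reduce to a simple join and count. The following is a minimal sketch; the record layouts and values are hypothetical.

    # Hypothetical sketch: join petitions to applications by Labor's case
    # number and flag employers whose petitions exceed certified positions.
    from collections import Counter

    applications = {"A-100": {"workers_certified": 2}}  # keyed by Labor case number
    petitions = [
        {"petition_id": "P-1", "labor_case_number": "A-100"},
        {"petition_id": "P-2", "labor_case_number": "A-100"},
        {"petition_id": "P-3", "labor_case_number": "A-100"},
    ]

    petitions_per_case = Counter(p["labor_case_number"] for p in petitions)
    for case_number, filed in petitions_per_case.items():
        certified = applications[case_number]["workers_certified"]
        if filed > certified:
            print(f"{case_number}: {filed} petitions filed against "
                  f"{certified} certified worker positions")

Without the case number on the petition record, as in USCIS’s current system, no such join is possible, and an employer’s overuse of a certified application can go undetected.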
We found that Labor, in coordination with Homeland Security, could provide better oversight of employers’ compliance with H-1B visa program requirements. Even though Labor’s authority to review applications is limited, it is certifying some applications that do not meet program requirements or that contain inaccurate information. Additionally, USCIS may find information in the materials submitted by an H-1B employer that indicates the employer is not complying with the program requirements. However, these employers may not face consequences because USCIS does not have a formal mechanism for reporting this information to Labor, and current law restricts Labor from using such evidence to initiate an investigation. USCIS also has an opportunity to improve its oversight by matching information from its petition database with Labor’s application case number to detect whether employers are requesting more H-1B workers than they were originally certified to hire. As Congress deliberates changes to U.S. immigration policy, it is essential to ensure that employers comply with program requirements designed to protect both domestic and H-1B workers. To increase employer compliance with the H-1B program and protect the rights of U.S. and H-1B workers, Congress should consider the following two actions: eliminate the restrictions on Labor using petition information submitted by employers to Homeland Security as the basis for initiating an investigation, and direct Homeland Security to provide Labor with information received during the adjudication process that may indicate whether an employer is fulfilling its H-1B responsibilities. Further, we recommend that Labor strengthen its oversight of employers’ applications to hire H-1B workers by improving its procedures for checking for completeness and obvious inaccuracies, including developing more stringent, cost-effective methods of checking for wage inaccuracies and invalid employer identification numbers. We also recommend that USCIS ensure employers’ compliance with the program requirements by including Labor’s application case number in its new information technology system, so that adjudicators are able to quickly and independently ensure that employers are not requesting more H-1B workers than were originally approved on their application to Labor. We provided a draft of our report to the Departments of Labor, Homeland Security, and Justice for their review and comments. Each agency provided technical comments, which we incorporated as appropriate. Justice did not have formal comments on our report. Homeland Security agreed with our recommendations and stated that USCIS intends to include Labor’s application case number in its new information technology system. Labor questioned whether our recommendation for more stringent application review measures is supported by the low error rate that we found, as well as whether the benefits of instituting such measures would equal or exceed the added costs of implementing them. In addition, Labor said that Congress intentionally limited the scope of Labor’s application review in order to place the focus for achieving program integrity on USCIS. We believe that Labor is at risk of certifying H-1B applications that contain more errors than were found in the scope of our review. For example, we checked only for employer identification numbers with invalid prefix codes and did not look for other combinations of invalid numbers or data. Therefore, we do not know the true magnitude of the error rate in the certification process.
We continue to believe there are cost-effective methods that Labor could use to check applications more stringently and thereby enhance the integrity of the H-1B process. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For information regarding this testimony, please contact Sigurd R. Nilsen, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215. Individuals making key contributions to this testimony include Alicia Puente Cackley, Gretta L. Goodwin, Amy J. Anderson, Pawnee A. Davis, Sheila McCoy, and Rachael C. Valliere. H-1B Visa Program: Labor Could Improve Its Oversight and Increase Information Sharing with Homeland Security. GAO-06-720. Washington, D.C.: June 22, 2006. Homeland Security: Better Management Practices Could Enhance DHS’s Ability to Allocate Investigative Resources. GAO-06-462T. Washington, D.C.: March 28, 2006. Immigration Benefits: Additional Controls and a Sanctions Strategy Could Enhance DHS’s Ability to Control Benefit Fraud. GAO-06-259. Washington, D.C.: March 10, 2006. Homeland Security: Visitor and Immigrant Status Program Operating, but Management Improvements Are Still Needed. GAO-06-318T. Washington, D.C.: January 25, 2006. Immigration Benefits: Improvements Needed to Address Backlogs and Ensure Quality of Adjudications. GAO-06-20. Washington, D.C.: November 21, 2005. Immigration Enforcement: Weaknesses Hinder Employment Verification and Worksite Enforcement Efforts. GAO-05-813. Washington, D.C.: August 31, 2005. Department of Homeland Security, U.S. Citizenship and Immigration Services: Allocation of Additional H-1B Visas Created by the H-1B Visa Reform Act of 2004. GAO-05-705R. Washington, D.C.: May 18, 2005. Homeland Security: Some Progress Made, but Many Challenges Remain on U.S. Visitor and Immigrant Status Indicator Technology Program. GAO-05-202. Washington, D.C.: February 23, 2005. Alien Registration: Usefulness of a Nonimmigrant Alien Annual Address Reporting Requirement Is Questionable. GAO-05-204. Washington, D.C.: January 28, 2005. Highlights of a GAO Forum: Workforce Challenges and Opportunities for the 21st Century: Changing Labor Force Dynamics and the Role of Government Policies. GAO-04-845SP. Washington, D.C.: June 1, 2004. H-1B Foreign Workers: Better Tracking Needed to Help Determine H-1B Program’s Effects on U.S. Workforce. GAO-03-883. Washington, D.C.: September 10, 2003. Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning. GAO-03-563. Washington, D.C.: June 9, 2003. High-Skill Training: Grants from H-1B Visa Fees Meet Specific Workforce Needs, but at Varying Skill Levels. GAO-02-881. Washington, D.C.: September 20, 2002. Immigration Benefits: Several Factors Impede Timeliness of Application Processing. GAO-01-488. Washington, D.C.: May 4, 2001. H-1B Foreign Workers: Better Controls Needed to Help Employers and Protect Workers. GAO/HEHS-00-157. Washington, D.C.: September 7, 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The H-1B visa program assists U.S. employers in temporarily filling certain occupations with highly skilled foreign workers. There is considerable interest regarding how Labor, along with Homeland Security and Justice, is enforcing the requirements of the program. This testimony summarizes our report, GAO-06-720, which describes how Labor carries out its H-1B program responsibilities and how Labor works with other agencies involved in the H-1B program. While Labor's H-1B authority is limited in scope, it does not use its full authority to oversee employers' compliance with program requirements. Labor's review of employers' applications to hire H-1B workers is timely, but it lacks quality assurance controls and may overlook some inaccuracies. From January 2002 through September 2005, Labor electronically reviewed more than 960,000 applications and certified almost all of them. Labor's review of the applications is limited by law to checking for missing information or obvious inaccuracies, which it does through automated data checks. However, in our analysis of Labor's data, we found more than 3,000 applications that were certified even though the wage rate on the application was lower than the prevailing wage for that occupation. We also found approximately 1,000 certified applications that contained erroneous employer identification numbers, which raises questions about the validity of the applications. In its enforcement efforts, Labor's Wage and Hour Division (WHD) investigates complaints made against H-1B employers. From fiscal year 2000 through fiscal year 2005, Labor reported an increase in the number of H-1B complaints and violations, and a corresponding increase in the number of employer penalties. In fiscal year 2000, Labor required employers to pay back wages totaling $1.2 million to 226 H-1B workers; by fiscal year 2005, back wage penalties had increased to $5.2 million for 604 workers. Program changes, such as a higher visa cap in some years, could have been a contributing factor. In April 2006, WHD began randomly investigating willful violators of the program's requirements. Labor uses education as its primary method of promoting compliance with the H-1B program by conducting compliance assistance programs and posting guidance on its web site. Labor, Homeland Security, and Justice all have responsibilities under the H-1B program, but Labor and Homeland Security face challenges sharing information. After Labor certifies an application, USCIS reviews it but cannot easily verify whether employers submitted petitions for more workers than originally requested on the application because USCIS's database cannot match each petition to Labor's application case number. Also, during the process of reviewing petitions, staff may find evidence that employers are not meeting their H-1B obligations. For example, Homeland Security may find that a worker's income on the W-2 is less than the wage quoted on the original application. USCIS may deny the petition if an employer is unable to explain the discrepancy, but it does not have a formal process for reporting the discrepancy to Labor. Moreover, current law precludes WHD from using this information to initiate an investigation of the employer. Labor also shares enforcement responsibilities with Justice, which pursues charges filed by U.S. workers who allege they were displaced by an H-1B worker. From 2000 through 2005, Justice found discriminatory conduct in 6 out of the 97 investigations closed and assessed a total of $7,200 in penalties.
Although recent events may have moved airport congestion off center stage as a major national issue, delays remain a pervasive problem, in part because of the interdependence of the nation’s airports. The effect of delays can quickly spread beyond those airports where delays tend to occur most often, such as New York La Guardia, Chicago O’Hare, Newark International, and Atlanta Hartsfield. Delays at these airports can quickly create a “ripple” effect of delays that affects many airports across the country. For example, flights scheduled to take off from these airports may be held at the departure airport because of weather or limited airspace. Similarly, an aircraft late in leaving the airport where delays are occurring may be late in arriving at its destination, thus delaying the departure time for the aircraft’s next flight. Delays have many causes, but weather is the most prevalent. Figures compiled by the Federal Aviation Administration (FAA) indicate that weather causes about 70 percent of the delays each year. Apart from weather, the next main cause is lack of capacity, that is, the inability of the national airspace system to handle the amount of traffic seeking to use it. Capacity can be measured in a variety of ways. For example, at individual airports, one measure is the maximum number of takeoffs and landings that can be conducted in a given period, such as 15 minutes or 1 hour. In our 2001 report, we noted that FAA had established such a capacity benchmark at each of the nation’s 31 busiest airports. FAA’s data on capacity and demand at these airports showed that even in optimum weather conditions, 16 airports had at least three 15-minute periods each day when demand exceeded capacity. Weather and capacity problems are often linked, because bad weather can further erode capacity. For example, some airports have parallel runways that are too close together for simultaneous operations in bad weather. When weather worsens, only one of the two runways can be used at any given time, thereby reducing the number of aircraft that can take off and land. FAA’s data in 2001 showed that in bad weather, 22 of the 31 airports had at least three 15-minute periods when demand exceeded capacity. Another measure of capacity, apart from the capacity of individual airports, is the number of aircraft that can be in a given sector of the airspace. For safe operations, aircraft must maintain certain distances from each other and remain within authorized airspace. If too many aircraft are trying to use the same airspace, some must wait, either on the ground or en route. Addressing flight delay problems also requires action by multiple aviation stakeholders because no single entity has the authority or ability to solve delay-related problems. The federal government, especially through FAA and its parent agency, the Department of Transportation (DOT), plays a major role by operating the national airspace system, distributing federal funding for airports, and setting operating standards for all aircraft and airports. Airports and airlines are also important decision makers and funding sources. The nation’s airports are primarily owned and operated by local units of government, so decisions about such steps as expanding airport capacity are primarily local in nature. Airlines’ business decisions have a strong effect on the volume and routing of flights, the type and size of aircraft used, and the degree to which aircraft are upgraded to take advantage of new technology.
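The capacity benchmark comparison described above reduces to a simple count of the 15-minute periods in which scheduled demand exceeds an airport’s benchmark rate. The following is a minimal sketch with hypothetical per-period figures, not FAA’s actual benchmark data.

    # Hypothetical sketch of the benchmark comparison: count 15-minute
    # periods in which scheduled operations exceed the airport's benchmark.
    optimum_benchmark = 22      # max operations per 15 minutes, good weather
    bad_weather_benchmark = 14  # e.g., one of two close parallel runways usable

    scheduled_ops = [18, 23, 25, 21, 24, 16, 15, 23]  # operations per period

    def periods_over(benchmark: int, demand: list[int]) -> int:
        return sum(1 for ops in demand if ops > benchmark)

    print(periods_over(optimum_benchmark, scheduled_ops))      # 4 periods
    print(periods_over(bad_weather_benchmark, scheduled_ops))  # 8 periods

Because bad weather lowers the benchmark rather than the demand, the same schedule produces more over-capacity periods in bad weather, which is the pattern FAA’s 2001 data showed.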
Several initiatives to reduce flight delays and enhance capacity are ongoing. These initiatives, which FAA, the airlines, and the airports are implementing, are incorporated into FAA’s major capacity-enhancing effort: the Operational Evolution Plan (OEP). The OEP is a rolling 10-year plan to increase the capacity and efficiency of the national airspace system and focuses on airport surface infrastructure and on technological and procedural initiatives at 35 of the busiest airports in the United States. FAA acknowledges, however, that the OEP is not intended as the ultimate solution to congestion and delay problems. Responsibility for the various initiatives is still shared among the various segments of the aviation community. In February 2005, FAA published version 7 of the OEP and organized it into the following four quadrants: Airport Congestion. The Airport Congestion quadrant focuses on capacity enhancements for the airport surface. One of the most effective ways to increase capacity is to build runways; however, it takes an average of 10 years from the time planning begins for a runway until it is commissioned. To help expedite the process for building runways, Congress and FAA streamlined the environmental review phase of the runway process. In addition, according to FAA, over the last six years, seven new runways were opened at the Phoenix, Detroit, Denver, Miami, Cleveland, Houston, and Orlando airports, which provided those airports with the potential to accommodate about one million more annual operations (takeoffs and landings). Seven more runways and one runway extension are included in the OEP and are scheduled to open by the end of 2008. These runways are expected to provide those airports with the potential to accommodate 889,000 more annual operations in the system, as shown in figure 2. In addition to the runways listed in the OEP, nine more projects are in the planning or environmental stages, including one new runway, three airfield reconfigurations, one runway extension, and three new airports in major metropolitan areas. FAA also has additional flight reduction activities that are not included in the OEP. To reduce flight delays at some of the delay-prone airports, such as New York La Guardia and Chicago O’Hare, FAA is exploring administrative and market-based options. For example, FAA is considering auctioning off landing and takeoff rights at New York La Guardia and is currently limiting the number of scheduled arrivals during peak periods at New York La Guardia and Chicago O’Hare. Air Traffic Management Flow Efficiency. This quadrant focuses on new technology and procedures to optimize the flow of traffic and maximize system throughput, which may allow better control and utilization of current airspace. Included is the Collaborative Convective Forecast Product, a graphical forecast of potential convective activity areas (i.e., thunderstorms) for use in the strategic planning and management of air traffic. It is intended to support advance planning for long-haul flights and allows for schedule predictability based on 2-, 4-, and 6-hour forecasts. This tool is most useful during the severe weather avoidance procedures season, which runs from March to October. Another program is Collaborative Decision Making, a joint government/industry initiative that focuses on electronic data exchange; optimized airspace utilization; shared planning and decision-making; and post-analysis reporting.
In addition, the Traffic Management Advisor, an automated decision support tool in operation at eight air route traffic control centers, is intended to provide controllers and traffic management coordinators with more information on airport arrival demand and available capacity for making decisions on aircraft spacing. En Route Congestion. Although the flying public experiences delays at the airports, delays often arise in the en route areas as the airways become congested. The tools in this quadrant reduce delays and contribute to time and fuel savings for the vast majority of airspace users. One change currently in use, reduced lateral (side-to-side) separation, may provide space for additional routes between current city pairs or allow for new direct routes. Reduced longitudinal (nose-to-tail) separation may provide more opportunities to add flights without incurring delays. For domestic flights, the Domestic Reduced Vertical Separation Minimum was implemented in fiscal year 2005 in the contiguous United States and Alaska and adds six additional flight levels between existing flight levels. The User Request Evaluation Tool, which was installed at 17 air route traffic control centers and is operational at 13 of them, allows controllers to predict aircraft-to-aircraft and aircraft-to-airspace conflicts so that they can construct alternative flight paths. Airspace redesign projects also provide significant capacity improvements. For example, new routes added as part of the High Altitude Redesign increased en route throughput from the Pacific Northwest into the San Francisco Bay and Los Angeles Basin areas. Terminal Area Congestion. Terminal airspace is a critical component in the efficient use of airport capacity. In instances where volume has increased and the current airspace structure is the limiting factor, redesigning arrival and departure procedures, including the addition of Area Navigation and Required Navigation Performance procedures, will allow more efficient use of constrained terminal airspace. Also, applying existing technology with new procedures may provide instrument approaches to nearly all runways longer than 5,000 feet, under a wider range of meteorological conditions, and in ways that are insensitive to airport surface traffic. Area navigation procedures provide flight path guidance from the runway to the en route airspace with minimal instructions given by air traffic controllers. As a result, routine controller/pilot communications are reduced, which frees time to handle other safety-critical flight activities. Other key benefits include more efficient use of airspace, with improved flight profiles, resulting in significant fuel efficiencies for the airlines. An additional solution for increasing capacity in this arena is Time Based Metering, which is used in conjunction with the Traffic Management Advisor and became operational at seven air route traffic control centers. By optimizing the flow of aircraft from the en route to the terminal area, Time Based Metering with the Traffic Management Advisor may help an airport use the full capacity of its runways efficiently, which increases acceptance rates as well as peak throughput.
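Returning briefly to the en route changes above: the six additional flight levels follow from simple arithmetic, sketched below. The sketch assumes the standard Reduced Vertical Separation Minimum altitude band of flight level (FL) 290 to FL410, where cutting the required vertical spacing from 2,000 feet to 1,000 feet makes the intermediate levels usable.

# Flight levels are altitudes in hundreds of feet. With 2,000-ft vertical
# separation, usable levels between FL290 and FL410 step by 20; with
# 1,000-ft separation, they step by 10.
old_levels = set(range(290, 411, 20))   # FL290, 310, ..., 410
new_levels = set(range(290, 411, 10))   # FL290, 300, ..., 410
added = sorted(new_levels - old_levels)
print(added)        # [300, 320, 340, 360, 380, 400]
print(len(added))   # 6 additional flight levels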
An air traffic management tool called the Integrated Terminal Weather System, which provides full-color graphic displays of essential weather information to promote the safety, capacity, and efficiency of air traffic control operations, was also implemented at the Boston Logan, Denver International, and Minneapolis-St. Paul airports in 2004. According to FAA, the plan is to install the production version of the Integrated Terminal Weather System at the New York terminal radar control facility in 2006. A number of challenges in reducing flight delays and enhancing capacity remain. A daunting challenge that FAA and other aviation stakeholders will have to address is funding the various initiatives that are designed to address flight delays and enhance capacity. The successful implementation of many of these initiatives is predicated on the availability of funding. However, since 2000, which was to date the worst year on record for delays, the financial condition of the aviation industry has changed significantly. A number of structural changes within the airline industry, such as the growth of the Internet as a means to sell and distribute tickets, the growth of the low-cost airlines, and fare reductions by legacy carriers, have transformed the industry and led to lower average fares. These lower fares have resulted in lower ticket taxes and less revenue for the Airport and Airway Trust Fund. In addition, a series of largely unforeseen events, including the September 11 terrorist attacks, the war in Iraq and associated security concerns, SARS, global recessions, and a steep decline in business travel, seriously reduced the demand for air travel and resulted in sharp decreases in airline industry revenue. Consequently, FAA expects that over the next four years there may be a multibillion dollar gap between its costs and revenues. According to one aviation expert, this gap could have consequences that would increase air traffic delays. For example, FAA’s Facilities and Equipment account, which provides funding for modernizing the air traffic control system and improving its reliability, capacity, and efficiency, was reduced by 15 percent in fiscal year 2005, and the President’s budget proposes a further 20 percent reduction in fiscal year 2006. These are the funds that are key to the national airspace system’s future ability to handle demand and to minimize delays. For example, to provide the $4.4 billion needed for its major system acquisitions while remaining within its budget targets through fiscal year 2009, FAA has made significant cuts elsewhere in its capital funding plans. Specifically, FAA eliminated all of the $1.4 billion that it had set aside for what it calls the “architecture segment.” These funds would have been used to perform about two years’ worth of early research on new programs before they are mature enough to receive formal Joint Resources Council approval. FAA also made significant reductions in planned investments for facilities—an action that runs counter to its reported need to refurbish or replace its physical infrastructure. Thus, even if all OEP initiatives are implemented, the national airspace system is expected to fall behind demand, resulting in an increase in congestion and delays over the 10-year period of the OEP. FAA’s Management Advisory Council estimates that passengers would experience 63 percent more total delay hours in 2012 than they did in 2000.
In contrast, FAA states that if all of the OEP initiatives are implemented, delays will be maintained at or below their 2000 levels. However, FAA also stated that capacity at some airports will not keep pace with demand, and in those cases delays will get worse over time because not all airports have improvements planned. In 2004, airline industry losses totaled $9 billion, and the industry is expecting similar losses in 2005, which will make it difficult for airlines to equip their aircraft with some of the new air traffic control technology, according to Air Transport Association officials. Another important challenge is reducing flight delays and enhancing capacity at delay-prone airports, such as those shown in table 1, some of which have little room to physically expand and would find it difficult to build even one more runway, either because they lack the space or because they would face intense opposition from adjacent communities. Although eight runways were opened during the last six years and seven new runways are scheduled to be opened by the end of 2008, only three (Atlanta Hartsfield, Philadelphia International, and Houston International) of the nine airports that experienced the highest rate of delays in 2004 will receive new runways. Because these delay-prone airports can cause delays that ripple throughout the system, other airports that have increased their own capacity could still experience delays. For example, in 2000, Phoenix Sky Harbor International put an additional runway into service, and the airport had sufficient capacity to allow flights to take off on time. However, the airport ranked among the top 15 in the United States for flight delays. According to airport officials, most of the delays in Phoenix were the result of delays and cancellations at other airports—circumstances unrelated to the capacity at Phoenix. FAA also projects that the three New York-area airports—La Guardia, Newark, and Kennedy—will experience relatively small capacity gains during this decade—just 7 percent for Newark and 1 percent each for the other two airports. In addition to addressing the capacity needs of the most delay-prone airports, FAA, airlines, and airports will also have to address the emerging capacity needs of metropolitan areas in the South and Southwest. Among the metropolitan areas FAA believes will need additional capacity by 2013 are Tucson, AZ; Austin-San Antonio, TX; and South Florida. Other options, not included in the OEP, exist as potential measures to address capacity needs, as shown in table 2. These options, which have been cited by various researchers and policy organizations over the last decade, fall into two basic categories. The first category involves measures for adding airport infrastructure other than adding runways to existing airports, such as building new airports or using nearby underdeveloped regional airports. The second category involves developing alternative modes of intercity travel other than air transportation, such as high-speed rail. The applicability of any particular option is likely to vary by location, given the circumstances at each major airport. There is no “one-size-fits-all” solution; rather, substantially reducing delays will probably require a combination of options spread out over time. For example, the airspace surrounding the greater New York metropolitan area is perhaps the most congested in the nation.
The three major airports in the area (La Guardia, Newark, and Kennedy), which currently are among the nation’s most delay-prone airports, are expected to continue to experience substantial air traffic growth. But these airports have very limited expansion potential, largely because they cannot realistically build new runways. Building new airports or developing regional airports to relieve these airports are long-term solutions that will likely take many years to materialize. In the meantime, other short-term options would need to be considered as passenger demand increases, such as ways to use existing facilities more efficiently. This is the direction in which FAA and the Port Authority of New York and New Jersey, which operates the three area airports, were moving before the drop in passenger demand following the events of September 11. As demand and delays are once again increasing, FAA and the Port Authority are reevaluating a regional approach to addressing these issues. As noted earlier, FAA and the Port Authority are also considering market-based and administrative approaches, such as auctioning off landing and takeoff rights and congestion pricing for La Guardia. However, the airlines oppose auctions because of the uncertainty regarding the number of slots and gates that they might receive. The airlines also, to a lesser degree, oppose market-based mechanisms such as congestion pricing because of concerns over who would have responsibility for the revenue generated. Because major airports in other locations may face different circumstances than the New York airports face, they may need an entirely different set of solutions to address flight delays. Options such as building new airports, developing regional airports, or using ground transportation alternatives are likely to pose a more daunting challenge than implementing initiatives in the OEP. Implementing the OEP’s initiatives will not be easy, but the opportunity for success is enhanced because FAA has the support of major aviation stakeholders on nearly all of the initiatives. By contrast, gaining consensus on any of these other options could be much more difficult because they change the nature of the system to the degree that each one could adversely affect the interests of one or more key aviation stakeholder groups, including passengers, air carriers and aircraft operators, airports, and local communities. For example, large infrastructure projects, such as building new airports in metropolitan areas, could create major controversy. Such projects are often opposed by adjacent communities that are fearful of noise, displacement, or other environmental effects. Also, finding suitable sites for such projects in crowded metropolitan areas—with enough land that is compatible with other potential land uses—may be difficult. Airlines may oppose some types of infrastructure projects if they fear that the projects would adversely affect them. For example, an airline with a dominant market position at a major hub airport may oppose building an additional airport nearby because it may view the new airport as an opportunity for its competitors to enter the market in that area. In addition, some airlines are concerned about the need to divide their hub resources between the current airport and a new airport. Administrative, regulatory, and other measures for managing the demand for existing capacity could generate opposition from various sources as well.
Airlines may oppose such measures if they perceive that the measures would restrict their choices in determining rates, schedules, and aircraft sizes—all of which could affect their profits and competitive status relative to other airlines. Smaller communities may also oppose such measures, fearing that commercial air service to and from their airports may be reduced or curtailed because airlines would react by choosing more profitable routes for the limited number of airport slots available. Cost, a factor to be weighed in adding runways to existing airports, is also an important consideration in building a new airport. For example, the last major new airport—Denver International Airport, completed in 1995—cost almost $5 billion to build. This cost would have been greater had the airport been located closer to the city, but because it was located on open land away from established communities, the costs of noise mitigation and other land-use issues were minimized. Also, the construction of fast-rail service in populated metropolitan corridors is likely to be costly. For example, Amtrak estimates that the cost to construct fast-rail service in federally designated high-speed corridors and the Northeast Corridor of the United States will be about $50 billion to $70 billion. In summary, the initiatives implemented by FAA, airlines, and the airports might help to reduce flight delays and increase capacity in the national airspace system in the short term. However, FAA and other aviation stakeholders continue to face a number of challenges in reducing delays at the most delay-prone airports and developing long-term solutions for enhancing capacity. Addressing these challenges is perhaps more difficult today than in 2000 because a number of issues have exacerbated the situation. Chief among them is funding these initiatives at a time when the federal government and the aviation industry are experiencing significant fiscal problems. Consequently, keeping up with the economy’s increasing demand for air transportation services will require a tremendous amount of planning; making some tough choices about which initiatives, both short-term and long-term, to pursue; and efforts to ensure that such initiatives are adequately funded. For further information on this testimony, please contact Dr. Gerald Dillingham or Tammy Conquest at (202) 512-2834. Individuals making key contributions to this testimony include Colin Fallon, Simon Galed, David Hooper, Maureen Luna-Long, Richard Scott, Laura Shumway, and Nicolas Zitelli.
Since the unprecedented flight delays in 2000, a year in which one in four flights was delayed, our aviation system has been adversely affected by many unanticipated events—such as the September 11 terrorist attacks and Severe Acute Respiratory Syndrome (SARS)—that significantly reduced the demand for air travel. However, demand for air travel is rebounding. For example, the number of passengers traveling by air increased from 642 million in 2003 to 688 million in 2004. Flight delays have been among the most vexing problems in the national transportation system; the Department of Transportation defines a delay as an instance when an aircraft arrives at the gate 15 minutes or more after its scheduled arrival time. In 2004, one in five flights was delayed, with delays occurring primarily at New York La Guardia and Chicago O'Hare. Delays at these airports have consequences for the rest of the system. GAO's testimony addresses the following questions that pertain to flight delays and enhancing capacity: (1) What initiatives are ongoing by the federal government, airlines, and airports to address flight delays and enhance capacity? (2) What are some of the challenges in reducing flight delays and enhancing capacity? (3) What other options are available for reducing flight delays and enhancing capacity? Several initiatives to address flight delays and enhance capacity are ongoing. Many of these initiatives are reflected in FAA's February 2005 Operational Evolution Plan, a 10-year plan to increase the capacity and efficiency of the national airspace system at 35 of the busiest airports in the United States. New runways opened in the last 6 years at Phoenix, Detroit, and 5 other airports. Seven more runways are scheduled to open by the end of 2008. Congress and FAA also streamlined the process for building runways. In addition to building runways, several other initiatives were implemented. For example, in January 2005, FAA implemented the Domestic Reduced Vertical Separation Minimum, which is designed to increase high-altitude routes in the contiguous United States and Alaska. To reduce flight delays at some of the delay-prone airports, FAA is limiting the number of takeoffs and landings during peak periods at New York La Guardia and Chicago O'Hare and is considering auctioning off landing and takeoff rights at New York La Guardia. A number of challenges in reducing flight delays and enhancing capacity remain. Chief among them is obtaining funding for the initiatives mentioned above; their successful implementation is predicated on the availability of funding from several sources, including FAA, airlines, and airports. Another challenge is reducing flight delays and enhancing capacity at delay-prone airports, such as New York La Guardia, which have little capacity to expand and would find it difficult to build even one more runway. Other options to address delay problems include adding new capacity by building new airports. According to FAA, airport authorities in Chicago, Las Vegas, and San Diego are evaluating the need for new airports. Another option is to develop other modes of intercity travel, such as high-speed rail, where metropolitan areas are relatively close together. These options may conflict with the interests of one or more key stakeholder groups and, in many cases, would be costly.
The 2010 Nuclear Security Summit highlighted the global threat posed by nuclear terrorism and the need for countries to work in a comprehensive and concerted fashion to ensure that nuclear materials are not stolen or diverted for weapons use. The Summit produced a communiqué, a high-level political statement by the leaders of the 47 participating countries. The communiqué identified several measures that countries planned to take to strengthen their nonproliferation efforts. These efforts included, among other things, (1) focusing on improving security; (2) accounting for and consolidating highly enriched uranium (HEU) and plutonium; and (3) ensuring that the International Atomic Energy Agency (IAEA) has the necessary resources to carry out its nuclear security activities. The 2010 Summit produced results. For example, Ukraine announced at the Summit that it would ship approximately 236 pounds of HEU and 123 pounds of spent nuclear fuel to Russia by the end of 2012. During the Summit, the United States, Canada, and Mexico announced a new agreement that calls for the conversion of HEU fuel at Mexico’s nuclear research reactor to low enriched uranium. Malaysia, Egypt, and Armenia planned to enact new export control laws to limit nuclear trafficking. Malaysia, an important hub in the A.Q. Khan illicit nuclear trafficking network, approved a new export law curbing transfers of nuclear weapons-related materials. Many other nations expressed their support for funding international nuclear security organizations. For example, Belgium, Japan, the United Kingdom, Norway, and New Zealand all pledged funding for IAEA’s Nuclear Security Fund. In December 2010, we reported on aspects of U.S. planning and strategies to secure all vulnerable nuclear materials worldwide within a 4-year period. Following President Obama’s announcement of the 4-year initiative, the National Security Council (NSC) took the lead in coordinating efforts among the different federal agencies that will contribute to the initiative. NSC officials approved a U.S. governmentwide strategy entitled “Interagency Efforts to Improve the Security of Nuclear Weapons and Fissile Materials,” which, among other things, described the scope and objectives of the interagency effort and identified the main activities by agencies and programs in support of the President’s initiative. U.S. agencies—including the National Nuclear Security Administration (NNSA), the Department of Defense (DOD), and the Department of State—had developed individual plans describing how they intend to contribute to the 4-year initiative. NNSA, for example, had developed a formal written plan with specific details regarding how it intends to contribute to the 4-year nuclear material security goal. The NNSA plan details a prioritized five-part effort: (1) continuing nuclear security cooperation, especially nuclear material protection, control and accounting (MPC&A) upgrades and efforts to transition responsibility for sustaining MPC&A systems; (2) expanding nuclear security cooperation with other countries; (3) accelerating nuclear material removal from other countries; (4) strengthening nuclear security standards, practices, and next-generation nuclear safeguards; and (5) building international capabilities to prevent illicit nuclear trafficking and smuggling.
Despite individual agency efforts to implement the 4-year initiative, we found that the overarching interagency strategy coordinated by NSC lacked specific details concerning how the initiative would be implemented, including the identity of, and details regarding, vulnerable foreign nuclear material sites and facilities to be addressed; the agencies and programs responsible for addressing each site; planned activities at each site; potential challenges and strategies for overcoming them; anticipated timelines; and cost estimates. NSC officials told us that developing a single, integrated cross-agency plan incorporating all these elements could take years. However, we found that, absent such an implementation plan, essential details associated with the 4-year initiative were unclear, including the initiative’s overall estimated costs, time frames, and scope of work. For instance, we reported that the costs of implementing the initiative were unknown. Among other things, NSC officials told us that estimating the costs associated with the President’s goal is impossible because the initiative is predicated on other countries providing assistance and sharing costs, and cooperation that may occur with other countries, including the resumption of denuclearization efforts in North Korea, cannot be forecast. We also found that the time frames for the initiative are uncertain because NSC officials did not consider the 4-year time frame to be a hard and fast deadline. According to NSC officials, the value of the President’s proposal lies less in achieving a specific level of nuclear material security around the world within the 4-year time frame than in its broader effects. They described the proposal as a “forcing function” to (1) accelerate ongoing U.S. nuclear nonproliferation programs, (2) drive closer integration of nuclear nonproliferation programs across the federal government, and (3) mobilize greater international responsibility for and commitment to nuclear material security. Furthermore, we reported that other details relating to the overall scope of the 4-year initiative were vague. For example, we were unable to identify the scope of nuclear material worldwide that would be addressed under the initiative, because such details were not included in the interagency strategy document. We also identified concerns with how the initiative intends to address sites with potentially vulnerable nuclear materials located in countries that may impose access limitations that could complicate or preclude U.S. security assistance. We recommended that NSC lead and coordinate the development of a comprehensive plan for implementing the initiative. Such a plan, in our view, should clearly identify the specific foreign countries, sites, and facilities where materials have been determined to be poorly secured and should include information specifying the agencies and programs responsible for addressing each location; planned activities, potential implementation challenges, and steps needed to overcome those challenges at each location; and estimated time frames and costs associated with achieving the 4-year goal. NSC did not comment on our recommendation. Improving the U.S. government’s management of nuclear cooperation agreements could also contribute to the administration’s goal of securing all vulnerable nuclear material worldwide in 4 years.
The United States has 27 nuclear cooperation agreements in force for peaceful civilian cooperation with partners, including foreign countries, the European Atomic Energy Community (EURATOM), IAEA, and Taiwan. (Governmental relations between the United States and Taiwan were terminated on January 1, 1979, and all agreements concluded with the authorities on Taiwan before that date are administered for the United States by the American Institute in Taiwan, a nonprofit corporation based in Washington, D.C. The United States also has two nuclear cooperation agreements with Australia, including one for Separation of Uranium Isotopes by Laser Excitation technology, bringing the total number of agreements to 27.) The U.S. government attempted to account for U.S. HEU overseas in response to a 1992 congressional mandate. The report that the Nuclear Regulatory Commission (NRC) produced in response to the mandate stated that it was not possible to reconcile this information from available U.S. sources of data with all foreign holders of U.S. HEU within the 90-day period specified in the act. Our analysis of other documentation associated with the report shows that NRC, in consultation with U.S. agencies, was able to verify the location of 1,160 kilograms out of 17,500 kilograms of U.S. HEU remaining overseas as of January 1993. According to Department of Energy (DOE) and NRC officials, no further update to the 1993 report was issued, and the U.S. government has not subsequently attempted to develop such a comprehensive estimate of the location and status of U.S. HEU overseas. Nuclear cooperation agreements do not contain specific access rights that enable U.S. agencies to monitor and evaluate the physical security of U.S. nuclear material overseas, and the United States relies on its partners to maintain adequate security. In the absence of access rights, DOE, NRC, and State have conducted physical protection visits, when permitted, to monitor and evaluate the physical security conditions of U.S. nuclear materials at overseas facilities. However, we found that the agencies have not systematically visited countries believed to be holding the most sensitive material or systematically revisited facilities not meeting international physical security standards in a timely manner. U.S. interagency teams made 55 visits from 1994 through 2010 and found that countries met IAEA security guidelines approximately half of the time. Several countries that have U.S. nuclear material are particularly problematic and represent special cases for concern. Specifically, U.S. nuclear material has remained at sites in three countries where physical protection measures are unknown or where the sites have not been visited by an interagency physical protection team in decades. DOE’s Global Threat Reduction Initiative (GTRI) recently removed a large quantity of U.S.-origin spent HEU from one of those countries. However, according to NRC and State officials, U.S. transfers to these three countries were made prior to 1978, when a requirement that partner countries guarantee that they will maintain adequate physical security for transferred nuclear material was added to the U.S. Atomic Energy Act of 1954. Therefore, these countries have not made the same commitments regarding the physical security of U.S.-transferred material as the United States’ other nuclear cooperation agreement partners. We also found that physical security concerns are not confined to countries that have limited infrastructure and resources. The potential vulnerability of nuclear material at certain facilities in high-income countries was raised to us by NSC officials.
Specifically, we reported that there may be security vulnerabilities at certain facilities in high-income countries, including three specific high-income countries. For sites in these countries, GTRI officials told us, the U.S. government’s strategy is to work bilaterally with the countries, provide recommendations to improve physical protection, and follow up as needed. In our September 2011 report, we found that DOE has taken steps to improve security at a number of overseas facilities that hold U.S. nuclear material but faces constraints. DOE’s GTRI program removes U.S. material from vulnerable facilities but can repatriate only materials that have an approved disposition pathway and meet the program’s eligibility criteria. GTRI officials told us that of the approximately 17,500 kilograms of HEU exported from the United States, 12,400 kilograms are currently not eligible for return to the United States. The vast majority of this amount—about 10,000 kilograms—is currently not eligible for return because the material does not have an acceptable disposition pathway, such as permanent disposal or potential reuse. Another 2,000 kilograms of material is located primarily in EURATOM member countries and is in use or adequately protected, according to GTRI officials. As a result, we made several suggestions and recommendations to improve oversight and accountability. For example, we suggested that Congress consider directing DOE and NRC to compile an inventory of U.S. weapon-usable nuclear materials overseas. As a separate matter, we also suggested that Congress consider amending the Atomic Energy Act if State, working with other U.S. agencies, does not include enhanced measures regarding physical protection access rights in future and renewed agreements, so that U.S. interagency physical protection teams may obtain access when necessary to verify that U.S. nuclear materials have adequate physical protection. We also recommended that the Secretary of State, working with the Secretary of Energy and the Chairman of NRC, establish better inventory reporting and reconciliation procedures, particularly for foreign facilities holding U.S. weapon-usable material. In commenting on our draft report, DOE, NRC, and State generally disagreed with our recommendations, including the need to reconcile inventories with partner countries, stating that such reconciliations were unnecessary. State believes that implementing the recommendations would generally harm U.S. commercial competitiveness in overseas markets, diminish U.S. influence in advancing nonproliferation objectives, and cost jobs at home. According to a January 24, 2012, letter to us, however, DOE now agrees in principle with several recommendations we directed to that agency. For example, we recommended, among other things, that DOE, working with its interagency partners, develop formal goals and a systematic process to determine which foreign facilities to visit for future interagency physical protection visits. DOE informed us in the January 2012 letter that it is working with NRC, State, and other agencies to develop a new methodology and improve their efforts to set priorities for U.S. interagency physical protection visits.
To that end, DOE has established regular interagency conference calls to coordinate upcoming visits and directed a national laboratory to establish a repository of information regarding past physical protection visits to assist in determining which sites to visit in the future and in what time frame to do so. Reducing the risks posed by vulnerable nuclear material worldwide requires a layered approach to protecting such material. As a first layer of defense, the United States has helped countries secure nuclear materials in place at civilian and defense facilities. As a second line of defense, the United States has also helped countries improve their border security to address the threat posed by nuclear smuggling. According to IAEA, there were 2,164 confirmed cases of illicit trafficking in nuclear and radiological materials worldwide from 1993 through 2011. In December 2011, we reported on issues relating to the coordination of U.S. programs involved in combating nuclear smuggling overseas. We reviewed 21 federal programs and offices under five federal agencies: NNSA, DOD, State, the Department of Homeland Security (DHS), and the Department of Justice. These programs (1) conduct research and development on radiation detection technologies, (2) deploy radiation detection equipment along foreign borders and points of transit, (3) train and equip foreign customs and border security officials to identify and interdict illicit nuclear materials or technology transfers, (4) assist foreign governments in the development of export control systems, (5) enhance and coordinate with foreign antismuggling law enforcement and prosecutorial capabilities, and (6) analyze potential foreign nuclear smuggling cases and incidents. However, we found impediments to the coordination of U.S. efforts to combat nuclear smuggling overseas. Specifically, we found that none of the existing strategies and plans for coordinating federal efforts to prevent and detect nuclear smuggling and illicit nuclear transfers overseas incorporates all of the desirable characteristics of national strategies, such as identifying the financial resources needed and the monitoring mechanisms to be used to determine progress and make improvements. For example, the 2010 Global Nuclear Detection Architecture Strategic Plan—developed jointly by DHS, DOD, Energy, State, Justice, the intelligence community, and NRC—did not identify the financial resources needed to achieve the strategic plan’s objectives or the monitoring mechanisms that could be used to determine programmatic progress and needed improvements. We also identified potential fragmentation and overlapping functions among some programs. Specifically, we identified six programs that provide training to improve the capabilities of foreign border security and customs officials to prevent smuggling and illicit nuclear shipments: (1) NNSA’s Second Line of Defense program, (2) NNSA’s International Nonproliferation Export Control Program, (3) NNSA’s Cooperative Border Security Program, (4) State’s Export Control and Related Border Security program, (5) DOD’s Weapons of Mass Destruction-Proliferation Prevention Program, and (6) DOD’s International Counterproliferation Program.
Similarly, we identified four programs that are involved in providing equipment to foreign governments to enhance the ability of their customs and border security organizations to detect nuclear smuggling: (1) NNSA’s Second Line of Defense program, (2) State’s Export Control and Related Border Security program, (3) DOD’s Weapons of Mass Destruction-Proliferation Prevention Program, and (4) DOD’s International Counterproliferation Program. In prior reports on nuclear nonproliferation programs, we have found that consolidating programs that share common goals and implement similar projects can maximize limited resources and may achieve cost savings or other programmatic and administrative efficiencies. Agency officials representing these programs told us that not all of them have the same focus, that some concentrate on specialized niches, and that many are complementary. For instance, regarding the provision of equipment, NNSA, State, and DOD officials noted that the Second Line of Defense program tends to provide larger equipment, such as radiation portal monitors and cargo scanning equipment, while the Export Control and Related Border Security program and the International Counterproliferation Program provide smaller-scale equipment, such as handheld radiation detection pagers, hazardous materials kits, and investigative suits, to foreign customs and border security organizations. Nevertheless, in our view, the fragmented and overlapping nature of the programs raises questions as to whether greater efficiency could be obtained through possible consolidation of such efforts. Furthermore, we found that no single federal agency has lead responsibility to direct federal efforts to prevent and detect nuclear smuggling overseas. In the past, we have reported that interagency undertakings can benefit from the leadership of a single entity with sufficient time, responsibility, authority, and resources to ensure that federal programs are based upon a coherent strategy and are well coordinated, and that gaps and duplication in capabilities are avoided. For instance, State and DOD officials told us that neither State nor any other federal agency has the authority to direct the activities or coordinate the implementation of programs administered by other agencies involved in preventing or detecting nuclear smuggling overseas. NSC has established mechanisms to coordinate interagency efforts in this area, including a Countering Nuclear Threats Interagency Policy Committee (IPC) and a sub-IPC for international nuclear and radiological border security efforts. However, NSC officials declined our request to discuss various aspects of the IPC structure and how it coordinates U.S. efforts to combat nuclear smuggling overseas, and some officials from other agencies expressed doubts about the value of NSC’s coordinating role. Notably, DOD officials told us that they believed NSC has played a negligible role in coordinating programs to counter nuclear smuggling. We made two recommendations to NSC to streamline and eliminate the potential for fragmentation and overlap among U.S. government programs involved in preventing and detecting the smuggling of nuclear materials overseas. Specifically, we recommended that NSC undertake, or direct an appropriate agency or agencies to conduct, a comprehensive review of the structure, scope, and composition of agencies and programs across the federal government involved in such efforts.
Such a review should include, among other things, (1) the level of overlap and duplication among agencies and programs and (2) the potential for consolidation into fewer programs and agencies. We also recommended that, following this review, new guidance be issued that incorporates the elements of effective strategic plans, including clearly delineated roles and missions for the relevant programs, specific priorities, performance measures, overall program costs, and projected time frames for program completion. NSC did not respond to these recommendations. In 2007, we issued a report at the Subcommittee’s request focusing on the security of radiological sources overseas. In the course of that work, we visited a number of hospitals and medical facilities in foreign countries and identified weaknesses in security. For example, in one country the security cable used to secure a teletherapy machine’s cobalt-60 source had been broken for almost a month. In another country, we observed that a storage facility containing devices with thousands of curies of cesium-137 had several unsecured large openings in the roof. Based on the findings in this report, the Subcommittee subsequently asked us to review the security of hospitals and medical facilities in the United States that use radiological sources. Hospitals and medical facilities in the United States are significant users of radiological sources contained in medical devices used primarily for cancer treatment and research. The amount of radiation emitted by the sources in these devices varies according to the size and type of source. For example, teletherapy machines contain a single cobalt-60 source ranging from about 1,000 to 10,000 curies, while some irradiators contain 27,000 curies or more of cesium-137. The following section provides preliminary findings from our ongoing work. NRC, which is responsible for regulating the security of radiological sources in U.S. hospitals and medical facilities, issued a security order in 2005 that directed licensees possessing radiological sources of concern to implement increased controls for access, detection and assessment, material shipments, physical barriers, and the protection of sensitive information. NRC has relinquished jurisdiction for licensing and regulating radiological sources to 37 states called Agreement States, whose program offices are typically administered by state health or environment departments and which inspect licensees to ensure compliance with state regulations that are generally compatible with NRC regulations. The Department of Veterans Affairs (VA) and DOD, which maintain networks of hospitals and medical facilities in the United States, are also required to meet the NRC security order for radiological sources of concern at their facilities. NRC’s security order and implementation guidance are broadly written and do not prescribe the specific steps that licensees must take to secure their sources. Rather, they provide a general framework for what constitutes adequate security practices. According to NRC, the intent of the increased controls is not to provide absolute security from theft or unauthorized access. Rather, the intent is to develop a combination of people, procedures, and equipment that will delay and detect an intruder and initiate a response to the intrusion. In addition, the controls provide minimum requirements that a licensee must implement, and licensees may go beyond the minimum requirements.
However, the ultimate responsibility for securing radiological materials in the United States rests with the licensees that possess these materials. The security order directs licensees to limit access to radiological sources and to develop a documented program to detect, assess, and respond to unauthorized access. The controls do not prescribe the types of physical security needed. It is up to the licensee to determine, for example, whether security cameras are necessary or what types of locks or alarms are needed to secure doors or windows. For some locations, such as blood banks, requirements for access control can be met if the room where the medical device is located is staffed 24 hours a day, 7 days a week by an individual, or individuals, determined to be trustworthy and reliable. As long as the room is staffed at all times, the facility is not required to have any additional physical security, such as cameras or motion detection equipment. As a result, the only access control in place could be one or more staff members. NRC also requires that hospitals and medical facilities verify the trustworthiness and reliability of individuals who are granted unescorted access to the medical devices containing radiological sources. The trustworthiness and reliability process requires that hospitals conduct a background check using employment history, academic records, and other relevant information. It is ultimately the responsibility of the licensee to decide whether to grant an employee unescorted access. In 2007, NRC issued an additional security order requiring individuals employed at facilities containing highly radioactive sources to undergo fingerprinting, with verification through the Federal Bureau of Investigation. According to NRC officials, the requirements are intentionally broad to allow licensees flexibility to tailor security upgrades to their specific facilities and operations. The ability to tailor security to a facility’s needs and resources is particularly important for commercial facilities with limited resources. For example, officials from smaller medical facilities told us that implementing specific security requirements—such as cameras and other surveillance equipment—could jeopardize their continued operations because of the costs associated with this equipment. NRC officials told us that given factors such as diverse economic conditions, facility types, layouts, and operations, a “one size fits all” approach is neither practical nor desirable. We found that the NRC controls have been implemented in a variety of ways in the hospitals and medical facilities we visited in seven states and the District of Columbia. These varied approaches have created a mix of security controls and procedures that could leave some facilities’ radiological sources more vulnerable than others to possible tampering, sabotage, or outright theft. At some locations, the controls resulted in significant security upgrades, such as the addition of surveillance cameras, upgraded locks on doors, and alarms. In contrast, we observed minimal security at other facilities. Moreover, law enforcement personnel from states with significant amounts of high-activity radioactive sources at hospitals and medical facilities told us that the NRC controls have an inherent weakness: the controls do not specify what the facility is protecting against and are not linked to a design basis threat.
Typically, a design basis threat characterizes the elements of a potential attack, including the number of attackers, their training, and the weapons and tactics they are capable of employing. Although NNSA does not use a design basis threat for its security assessments of hospitals and medical facilities, it does employ a threat scenario (known as potential adversary capability) as the basis for its recommendations for security enhancements. According to a VA official, VA initially developed a generic threat scenario for use at its facilities with larger-activity sealed radiological sources because NRC did not provide a design basis threat as part of the increased controls. Later, VA partnered with NNSA to implement security enhancements based on the NNSA threat scenario. The 25 sites we visited are a nongeneralizable sample, selected on the basis of the number of radiological devices and the total number of cumulative curies contained in those devices in each state; we also considered whether a site had undergone security upgrades funded by NNSA and whether it is located in a large urban area. At one facility we visited, irradiators are kept in a room with no cameras or other security measures inside; the door to the room is opened by a swipe card lock. We observed that one of the irradiators was sitting on a wheeled pallet. When we asked the radiation safety officer (RSO)—the designated hospital official responsible for the security of radiological sources—if he had considered removing the wheels, he said no. This response was given even though the irradiator room is located in close proximity to an external loading dock, and the cameras along the corridor to the loading dock are displayed on a single monitor. This facility had passed its most recent NRC security inspection because access to the room where the irradiators were located was restricted through use of a swipe card. However, it could be vulnerable because of the limited security we observed and the potential mobility of the device. At a hospital in a major U.S. city, we observed that the interior door to the hospital blood bank, which had a cesium-137 blood irradiator of approximately 1,500 curies, had the combination to the lock written on the door frame. The door is in a busy hallway with heavy traffic, and the security administrator for the hospital said that he often walks around erasing door combinations that are written next to the locks. According to NRC, a single lock is not necessarily a security weakness; however, agency officials noted that writing combinations on the door frame is a weakness. The RSO at a university hospital in another state told us that he did not know the exact number of individuals with unescorted access to the hospital’s radiological sources, although he said that there were at least 500 people—the current data system does not allow for entering records of more than 500 individuals. In the past, he said, the hospital had as many as 800 people with unescorted access to sources. In contrast, at a major medical research facility at a military installation we visited, access was limited to four safety and security personnel. At a blood center in a third state we visited, we observed a cesium-137 blood irradiator of approximately 1,400 curies in a room that was secured by a conventional key lock. The irradiator was located in the middle of the room and not secured to the floor. The room had an exterior wall with a bank of unalarmed and unsecured windows that looked out onto a loading dock.
The blood center officials said that while they met the controls, they acknowledged that the center is highly vulnerable to theft or sabotage of its radiological sources. According to NRC, an irradiator sitting in the middle of the floor and not bolted down is not necessarily vulnerable. Licensees are responsible for implementing the security requirements, including designing and implementing a security plan. Implementation includes procuring and installing surveillance and alarm equipment that the licensees believe is adequate to protect the radiological materials in their facilities. However, many of the officials at the 25 hospitals and medical facilities we visited told us that they have backgrounds in radiological safety and facilities management and have limited security experience. Furthermore, none of these officials had been trained in how to implement the controls. For example, at one hospital we visited, the RSO said that when the controls were instituted in 2005, his new responsibilities included ensuring the security of a cobalt-60 gamma knife of approximately 2,600 curies and a cesium-137 blood irradiator of about 2,400 curies. He told us that he was not comfortable with his security role because his training was as a health physicist. A facility manager who oversees the security of an approximately 1,700-curie cesium-137 blood irradiator at a blood bank told us that he has a background in construction, not security. He said that it would have been helpful if NRC’s controls were more specific so that he would be in a better position to determine what security measures were necessary to adequately protect the device. According to NRC, NRC and Agreement State inspectors receive training in security inspections, and only qualified inspectors can conduct them; qualification includes training and inspection accompaniments with qualified inspectors. However, some NRC and Agreement State inspectors we interviewed told us that they do not feel comfortable conducting security inspections at hospitals and medical facilities, despite having received this training. For example, an NRC inspector said that security inspections were particularly difficult for her because she is trained as a physicist. She said that the controls were confusing and that she did not understand the nuances of security. An Agreement State inspector from another state we visited also told us that he was not qualified to do security inspections. However, he said that he was doing the best he could to interpret the controls and help the licensees implement the requirements. Other inspectors from this state told us that they were placed in the awkward situation of having to enforce regulations that they did not believe they were fully qualified to interpret. We also found, based on recent NRC reviews of two Agreement States’ inspection programs, that these states lacked sufficient staff and adequate training to ensure the security of radiological sources. For example, NRC’s review of one state’s radioactive materials program found that the program experienced significant turnover and that inspectors did not have an adequate understanding of the controls. According to a state official, high staff turnover and the resulting lack of security experience affected the quality of the state’s oversight. As a result, inspectors had difficulty assessing licensee compliance with the security requirements.
According to NRC's review of the other state's radioactive materials program, the state's newer inspectors would have benefited from additional training on NRC's security requirements. A state inspector told NRC that he did not understand the meaning of some of the documentation he was reviewing. Another state official stated that he was authorized to inspect a radiological device independently (without being accompanied by a more experienced inspector) before he was ready to do so. Furthermore, according to state officials, staff turnover has significantly affected the state's timely follow-up of increased controls violations. NRC told us that, based on the findings of these reviews, it plans to take action in future reviews to remedy these problems.

NNSA has identified approximately 1,500 hospital and medical buildings in the United States that contain high-activity radiological sources. NNSA also estimates that these buildings cumulatively contain about 22 million curies of radioactive material. One of GTRI's components is the Domestic Material Protection program, which improves security at U.S. facilities with high-activity radiological sources, including hospitals and medical facilities, beyond NRC and Agreement State regulatory requirements. This voluntary program provides, among other things, U.S. hospitals with security upgrades to the devices that contain high-activity radiological sources. It also provides training for hospital personnel and local police departments through its Alarm Response Training program at the Y-12 National Security Complex in Oak Ridge, Tennessee. This training is designed to teach facility personnel and local law enforcement officials how to protect themselves and their communities when responding to alarms indicating the possible theft or sabotage of nuclear or radioactive materials. NNSA funds the cost of the security upgrades and training. However, the licensee is responsible for maintaining the security systems once the 3- to 5-year warranty period established by NNSA expires. NNSA officials estimated that the cost of maintaining the upgrades at each hospital is typically less than $10,000 per year. According to NNSA officials, as of December 2011, the program had spent an estimated $96 million to secure radiological sources at 302 U.S. hospitals and medical facilities. The program plans to complete voluntary security upgrades at all 1,503 hospital and medical buildings it has identified as high-risk by 2025, at a projected cost of $608 million. NNSA officials told us that the average cost to upgrade a medical building has been about $317,800. We plan to analyze these expenditures more fully during the course of our review.

Of the 25 hospital and medical facilities that we visited in seven states and the District of Columbia, 13 have received GTRI upgrades and 3 were in the process of receiving them. Officials from most of these 16 hospitals and medical facilities told us that GTRI's program enhanced the security of their facilities. We observed a number of security upgrades at the facilities we visited, including remote monitoring systems, surveillance cameras, hardened doors, iris scanners, motion detectors, and tamper-proof alarms.
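The GTRI cost figures cited above can be cross-checked with simple arithmetic. The short sketch below is our own illustrative calculation, not an NNSA model, and it treats the 302 upgraded facilities and the 1,503 identified buildings as comparable units even though NNSA may count them differently.

```python
# Consistency check of the GTRI cost figures cited above (our arithmetic).

spent_to_date = 96_000_000          # dollars spent as of December 2011
facilities_done = 302               # hospitals and medical facilities upgraded
total_buildings = 1_503             # buildings NNSA identified as high-risk
projected_total = 608_000_000       # projected cost to finish all buildings by 2025

average_cost = spent_to_date / facilities_done
print(f"Average cost per completed facility: ${average_cost:,.0f}")
# ~$317,881, in line with the roughly $317,800 average NNSA officials cited

remaining_buildings = total_buildings - facilities_done
remaining_funds = projected_total - spent_to_date
print(f"Buildings remaining: {remaining_buildings}")
print(f"Implied remaining spending: ${remaining_funds:,} "
      f"(~${remaining_funds / remaining_buildings:,.0f} per building)")
```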
NNSA has established criteria for determining which hospitals are eligible for assistance; it ranks facilities to be upgraded based on the relative risk of the radiological sources and the expected risk reduction resulting from the planned GTRI activity. The criteria NNSA uses include the attractiveness of the nuclear and radiological materials for theft or diversion; existing site security conditions; the threat environment; and proximity to a potential target, such as a large population center. Some hospital officials and police department personnel told us that the GTRI program is limited because it is voluntary and because of the potential financial burden placed on hospitals and medical facilities to maintain the upgrades beyond the 3- to 5-year warranty period. We found that some hospitals have declined the upgrades, including hospitals located in high-risk urban areas. For example:

At a blood bank in one of the states we visited with a cesium-137 blood irradiator of approximately 1,400 curies, staff told us that NNSA was prepared to upgrade the bank's security, but the blood bank decided not to participate because senior management wanted to wait until the blood bank moved to a new location, which it planned to do within the next 3 years. We observed that the blood irradiator appeared vulnerable—it was visible through an unalarmed and unsecured bank of windows overlooking an exterior loading dock. In February 2012, we contacted NNSA officials about this matter. As a result, NNSA and national laboratory officials met with the facility and developed a plan to secure the irradiator before the end of the fiscal year.

According to police department officials from one major U.S. city, one hospital with a blood irradiator of approximately 1,700 curies has declined the GTRI upgrades, even though the police department considers it a high-risk facility. The hospital officials told us in February 2012 that they decided not to implement the GTRI upgrades because of concerns about maintenance costs associated with the security equipment after the NNSA-funded warranty period expired. The RSO said that the security the hospital has in place is adequate. He added that the hospital is under serious budget pressure that makes it difficult to justify spending more money on protecting the sources.

Under the GTRI program, NNSA also upgrades some smaller sources, such as those contained in brachytherapy devices. Typically, these devices contain between 10 and 15 curies of iridium-192. The curie level is not high enough to be subject to NRC's security controls, but NNSA officials told us that the devices' portability makes them a potential target for theft. NNSA officials stated that GTRI completed security upgrades at some sites before it considered including brachytherapy devices. GTRI is in the process of revisiting these sites and implementing security enhancements. We observed GTRI upgrades for brachytherapy devices at some hospitals, including a device that was put in a locked closet. However, we did visit one GTRI-upgraded facility where the security of the brachytherapy device had not been upgraded. In this facility, there were no security cameras monitoring the area and, in particular, none in the room where the device was located. Furthermore, access to the room was controlled by a wooden door with a padlock, and we observed a hospital official retrieve the key to the padlock from an unlocked desk immediately outside the door.
Upon entering the room, we observed that the device was not secured to the floor, even though the hospital's own security protocol requires that it be. We are continuing to conduct our audit and plan to visit some additional medical facilities in the United States. We plan to issue our report later this year.

Chairman Akaka, Ranking Member Johnson, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Glen Levis, Assistant Director; Jeffrey Barron; Alysia Davis; William Hoehn; Will Horton; and Michelle Munn.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2009, President Obama announced an international initiative to secure all vulnerable nuclear material worldwide within 4 years. Leaders of 47 nations endorsed this effort at the 2010 Nuclear Security Summit and will meet again in March 2012 to evaluate their work and set new goals for nuclear security. The United States has been a leader in promoting nuclear nonproliferation efforts worldwide. GAO has issued numerous reports on U.S. nonproliferation programs administered by several agencies, including the departments of Energy (DOE), State, and Defense (DOD); and the Nuclear Regulatory Commission (NRC). This testimony, which is based primarily on previously issued reports, discusses (1) the U.S. strategy to secure all vulnerable nuclear material within 4 years, (2) U.S. agencies’ ability to track and evaluate the security of U.S. nuclear materials transferred to foreign countries, (3) challenges coordinating federal nuclear nonproliferation efforts, and (4) preliminary observations regarding GAO’s ongoing work on federal efforts to secure radiological sources in U.S. hospitals and medical facilities. To conduct its ongoing work, GAO visited 25 hospitals and medical facilities in 7 states and the District of Columbia. GAO is making no new recommendations, but continues to believe that implementation of the recommendations made in its recent reports complements and supports the administration’s goal of securing vulnerable nuclear material in a timely fashion. The President’s 4-year initiative is a worthwhile effort designed to accelerate U.S. and international efforts to secure nuclear material worldwide. However, as GAO reported in December 2010, the governmentwide strategy approved by the National Security Council (NSC) for the initiative lacked specific details regarding how the initiative will be implemented. As a result, key details associated with the initiative are unclear, including its overall estimated cost, time frame for completion of work, and scope of planned work. In its 2010 report, GAO recommended, among other things, that NSC lead the interagency development of a more detailed implementation plan for the President’s 4-year initiative. NSC did not comment on GAO’s recommendations. The United States also faces challenges accounting for and evaluating the security of U.S. nuclear material overseas. As GAO reported in September 2011, federal agencies are not able to fully account for U.S. nuclear material overseas that is subject to nuclear cooperation agreements. GAO also found that the agreements do not contain specific access rights that enable agencies to monitor and evaluate the physical security of U.S. nuclear material overseas. GAO found that the agencies responsible for reviewing foreign partners’ security are not doing so systematically. GAO suggested that Congress consider directing DOE and NRC to fully account for U.S. weapon-usable nuclear materials overseas and consider amending the Atomic Energy Act to require access rights allowing the United States to verify adequate protection of U.S. nuclear materials if future agreements cannot be negotiated to include such rights. GAO also reported in December 2011 on the challenges in coordinating U.S. governmentwide nonproliferation efforts. Specifically, GAO identified potential fragmentation and overlap among some U.S. programs that played a role in preventing and detecting the smuggling of nuclear materials overseas. GAO also found that no single federal agency had the lead responsibility to direct these efforts. 
GAO recommended, among other things, that NSC review U.S. programs working to prevent nuclear smuggling overseas to reduce fragmentation and potential overlap. NSC declined to comment on the recommendations. In addition to nuclear materials, the Summit plans to address the security of radiological sources—material that could be used to make a dirty bomb. Based on preliminary results from ongoing work on federal efforts to secure radiological sources in U.S. hospitals and medical facilities, GAO found that NRC’s security controls for hospitals and medical facilities do not prescribe the specific steps that must be taken to protect their radiological sources. GAO also found that medical facilities have implemented the controls in various ways. This has created a mix of security measures at the locations GAO visited that could leave some facilities more vulnerable than others. DOE’s National Nuclear Security Administration (NNSA) has established a voluntary program to upgrade the security of domestic facilities that have radiological sources. NNSA has made progress in securing domestic radiological sources, but some facilities have declined NNSA’s assistance, including hospitals located in high-risk urban areas.
The F/A-22 is planned to be an air superiority and ground attack aircraft with advanced features to make it less detectable to adversaries (stealth characteristics) and capable of high speeds for long ranges. It has integrated avionics that greatly improve pilots' awareness of the situation surrounding them. The objectives of the F/A-22 development program are to (1) design, fabricate, test, and deliver 9 F/A-22 development test aircraft, 2 nonflying structural test aircraft, 6 production representative test aircraft, and 37 flight-qualified engines; (2) design, fabricate, integrate, and test the avionics; and (3) design, develop, and test the support and training systems. The F/A-22 is being developed under contracts with Lockheed Martin Corporation, the prime contractor (for the aircraft), and Pratt & Whitney Corporation (for the engine). Following a history of increasing cost estimates to complete the development phase of the F/A-22 program, the National Defense Authorization Act for Fiscal Year 1998 established a cost limitation for both development and production. Subsequently, the National Defense Authorization Act for Fiscal Year 2002 eliminated the development cost limitation but left the production cost limit in place. The production program is now limited to $36.8 billion, and the current cost estimate of the development program is $28.7 billion.

Currently, the F/A-22 program is both in development and production. Development is in its final stages, and production has been ongoing since fiscal year 1999. The aircraft's development problems and schedule delays in completing flight testing have led to congressional concerns. The National Defense Authorization Act for Fiscal Year 2004 prohibited the obligation of $136 million in procurement funds until the Under Secretary of Defense, Acquisition, Technology, and Logistics, submitted to the congressional defense committees, among other things, a certification that the avionics software installed on test aircraft can operate at least 5 hours on average before certain types of avionics anomalies occur. The Under Secretary, the final authority in making acquisition decisions in DOD, has also included this criterion as a requirement for the F/A-22 program before entering initial operational test and evaluation (IOT&E).

The F/A-22 program has experienced several significant changes since it began development in 1986. First, the Air Force cannot afford to purchase the quantities of aircraft that were originally planned 18 years ago. This reduction in buying power is attributed, in large part, to increases in development time and cost due to the program's failure to employ a knowledge-based acquisition approach to developing the F/A-22. Second, in September 2002, the Air Force decided to add a more robust air-to-ground attack capability than previously envisioned, now deemed necessary to increase the utility of the aircraft. This capability will add significant cost to the program over the next 10 years. Lastly, the Air Force has determined that new computer processors and architecture are needed to support some planned enhancements, which will further increase program costs and risk. Since the F/A-22 acquisition program started in 1986, cost and schedule estimates have grown significantly, contributing to a loss in buying power. Development costs are now estimated at $28.7 billion, a 127 percent increase over the 1986 estimates.
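As a quick check on these growth figures, the percentages can be reproduced from the dollar amounts in this report. The sketch below is our own arithmetic, not an Air Force estimate; the unit procurement costs it uses are the 1988 and current figures discussed later in this report.

```python
# Reproducing the cost-growth percentages from the dollar figures in this report.

dev_cost_current = 28.7e9   # current development cost estimate (dollars)
dev_growth = 1.27           # reported 127 percent increase over the 1986 estimate

implied_1986 = dev_cost_current / (1 + dev_growth)
print(f"Implied 1986 development estimate: ${implied_1986 / 1e9:.1f} billion")  # ~$12.6 billion

unit_1988 = 69e6            # 1988 average unit procurement cost estimate
unit_current = 153e6        # current average unit procurement cost estimate
unit_growth = (unit_current - unit_1988) / unit_1988
print(f"Unit procurement cost growth: {unit_growth:.0%}")  # ~122 percent
```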
Planned development cycle time has grown from 9 years to 19 years, and the initial operational capability date has slipped more than 9 years, from March 1996 to December 2005. These schedule extensions, delays, and cost increases were major contributors to changes in the Air Force's initial plan to purchase 750 aircraft. Current Air Force budget estimates include plans to purchase 277 aircraft. Table 1 shows the changes in the F/A-22 program since its start in 1986, based on information provided in Selected Acquisition Reports over time. In our 1988 report, the average unit procurement cost was estimated by the Air Force to be $69 million. Today, after schedule delays and development problems, the estimated average unit procurement cost has grown to $153 million—almost a 122 percent increase. The Air Force does not expect the development program to be completed until 2005, and with IOT&E still to be completed, additional changes and costs are likely.

As we previously reported, the acquisition approach of the F/A-22 program has contributed to cost increases and schedule delays. Leading commercial firms that we studied employ an acquisition approach that evolves a product to its ultimate capabilities on the basis of mature technologies and available resources. Further, product enhancements are planned for subsequent development efforts only when technologies are proven to be mature and other resources are available. Our work has shown that commercial firms ensure that high levels of knowledge exist at three critical junctures in a development program. First, a match must be made between a customer's needs and the available resources—technology, engineering knowledge, time, and funding—before a new development program is launched. Second, a product's design must demonstrate its ability to meet performance requirements and be stable about midway through development. Third, the developer must show that the product can be manufactured within cost, schedule, and quality targets and is demonstrated to be reliable before production begins.

In contrast, the F-22 acquisition strategy from the outset was to achieve full capability in a "big bang" approach instead of evolving development in manageable increments of new capability. By not using an evolutionary approach, the Air Force took on significant risk and daunting technology challenges. The three critical technologies that were immature at the start of the program were low-observable materials, propulsion, and integrated avionics. Integrated avionics has been a source of major schedule delays and cost increases in the F/A-22 program. Starting the program with these immature technologies prevented the program from knowing cost, schedule, and performance ramifications until late in the development program, after significant investments had already been made. Efforts to mature technology cascaded into development, delaying attainment of design and production maturity. The overall result has been significant delays and substantially higher investments to buy over 60 percent fewer aircraft.

Developing an expanded air-to-ground attack capability for the F/A-22 will be costly and add risk to the program. The Air Force began development of the F/A-22 as a replacement for the F-15 air superiority fighter, with primary emphasis on the air-to-air role. It was never intended to have robust air-to-ground capability. The need for the aircraft was based on a projection that the Soviet Union would develop and produce large numbers of advanced fighter aircraft.
The F/A-22 was intended to identify, track, and kill advanced fighters before being targeted itself, giving it the edge and making it a more lethal and survivable aircraft than the F-15. However, the original Soviet threat never materialized. To enhance the utility of the F/A-22, the Air Force plans to develop a robust air-to-ground attack capability to engage a greater variety of ground targets, such as surface-to-air missile systems, that have posed a significant threat to U.S. aircraft in recent years. The Air Force has a modernization program, focused largely on a new robust air-to-ground capability, to improve the capabilities of the F/A-22. It has five developmental spirals planned over more than a 10-year period, with the initial spiral started in 2003. Table 2 shows each spiral as currently planned. In March 2003, the Office of the Secretary of Defense's Cost Analysis Improvement Group (CAIG) estimated that the Air Force would need $11.7 billion for the planned modernization program. The CAIG estimate included costs for development, production, and the retrofit of some aircraft. As of March 2003, the Air Force's approved F/A-22 program baseline did not include estimated costs for the full modernization effort. Instead, the Air Force estimate included $3.5 billion for modernization efforts planned through fiscal year 2009.

To support the F/A-22's expanded capability beyond Global Strike Enhanced, the Air Force has determined that its baseline computer architecture and critical avionics processors will need to be replaced. Current processors are old and obsolete, cannot be supported, and do not have sufficient capacity to meet the increased processing demands required for planned new air-to-ground capabilities beyond Global Strike Enhanced. As a bridge to meet this expanded capability, the Air Force plans to modify some avionics processors and purchase sufficient quantities to support production of the first 155 F/A-22 aircraft.

The F/A-22 is dependent on its onboard computers and software to perform its mission. Unlike other fighter aircraft, it has a highly advanced, integrated avionics system capable of detecting, identifying, and engaging the enemy at ranges beyond a pilot's vision. The key to the F/A-22 avionics lies in its fully integrated core architecture and its two central, networked computers called common integrated processors (CIP). CIPs use very high-speed integrated circuits to collect, process, and integrate data and signals from the aircraft's sensors. The CIP serves as the "brains" of the F/A-22's integrated avionics system and is unique to this aircraft. The primary processor in the CIP is the Intel i960MX microprocessor, which is used strictly for avionics processing. This 32-bit microprocessor is based on 1990s technology and operates at 25 MHz. By today's technology standards, the processor is considered obsolete and cannot support spiral developments beyond Global Strike Enhanced. In mid-2003, the manufacturer informed the Air Force that it planned to permanently shut down the i960MX production line by January 2004 because the microprocessor was no longer a viable product for the company. As a result, the Air Force decided in November 2003 to replace its computer architecture and avionics processors to support the F/A-22's expanded capabilities. In December 2003, the Air Force purchased its last 820 i960MX microprocessors.
According to program officials, this quantity and previously purchased quantities are sufficient to support production of 155 F/A-22 aircraft. These officials believe that with some minor upgrades to improve processing capacity, these processors will be able to support the baseline aircraft and the first two developmental spirals—Global Strike Basic and Global Strike Enhanced. However, the Air Force plans for the remaining production aircraft to include a new computer architecture and avionics processor needed to support the final two planned spirals—Global Strike Full and Enhanced Intelligence, Surveillance, and Reconnaissance. At the time of our review, the Air Force believed its best long-term solution to its avionics architecture and computer-processing shortfalls was a new, modern, open system architecture. Rather than start a new development program, the program office plans to leverage two other ongoing Air Force development or modification programs for this processing capability: the new architecture being developed for the F-35 and the new commercial off-the-shelf general-purpose processors designed for newer versions of the F-16. According to F/A-22 program officials, this new architecture will be state-of-the-art and will have ample processing capacity to accommodate all future air-to-ground capabilities as currently planned. These officials do not expect the new architecture to be fully developed and ready for installation in the F/A-22 for at least 5 to 6 years.

F/A-22 program officials acknowledge that this changeover of the F/A-22 computer architecture and avionics processor will be a time-consuming and costly effort and will likely create additional program risks. Air Force cost estimates are not yet available; nevertheless, program officials estimate the nonrecurring engineering costs alone could be at least $300 million. At the time of our review, the Air Force had not made a decision about retrofitting aircraft equipped with the i960MX microprocessor. Additional risks are likely because the new processor and architecture are being developed by other major aircraft programs and will require extensive integration and operational testing to ensure that the F/A-22 program does not encounter problems similar to those that have delayed integration and testing of the F/A-22's current avionics suite.

The F/A-22 program did not meet key testing goals established for fiscal year 2003 and required for the aircraft to begin IOT&E. The Air Force's efforts to stabilize avionics software and improve its performance have not been sufficiently demonstrated, and the entrance criterion previously set for starting IOT&E has been changed. In addition, the F/A-22 program is not performing as expected in some other key performance areas, including reliability and maintenance support. The ongoing problems have led to a revised test schedule, which has compressed the time to complete initial operational testing by 4 months, and have increased the potential for cost increases and delays in the full rate production decision. The program has made progress in correcting several of the design problems we identified in our March 2003 report. The Air Force changed the avionics stability metric planned as a criterion to enter IOT&E from an average of 20 hours between avionics software failures to a broader measure of an average of 5 hours between avionics software or hardware failures. Current testing shows the program continues to have problems meeting both the new and the old avionics stability metrics.
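Both the old and new stability criteria are mean-time-between-event measures: total avionics operating time divided by the number of qualifying failures. The sketch below illustrates that computation; the operating hours and anomaly count are hypothetical stand-ins chosen to reproduce the status reported below, with only the 20-hour and 5-hour thresholds taken from this report.

```python
# Illustrative mean-time-between-failures computation for an avionics
# stability metric. The 5- and 20-hour thresholds come from this report;
# the operating hours and failure counts below are invented for illustration.

def mean_time_between(operating_hours: float, event_count: int) -> float:
    """Average operating hours between qualifying failure events."""
    return operating_hours / event_count if event_count else float("inf")

hours = 270.0        # hypothetical avionics operating hours in the test period
anomalies = 100      # hypothetical software-plus-hardware anomalies (MTBAA basis)

mtbaa = mean_time_between(hours, anomalies)
print(f"MTBAA: {mtbaa:.1f} hours, {mtbaa / 5:.0%} of the 5-hour criterion")
# With these stand-in counts: 2.7 hours, 54 percent of the requirement,
# matching the January 2004 status discussed below.
```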
Because the F/A-22 avionics encountered frequent shutdowns over the last few years, many test flights were delayed. As a result, the Air Force Operational Test and Evaluation Center wanted assurances that the avionics would work before it was willing to start the IOT&E program. It established a requirement for a 20-hour performance metric that was to be demonstrated before IOT&E would begin. The metric was Mean Time Between Instability Events (MTBIE), and it tracked two distinct types of avionics software failures:

Hard failures (type 1), the most serious, which resulted in a complete avionics system shutdown and required a restart of the avionics system.

Significant failures (type 2), less serious failures that required the pilot to restart an individual subsystem that failed rather than the complete avionics system.

Using personal computers as an analogy, a type 1 failure would be equivalent to a failure of one's personal computer that requires it to be shut down and rebooted, except that restarting the F/A-22 avionics system could take substantially longer. A type 2 failure would be equivalent to a failure in a particular application, such as the word processing program shutting down. Even with such a failure, other software applications could still be operated while the word processing software was restarted. Likewise, in the case of the F/A-22, other applications would still be operable despite the failure of any single application, such as a shutdown in the communication, navigation, and identification system.

In July 2003, the Air Force decided to switch to a different metric—Mean Time Between Avionics Anomaly (MTBAA)—to measure the performance of the avionics software for the start of IOT&E. The new metric differs from its predecessor in two main ways: it (1) includes hardware and some subsystem software failures not previously counted and (2) requires an average of 5 hours without avionics anomalies, instead of 20 hours. According to Air Force operational test officials, they adopted this new metric because they believe it is a better measure of the avionics operational performance needed to start IOT&E, whereas the previous metric was more narrowly focused on software performance, excluding hardware failures. They also said the 5-hour criterion would provide a minimum amount of effective operational test time to efficiently conduct IOT&E. In turn, Congress included the new metric in the National Defense Authorization Act for Fiscal Year 2004. Testing as of January 2004 showed the program had achieved 2.7 hours—54 percent of the requirement. Once this criterion is achieved, the avionics must still undergo rigorous operational testing to demonstrate their effectiveness and suitability in a realistic environment.

Figure 1 shows the status of the MTBIE and MTBAA metrics. The figure shows that MTBIE, the previous criterion, was demonstrated at about 67 percent of the requirement. In addition, type 1 failures, which cause a complete shutdown of the avionics system, have significantly diminished; they are occurring only about once every 25 hours on average. This is the result of a substantial effort on the part of the Air Force and the contractor to identify and fix problems that led to the instability in the F/A-22 avionics software. Type 2 failures are still occurring frequently.
While less serious than the entire avionics suite shutting down, type 2 failures become serious if critical subsystem software shuts down when its function is needed for the success of the mission or the survivability of the aircraft. In September 2003, the F/A-22 contractor reported a high number of outstanding avionics Common Problem Reports. Of the 231 reports of problems not resolved, about 25 (or 11 percent) were identified as stability-related problems. The remaining 206 reports (89 percent) were the result of avionics performance or functional problems. For example, the communication, navigation, and identification subsystem accounted for nearly 36 percent of the total reports. Because the avionics system is essential to the success of the F/A-22, the integrated avionics still needs to be demonstrated to meet design specifications and operational requirements. Reductions in avionics performance could affect the ability of the F/A-22 to effectively carry out its expected missions.

The F/A-22 program is not meeting its reliability requirements, and it is not using a best practices approach. The Air Force established reliability requirements to be achieved at the completion of development and at system maturity. As a measure of the system's overall reliability, the Air Force established a requirement of 1.95 hours mean time between maintenance by the completion of development and 3 hours mean time between maintenance at system maturity. This measure of reliability represents the average flight time between maintenance actions. As of October 2003, the Air Force had only been able to demonstrate a reliability of about 0.5 flying hours between maintenance actions, or about 26 percent of the development requirement and 17 percent of the system maturity requirement. This has led to the development test aircraft spending more time than planned on the ground undergoing maintenance. During 2003, the Air Force identified 68 parts that had a high rate of failure, causing them to be removed or replaced and reducing the F/A-22's system reliability. The contractor has initiated programs to eliminate the high failure rates experienced by these parts. The canopy has also been experiencing failures during testing, achieving only about 15 percent of its expected 1,600-hour life. A second source for canopies is being developed, but until it has passed qualification testing, it cannot be used as an alternative to the high-failure canopies.

Best commercial practices for new product development require reliability to be demonstrated by the start of production. Our work has shown that product development engineers from leading commercial firms expect to achieve reliability requirements before entering production. They told us reliability is attained through an iterative process of design, test, analyze, and redesign. Commercial firms understand that once a system enters production, the costs to achieve reliability through this iterative design change process become significantly more expensive. The F/A-22 aircraft has been in production since fiscal year 1999, and the Air Force has 52 production aircraft on contract and an additional 22 aircraft on long-lead contracts, together representing 27 percent of the planned buy quantity. With 83 percent of the reliability requirement yet to be achieved through this iterative design change process, the Air Force can expect to incur additional development and design change costs.
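The reliability and production-commitment percentages above follow directly from the reported figures; the short calculation below (our arithmetic, using only numbers cited in this report) makes them explicit.

```python
# Reliability shortfall and production commitment, from figures cited above.

demonstrated = 0.5       # demonstrated mean time between maintenance (flying hours)
dev_req = 1.95           # requirement at completion of development
maturity_req = 3.0       # requirement at system maturity

print(f"Share of development requirement met: {demonstrated / dev_req:.0%}")          # ~26%
print(f"Share of maturity requirement met:    {demonstrated / maturity_req:.0%}")     # ~17%
print(f"Maturity requirement still unmet:     {1 - demonstrated / maturity_req:.0%}") # ~83%

committed = 52 + 22      # production aircraft on contract plus long-lead aircraft
planned_buy = 277        # currently planned quantity
print(f"Share of planned buy already committed: {committed / planned_buy:.0%}")       # ~27%
```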
If the Air Force fails to improve the F/A-22's reliability before fielding the aircraft, the high failure rates will result in higher operational and support costs to keep the aircraft available for training or combat use. The F/A-22 is designed to have a computerized and paperless maintenance system that monitors, diagnoses, identifies, and reports failures to maintenance crews and that is intended to allow a faster maintenance turnaround to flight status. The onboard Diagnostics Health and Management system constantly monitors the aircraft's systems and the performance of both hardware and software. It collects, analyzes, stores, and reports failures. Critical failures are reported to the pilot, and all failures are stored in a portable database for later use by ground maintenance crews. At the completion of a flight, the database is removed from the aircraft and downloaded into a system on the ground, the Integrated Management Information System, a network of computers the maintainers use to process the maintenance and support information. This system further analyzes the downloaded information to determine the problems and match failures with the appropriate digitized technical order data needed to make the repairs. This information is then loaded into handheld portable computers that the technicians use to repair the aircraft.

According to DOD and Air Force test officials, these systems have been generating false reports of failures, which have caused maintenance staff to spend more hours than planned replacing items unnecessarily and trying to identify the actual problems. In addition, the maintenance systems are not providing all the technical data needed to repair the aircraft, making repairs more difficult. The test officials said they do not have precise data to quantify the extent of the problems but noted that the problems have disrupted maintenance activities. A key indication has been the inability to fly aircraft as planned. We found that between October 2003 and January 2004 the test force could fly only about 53 percent of the planned test flights and that the maintenance problems were a key contributor to this poor flying performance. Air Force officials do not expect the maintenance systems to be fully mature until December 2005. Consequently, the program office has had to provide additional funding to the contractor to purchase special test equipment that will be used to support maintenance requirements during operational testing. Moreover, because these systems will not be fully available during operational testing, it may be difficult to assess their real performance.

Progress in F/A-22 flight testing was slower than expected in 2003, and the start of IOT&E was delayed an additional 7 months due to avionics and other problems. Realizing the Air Force would not be ready to enter initial operational testing as previously planned, the Office of the Secretary of Defense requested that the F/A-22 program establish a new operational test plan that included measures to ensure the aircraft and its avionics are ready before entering operational testing. In response, the Air Force put in place a two-phase operational test program. Phase 1, also called an operational assessment, is not the official start of operational testing; it is intended to assess the F/A-22's readiness for IOT&E.
Started in October 2003, phase 1 calls for testing two F/A-22 aircraft to conduct live air-to-air missile shots, fly one-ship and two-ship formation operational sorties, and assess the computerized maintenance system's maturity. It will include some flight tests that are planned to be repeated in IOT&E if the aircraft configuration changes. Phase 2 testing is considered the actual start of IOT&E. To begin this phase, the Air Force must meet a number of criteria. Perhaps most importantly, it must demonstrate that the F/A-22 integrated avionics will be able to operate for sufficient lengths of time without shutting down. Other criteria that must be met prior to IOT&E include the availability of four fully configured F/A-22 test aircraft and one spare aircraft, the completion of live missile shots, the completion of key aircraft flight envelope testing (the planned speed, altitude, and maneuver boundaries of the F/A-22), the completion of operational pilot and maintenance training, a usable system with technical data to fix problems, and the software upgrades to the maintenance system. Figure 2 compares the changes in the planned test program since our last report.

According to Air Force test officials, results of some phase 1 tests could be used to satisfy IOT&E requirements if the aircraft and software configurations do not change for IOT&E. This could reduce the scope of the test effort planned during IOT&E. The Defense Acquisition Board is scheduled to review the F/A-22's readiness for IOT&E in March 2004. At the present time, the Air Force expects to complete IOT&E in October 2004, before the full rate production decision, now expected in December 2004. The time allotted to complete IOT&E under the new test plan, however, has been compressed by 4 months, assuming phase 1 testing results are not permitted to be used for IOT&E. This means the Air Force would have less time than previously planned to complete the same amount of testing. If the Air Force continues to experience delays in testing prior to IOT&E, then the full rate production decision would also have to be delayed until IOT&E is complete and the Beyond Low Rate Initial Production Report is delivered to Congress. There is no consensus within DOD on the Air Force's ability to meet this October 2004 milestone. The Director of Operational Test and Evaluation, Office of the Secretary of Defense, believes the start of testing will slip, although the Air Force maintains it will meet its schedule.

The Air Force has corrected design problems discussed in our March 2003 report. To correct the movement, or buffeting, of the vertical fins in the tail section of the aircraft, the Air Force designed and implemented modifications that strengthen the fin and hinge assemblies. Because of this problem, the Air Force had placed restrictions on flights below 10,000 feet; testing was done above and below 10,000 feet, and the flight restrictions were removed. Likewise, the Air Force modified the aircraft to address overheating concerns in the rear portion of the aircraft by adding thermal protection and strengthening strategic areas in the aft tail sections. The Air Force also plans to modify later production aircraft using a new venting approach to resolve the heat problems. We reported that the Air Force had also experienced separations in the horizontal tail materials. After additional testing, the Air Force determined that the original tails met requirements established for the life of the airframe.
However, the Air Force redesigned the tail to reduce producibility costs. Tests will be performed on the redesigned tail in late 2004.

DOD has not provided Congress with sufficient information to support the business case for buying and modernizing the F/A-22 program. In our testimony of April 11, 2003, before the Subcommittee on National Security, Emerging Threats, and International Relations, House Committee on Government Reform, we stressed that the issue was not whether the F/A-22 should be produced, but rather in what quantities it is needed—as justified by a business case. We discussed the current and future environments in which the F/A-22 investment decision would have to be made, including the need to consider opportunity costs inside and outside DOD. DOD plans to invest an average of $150 billion a year over the next several years to keep legacy systems working while at the same time modernizing and transforming U.S. national defense capabilities for the future. The F/A-22 program represents a sizable investment and must compete with other demands within the defense budget. This competition requires a knowledge-based approach to justify acquisition investment decisions and an efficient acquisition process to ensure programs are implemented within the expectations set in associated business cases.

Since the start of the F/A-22 program, acquisition costs have increased, the aircraft's mission and key capabilities have expanded, fewer aircraft are affordable, and delivery to the user has been delayed. The Air Force currently estimates the total F/A-22 acquisition program will cost about $72 billion, excluding all costs estimated to complete the spiral improvement effort. Including these costs brings the estimated total investment for the F/A-22 program to about $80 billion. Through fiscal year 2004, about one-half of this investment has been funded. In light of the changes in the program and the investments that remain, the Subcommittee on National Security, Emerging Threats, and International Relations, House Committee on Government Reform, asked DOD to provide a business case justifying the Air Force's planned number of F/A-22s (276 at that time) as well as how many F/A-22s are affordable. In its response, DOD did not sufficiently address key business case questions such as how many F/A-22s are needed, how many are affordable, and whether alternatives exist to the planned investments to increase the F/A-22's air-to-ground capabilities. Instead, DOD stated it planned to buy 277 F/A-22s based on a "buy to budget" concept that determines quantities based on the availability and efficient use of funds by the F/A-22 program office. Furthermore, justification for expanding the capability, an estimated $8 billion to $12 billion investment, was not addressed in DOD's response. While ground targets such as surface-to-air missile systems are acknowledged to be a significant threat today, the business case did not establish a justification for this investment or state what alternatives were considered. For example, the F-35 aircraft is also expected to have an air-to-ground role, as are planned future unmanned combat air vehicles. These could be viable alternatives to this additional investment in F/A-22 capability. While the business case information submitted to Congress called for 277 aircraft, DOD stated it could only afford to acquire between 216 and 218 aircraft within the congressionally imposed cap on production costs—currently $36.8 billion.
DOD expects that improvements in manufacturing efficiencies and other areas will provide it with sufficient funds to buy additional F/A-22 aircraft. However, this seems an unlikely scenario given the program's history. Under the "buy to budget" approach, the previous $876 million increase in development costs was funded by taking funds mostly from production, thus reducing aircraft quantities by 49. With testing still incomplete and many important performance areas not yet demonstrated, additional increases in development costs are likely.

While DOD and the Air Force are focused on completing IOT&E and making a decision to go into full rate production, a more basic issue needs to be addressed. The conditions driving the business case that spurred the major investment decision to initially develop and buy 750 F-22 aircraft have changed, and a revised and comprehensive business case assessment has not been completed and shared with congressional defense oversight committees. At the present time, it is uncertain how many F/A-22s are needed. The program has been in development for about 18 years, and DOD has invested over $40 billion. This investment represents about one-half the estimated costs projected for the entire F/A-22 program. Therefore, DOD must still make investment decisions affecting another $40 billion to support this program through full rate production and implementation of the spiraled modernization effort. Based on current design problems and the development efforts that remain, the F/A-22 program's affordability is uncertain. Current conditions suggest the Air Force cannot afford to buy much more than 218 aircraft within the cost limitation imposed by Congress.

In light of the uncertainty concerning how many aircraft are needed in today's environment, the large investments that remain, and the unknown outcomes of planned initial operational testing, we continue to be concerned about DOD's readiness to make a December 2004 decision to enter full rate production. Furthermore, IOT&E, intended to demonstrate the F/A-22's effectiveness and suitability, has not started and may not be completed as planned, which may delay the full rate production decision. With this testing outstanding, the risk is high that additional development funding will be needed to resolve problems that could emerge. Given the sizable investment that remains in the F/A-22 program, the uncertainties, and the ever-changing financial demands on DOD, Congress and the Secretary of Defense would benefit from a comprehensive assessment of the number of F/A-22 aircraft needed as well as assurance that problems identified in initial operational testing will be identified and resolved. Specifically, we recommend that the Secretary of Defense take the following two actions: Complete a new business case analysis that determines the continued need for the F/A-22 and that specifically (a) addresses the need for an expanded air-to-ground capability and an assessment of alternatives, to include the feasibility of using other assets like the F-35 and unmanned aerial vehicles planned for the future; (b) justifies the quantity of F/A-22 aircraft needed to satisfy requirements for air-to-air and air-to-ground missions; and (c) provides evidence that the planned quantity is affordable within current budgets and the congressional funding limitation. The Secretary should provide the results of the business case analysis to the defense committees before the decision to start full rate production.
Before the full rate production decision is made and in conjunction with the Beyond Low-Rate Initial Production Report, provide the defense committees a plan that shows how the Air Force will correct and fund any major problems identified and still open after IOT&E is completed.

In written comments on a draft of this report, DOD stated that it partially concurred with our two recommendations. Regarding our first recommendation on completing a new business case for the F/A-22, DOD stated that it evaluates the F/A-22 business case elements as part of the annual budget process. Additionally, DOD's response acknowledged that this year the department is undertaking a broader set of reviews under the Joint Capabilities Review process; the F/A-22 will be a part of this review. The President's budget submission to Congress will reflect the results of these reviews of the F/A-22 business case. We believe that the various reviews and assessments in the budget process, along with the Joint Capabilities Review process, present excellent opportunities for DOD to conduct a business case analysis. Other opportunities for completing the business case analysis include the independent and in-depth study requested by the Office of Management and Budget for the Comanche and F/A-22 programs. It is important, however, that the analysis sufficiently address the specific business case elements included in our recommendation—analysis of continued need, the need for expanded air-to-ground capability, an assessment of alternatives, justification of needed quantities, and evidence that planned quantities are affordable. In addition, it is important that the outcomes of the business case analysis be provided to Congress prior to the full rate production decision.

Regarding our second recommendation on providing Congress the plans to resolve outstanding problems after the completion of IOT&E, DOD stated that the law already requires the Director, Operational Test and Evaluation, to submit to Congress a Beyond Low Rate Initial Production Report that includes the results of operational testing. Since this report is an independent assessment of test results, the department did not believe it appropriate to include in it Air Force plans and costs for corrective actions stemming from operational testing. However, DOD will present these actions and costs to the Defense Acquisition Board for decisions on the F/A-22 program that will be included in the President's budget submission to Congress. We understand the legal requirements for submitting the Beyond Low Rate Initial Production Report, and we recognize that it is an independent report submitted by the Director, Operational Test and Evaluation. The intent of our recommendation is not to modify the report itself, but to ensure corrective actions and resultant costs are identified and reported in a timely fashion and before the full rate production decision is made. Because plans and costs could span several years, such information may or may not be captured in annual budget submissions. We have modified our recommendation to clarify our intent.

To determine changes in the F/A-22 program since its inception, we analyzed cost information from Selected Acquisition Reports and obtained information from the Air Force on its plans to modernize the F/A-22 to include enhanced air-to-ground capabilities. We compared prior cost information with the Air Force's current estimates to complete development and production of the F/A-22.
To determine the impact of development and testing on program outcomes, we examined the extent to which the development program is meeting planned flight test goals for 2003 and the Air Force's planned entry criterion for starting initial operational testing. In examining the sufficiency of the business case DOD provided to a congressional oversight committee, we obtained a copy of the business plan and analyzed the various DOD assumptions and approaches used to reach its conclusions. To make these determinations and assessments, we required access to current information about test results, performance estimates, schedule achievements and revisions, costs being incurred, aircraft modifications, and the program's plans for continued development and initial production. The Air Force and the contractors gave us access to sufficient information to make informed judgments on the matters covered in this report. In performing our work, we obtained information or interviewed officials from the Office of the Secretary of Defense, Washington, D.C.; the F/A-22 System Program Office, Wright-Patterson Air Force Base, Ohio; Lockheed Martin, Marietta, Georgia; the Defense Contract Management Agency, Marietta, Georgia; the Air Force Operational Test and Evaluation Center, Kirtland Air Force Base, New Mexico; and the Combined Flight Test Center, Edwards Air Force Base, California.

We are sending copies of this report to the Secretary of Defense; the Secretary of the Air Force; and the Director, Office of Management and Budget. Copies will also be made available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or Michael J. Hazard at (937) 258-7917 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II. Marvin E. Bonner, Edward Browning, Roger Corrado, Steve Hunter, Gary Middleton, and Robert Ackley made key contributions to this report.

Best Practices: Better Acquisition Outcomes Are Possible If DOD Can Apply Lessons from F/A-22 Program. GAO-03-645T. Washington, D.C.: April 11, 2003.
Tactical Aircraft: Status of the F/A-22 Program. GAO-03-603T. Washington, D.C.: April 2, 2003.
Tactical Aircraft: DOD Should Reconsider Decision to Increase F/A-22 Production Rates While Development Risks Continue. GAO-03-431. Washington, D.C.: March 14, 2003.
Tactical Aircraft: DOD Needs to Better Inform Congress about Implications of Continuing Cost Growth. GAO-03-280. Washington, D.C.: February 28, 2003.
Tactical Aircraft: F-22 Delays Indicate Initial Production Rates Should Be Lower to Reduce Risks. GAO-02-298. Washington, D.C.: March 5, 2002.
Tactical Aircraft: Continuing Difficulty Keeping F-22 Production Costs Within the Congressional Limitation. GAO-01-782. Washington, D.C.: July 16, 2001.
Tactical Aircraft: F-22 Development and Testing Delays Indicate Need for Limit on Low-Rate Production. GAO-01-310. Washington, D.C.: March 15, 2001.
Defense Acquisitions: Recent F-22 Production Cost Estimates Exceeded Congressional Limitation. GAO/NSIAD-00-178. Washington, D.C.: August 15, 2000.
Defense Acquisitions: Use of Cost Reduction Plans in Estimating F-22 Total Production Costs. GAO/T-NSIAD-00-200. Washington, D.C.: June 15, 2000.
Budget Issues: Budgetary Implications of Selected GAO Work for Fiscal Year 2001. GAO/OCG-00-8. Washington, D.C.: March 31, 2000.
F-22 Aircraft: Development Cost Goal Achievable If Major Problems Are Avoided. GAO/NSIAD-00-68. Washington, D.C.: March 14, 2000.
Defense Acquisitions: Progress in Meeting F-22 Cost and Schedule Goals. GAO/T-NSIAD-00-58. Washington, D.C.: December 7, 1999.
Fiscal Year 2000 Budget: DOD's Production and RDT&E Programs. GAO/NSIAD-99-233R. Washington, D.C.: September 23, 1999.
Budget Issues: Budgetary Implications of Selected GAO Work for Fiscal Year 2000. GAO/OCG-99-26. Washington, D.C.: April 16, 1999.
Defense Acquisitions: Progress of the F-22 and F/A-18E/F Engineering and Manufacturing Development Programs. GAO/T-NSIAD-99-113. Washington, D.C.: March 17, 1999.
Following a history of increasing cost estimates to complete F/A-22 development, Congress asked GAO to assess the Air Force's F/A-22 development program annually and determine whether the Air Force is meeting key performance, schedule, and cost goals. On April 23, 2003, a congressional subcommittee requested that the Department of Defense (DOD) provide more detailed information on the business case that supports the estimated quantities and costs for an affordable F/A-22 program. Specifically, GAO (1) identified changes in the F/A-22 program since its inception, (2) reviewed the status of the development activities, and (3) examined the sufficiency of business case information provided for congressional oversight.

The Air Force is developing the F/A-22 aircraft to be less detectable to adversaries, capable of high speeds for long ranges, and able to provide a pilot with improved awareness of the surrounding situation through integrated avionics. In addition, the Air Force plans to expand the F/A-22's ability to engage targets on the ground to provide a robust capability not originally planned at the start of the program. The Air Force plans to begin initial operational test and evaluation in March 2004 and to seek full rate production approval in December 2004.

The F/A-22 program has experienced several significant changes since it began development in 1986. First, the Air Force cannot afford to purchase the quantities of aircraft that were planned 18 years ago. The Air Force had originally planned to buy 750 aircraft, but it now estimates it can afford only 218 aircraft. Second, in order to develop the expanded air-to-ground attack capability, the Office of the Secretary of Defense estimates that the Air Force will need $11.7 billion in modernization funding. Lastly, the Air Force has determined that new avionics computer processors and architecture are needed to support most planned enhancements, which will further increase program costs and risk. Further, the development test program continues to experience problems and risks further delays. The F/A-22's avionics continue to experience shutdowns and failures. Moreover, the F/A-22 has not met its reliability requirements and has experienced failures in its computerized maintenance support system. This has led to aircraft spending more time on the ground undergoing maintenance.

Due to the risks of future cost increases and schedule delays, a congressional subcommittee requested that DOD provide business case information on the F/A-22. However, the information DOD provided did not address why this aircraft is needed given current and projected threats. The business case also did not address how many aircraft the Air Force needs to accomplish its missions, how many the Air Force can afford considering the full life-cycle costs, whether investments in new air-to-ground capabilities are needed, and what opportunity costs are associated with purchasing any proposed quantities of this aircraft. While the response stated that the Air Force still plans to buy 277 F/A-22 aircraft, the Air Force estimates that only 218 aircraft are affordable within congressionally imposed funding limitations. In addition, significant investment decisions remain and could affect another $40 billion to support this program through full rate production and implementation of the spiraled improvement efforts.
In light of the uncertainty concerning how many aircraft are needed in today's environment, the large investments that remain, and the unknown outcomes of planned operational testing, GAO continues to have concerns regarding DOD's readiness to make a full rate production decision.
Some DI benefit recipients have incomes low enough to qualify them for SSI as well and receive benefits from both programs. To qualify under either program, an individual must be unable to engage in substantial gainful activity because of a severe physical or mental impairment. The standards for determining whether the severity of an applicant’s impairment qualifies him or her for disability benefits are set out in the Social Security Act and SSA regulations and rulings. SSA’s disability claims process is complex, multilayered, and lengthy. Potential beneficiaries apply for benefits at any one of SSA’s local field offices, where applications are screened for nonmedical eligibility: applicants for DI must meet certain work history requirements, and applicants for SSI must meet financial eligibility requirements. If the applicants meet the nonmedical eligibility requirements, their applications are forwarded to a state disability determination service (DDS), which gathers, develops, and reviews the medical evidence and prior work history to determine the individual’s medical eligibility; the DDS then issues an initial determination on the case. Applicants who are dissatisfied with the determination may request a reconsideration decision by the DDS. Those who disagree with this decision may appeal to SSA’s Office of Hearings and Appeals (OHA) and have the right to a hearing before one of the administrative law judges (ALJ) located in hearings offices across the country. Individuals who disagree with the ALJ decision may pursue their claim with SSA’s Appeals Council and ultimately may appeal to a federal district court. This process can be both time-consuming and confusing for applicants and may compel many of them to seek help from an attorney. Obtaining representation for a pending case has become increasingly common because disability representatives have been successful in obtaining favorable decisions for their clients upon appeal. In fiscal year 1997, about 70 percent of all cases decided at the ALJ-hearing level involved representatives. The fees attorneys representing DI and SSI applicants can charge are limited by law and must be approved by SSA. In order to be compensated, attorneys must file either a fee agreement—a formal contract signed by the applicant and the attorney setting the fee as a percentage of the applicant’s past-due benefits—or a fee petition that details the specific costs associated with the case. Past-due benefits are calculated by multiplying the monthly benefit amount by the total number of months from the month of entitlement up to, but not including, the month SSA effectuates the favorable disability decision. When fee agreements are filed, attorney fees are limited to 25 percent of the applicant’s past-due benefits, up to $4,000 per case. In fee petition cases, however, SSA can approve any fee amount as long as it does not exceed 25 percent of the beneficiary’s past-due benefits. For DI cases, SSA usually withholds the amount of the fee from the beneficiaries’ past-due benefits and pays the attorneys directly, in effect guaranteeing payment for the attorney. In SSI cases, however, SSA does not have the authority to pay attorneys directly and only calculates the amount an attorney is due. Attorneys must instead collect their fees from the SSI recipients. Effective February 1, 2000, the Ticket to Work Act imposed on attorneys a user fee of 6.3 percent to cover SSA’s costs associated with “determining and certifying” attorney fees payable from beneficiaries’ past-due benefits. This amount is deducted from the approved attorney’s fee.
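The fee rules just described reduce to straightforward arithmetic. The following is a minimal sketch of that arithmetic, not SSA’s actual payment logic; the function names and the sample case are invented for illustration.

# Illustrative sketch only -- not SSA code. It applies the rules stated
# above: past-due benefits are the monthly benefit times the number of
# months from entitlement up to (not including) the effectuation month;
# under a fee agreement the fee is 25 percent of past-due benefits,
# capped at $4,000; and the 6.3 percent user fee is deducted from the
# approved fee before SSA pays the attorney directly (DI cases only).

FEE_CAP = 4_000.00       # statutory cap under the fee agreement process
FEE_SHARE = 0.25         # fees limited to 25% of past-due benefits
USER_FEE_RATE = 0.063    # Ticket to Work Act rate for calendar year 2000

def past_due_benefits(monthly_benefit: float, months: int) -> float:
    """Monthly benefit times months of entitlement before effectuation."""
    return monthly_benefit * months

def fee_agreement_fee(past_due: float) -> float:
    """Lesser of 25 percent of past-due benefits or the $4,000 cap."""
    return min(FEE_SHARE * past_due, FEE_CAP)

# Hypothetical case: $800 monthly benefit, 20 months of past-due benefits.
past_due = past_due_benefits(800.00, 20)   # $16,000.00
fee = fee_agreement_fee(past_due)          # capped at $4,000.00
net_to_attorney = fee * (1 - USER_FEE_RATE)
print(f"fee ${fee:,.2f}, attorney receives ${net_to_attorney:,.2f}")
# -> fee $4,000.00, attorney receives $3,748.00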
The act also directed us to study a number of issues related to the costs of determining and certifying the attorney fees, “efficiencies” available to reduce these costs, changes to the attorney fee requirements, and the new user fee. While SSA has been paying attorney fees for over 30 years, the payment process itself is inefficient, and the costs of the process are not known. Approving and paying attorney fees is a complex process that involves many steps; a number of staff in different units and locations; and various information systems that are not linked and that, therefore, require considerable manual intervention. Regarding the costs to administer this multistep process, we have not yet fully determined whether SSA’s past estimate appropriately captured the costs associated with administering attorney fees; however, the agency is currently developing a way to capture actual costs. Attorneys are compensated for their services through either a fee agreement or a fee petition. Attorneys told us that the fee agreement is usually an easier, quicker way to get paid and that, although the fee petition is useful, it is also a more cumbersome tool used primarily when potential fees exceed the statutory limits or when attorneys are unable to file a fee agreement at the beginning of a case. In 1999, fee agreements accounted for about 85 percent of attorney payments, and fee petitions accounted for the balance. Figure 1 shows the steps involved in processing attorney fee agreements. First, officials in SSA’s field offices or ALJs in OHA—depending on where the case is being determined—review fee agreements for DI and SSI cases to assess the reasonableness of the attorney fee charges. If a favorable decision is made on the case and SSA approves the fee agreement, both items—the applicant’s case and the fee agreement—are forwarded to a processing center for payment. All parties involved—SSA, the beneficiary, and the attorney—may question the amount of the attorney’s fee, and the fee may be changed if warranted. The Ticket to Work Act requires SSA to impose an assessment, or user fee, to pay for the costs the agency incurs when paying attorneys directly from a claimant’s past-due benefits. For calendar year 2000, the act established the user fee at 6.3 percent of the attorney fees; for calendar years after that, the percentage charged is to be based on the amount the Commissioner determines necessary to fully recover the costs of “determining and certifying” fees to attorneys, but no more than 6.3 percent. The actual costs of administering attorney fees are not yet known because SSA was not required to capture these costs in its information systems and did not have a methodology for doing so. The 6.3 percent user fee in the law was based on an estimate prepared by the agency. Documentation SSA provided us indicates that the percentage was computed by multiplying the numbers of fee petitions and fee agreements the agency processed in 1994 by the amount of time SSA determined it spent on various related activities. When data were not available on the volume of activities or the time spent on them, SSA used estimates. The agency’s overall cost estimate included both the time spent by the ALJs reviewing documentation to support the attorney fees—that is, the fee petitions and fee agreements—and the processing centers’ costs associated with calculating the fees, choosing the notice language, and preparing the notices.
In addition, the agency included the cost of administering the user fee itself. We recently received information on the basis for SSA’s 6.3 percent user fee calculation and have only begun to analyze the assumptions the agency used to compute it. In order to comply with the Ticket to Work Act, SSA is currently in the process of developing a methodology to capture the current costs of administering the attorney fee provisions. These costs could then provide the foundation for the agency’s decisions about what the rate should be to achieve full recovery of costs. SSA has established a work group to identify the components of administering attorney fees and to develop its new methodology. Thus far, the work group has identified four components associated with the cost of administering attorney fees: (1) the time that SSA field office staff spend informing claimants that they are entitled to legal representation when filing an appeal; (2) the time it takes an ALJ to review and approve the fee; (3) the charges incurred by SSA’s Office of Systems to program systems to track attorney fee cases and related computing time to generate a payment file/tape for Treasury to use to pay the attorney; and (4) the process for calculating the attorney fee, entering relevant attorney and other key data into SSA’s information systems, and certifying related amounts for payment. In April and May of this year, SSA work group officials told us that they do not plan to capture cost information from the first two components because it would be time-consuming to do so, and the methods currently available to SSA for capturing these two types of costs may not produce reliable results. For the third component, SSA officials told us they can readily gather cost information related to time spent on programming SSA’s systems to track attorney fees. However, SSA does not have a cost allocation methodology in place to determine related computing time for processing attorney fees. SSA officials indicated that computing time would constitute an insignificant portion of SSA’s total costs to administer attorney fees. To capture data on the fourth component, SSA modified one of its information systems in February 2000 to determine the number of attorney fee cases it administers. Using the number of cases it processes, SSA is working on a methodology to estimate the costs involved in this fourth component for paying attorneys. SSA plans to have these cost data available by the end of fiscal year 2000. However, in commenting on a draft of this statement, SSA officials told us that they do plan to capture costs for the second component—the time it takes the ALJ to review and approve the fee. Our reading of the law suggests that the cost of ALJ time spent reviewing and approving fees is part of the cost of “determining and certifying” fees and may represent a significant portion of the total costs. As SSA determines the ALJ costs under its current approach, it will need a methodology that accurately allocates those costs to the DI cases for which SSA pays attorneys directly. Attorneys we talked with told us they are concerned that they are now paying more than their fair share of the cost of the process. Attorneys have also expressed concern about the length of time it takes SSA to process their fees and have questioned the appropriateness of charging a user fee for a service that takes so long.
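The 1994 estimate described above was, in essence, a volume-times-time cost build-up. The sketch below shows the general shape of such a calculation; every volume, time, and rate in it is hypothetical, since the actual inputs SSA used are not given here.

# General shape of a volume-times-time cost estimate like the one SSA
# used in 1994, per the description above. Every figure is hypothetical.

activities = [
    # (activity, annual case volume, staff hours per case, hourly cost)
    ("review fee agreements",              170_000, 0.50, 40.00),
    ("review fee petitions",                30_000, 2.00, 55.00),
    ("calculate fees and prepare notices", 200_000, 0.75, 35.00),
]

estimated_cost = sum(vol * hrs * rate for _, vol, hrs, rate in activities)

# Hypothetical total attorney fees certified in the year.
total_fees = 250_000_000.00

# For years after 2000, the rate must recover costs but may not exceed
# 6.3 percent of the attorney fees.
user_fee_rate = min(estimated_cost / total_fees, 0.063)
print(f"estimated cost ${estimated_cost:,.0f}, rate {user_fee_rate:.1%}")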
In regard to the user fee, you specifically asked us to look at issues surrounding (1) linking the amount of the user fee to the timeliness of the payment to the attorney and (2) expressing the user fee as a fixed amount instead of a percentage. When considering one or both of these changes, certain policy and administrative implications would need to be addressed. According to the National Organization of Social Security Claimants’ Representatives (NOSSCR), an interest group for Social Security lawyers; individual attorneys; and SSA officials, SSA often has trouble making timely payments to attorneys. Processing attorney fees represents a small part of SSA’s overall activities—in 1999, we estimate that SSA processed about 6 billion beneficiary payments, while SSA reported that it processed fewer than 200,000 attorney payments. Additionally, SSA officials told us that they view such responsibilities as paying beneficiaries as more directly linked to their mission than paying attorneys. As a result, SSA has not routinely gathered and monitored performance data on the length of time it has taken to pay attorneys. However, recently tabulated data show that from January 1995 through May 2000, only 10 percent of attorney fees for fee agreements were paid within 30 days of the time the beneficiary was put on current-pay status. As figure 2 shows, there is a wide range of elapsed processing times for payments. To address timeliness concerns, a recent legislative proposal (H.R. 4633) would permit the user fee to be assessed against attorneys only if SSA pays attorneys within 30 days from the time of initial certification of benefits. Figure 2 shows that from 1995 to the present, SSA has been able to meet this timeframe in only 10 percent of the cases. However, certain issues related to this proposal should be clearly understood by both SSA and the attorneys. All parties involved must clearly understand at what point in the process the clock starts ticking, when it stops, and what activities are performed during this period. When considering the current legislative proposal or contemplating other options, concerned parties need to weigh the attorneys’ right to be paid in a timely manner against SSA’s need to ensure the accuracy of its payments. While SSA’s current process is inefficient and the agency can make some improvements, not all factors are within SSA’s control; for example, SSA must await fee petition information from attorneys and coordinate workers’ compensation offsets. The current legislative proposal states that the clock starts ticking with initial certification of benefits—also referred to as the point when the beneficiary is put in current-pay status. At this point, SSA might still be developing the case for final calculation of past-due benefits and might not have control over processing times. Attorneys need to realize that because the proposal starts the clock at initial certification, when additional work may still need to be done to develop the case, the total elapsed time from favorable decision to attorney fee payment might not actually decrease. Information on these issues needs to be clearly communicated, or the frustration and complaints with the process are likely to continue. In addition, having the clock start before SSA has complete control over the process could create perverse incentives that may actually delay payments to attorneys.
Because SSA does not have control over all the activities that occur following initial certification of benefits, it is conceivable that some attorneys might view this as an opportunity to delay providing needed information to SSA in hopes of avoiding the user fee. Aside from the delays that are outside its control, SSA is aware that there are steps it could take to make the process more efficient. For example, agency officials have said that instituting direct deposit of attorney fees would be more efficient: it could shorten the time it takes for the fee payment to reach the attorney and could eliminate delays that result when attorneys change their addresses without notifying SSA. SSA currently pays 65 percent of beneficiaries by means of direct deposit and wants to expand this approach to all its transactions. Possible improvements to SSA’s information systems may also help reduce processing times. For instance, enhancements to SSA’s information systems could eliminate much of the manual workload involved in processing and certifying attorney fees. As stated earlier, various information systems are currently used to process SSA’s attorney fee workload associated with DI cases. These systems capture data on various aspects of the disability claims process but are not linked to one another and, thus, require some manual intervention. As a result, without linked systems or a more streamlined process, it is difficult for SSA to capture the data required to measure the timeliness of the total range of activities involved in paying attorneys. To efficiently administer user fees that are based on the timeliness of fee payments to attorneys, SSA will need to develop new software to link these stand-alone information systems or develop a new system to process the entire attorney fee workload. SSA currently has plans for systems enhancements to improve the attorney fee process, which should help improve case processing time. According to SSA, these enhancements would automate the steps needed to recognize attorney fee agreement cases, compute and withhold the 6.3 percent user fee, pay the attorney fee, and immediately release the remainder of the past-due benefits to the beneficiary. If SSA were to make the proposed system enhancements, it may be advisable to revisit requirements for how quickly the agency could be expected to process an attorney fee. A number of issues surround the question of whether the user fee should be expressed as a fixed amount or as a percentage, and these are linked in large part to the question of what costs the user fee should cover. On one hand, expressing the user fee as a percentage of the attorney fee, as is currently the case, assumes that the costs SSA incurs in processing attorney fees grow in proportion to the fees. This could be the case, for example, if an ALJ spends extra time reviewing a fee petition in cases involving more activity and larger fees. On the other hand, expressing the user fee as a fixed amount assumes that the costs of processing attorney fees are relatively the same from case to case and, therefore, that a higher attorney fee does not translate into higher processing costs. To adequately weigh the relative merits of both options, we need to further study the cost estimate information SSA used to develop the 6.3 percent user fee, the cost data that SSA is currently capturing, and the percentage of DI versus SSI cases processed. An illustrative comparison of the two approaches follows.
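Both user fee questions can be stated precisely. The sketch below is illustrative only: the 30-day condition follows the H.R. 4633 proposal described earlier, while the $75 fixed charge and the function names are invented for contrast.

# Illustrative sketch of the two user fee design questions discussed
# above. The 30-day condition follows H.R. 4633 as described; the fixed
# $75 charge is an invented figure used only for contrast.
from datetime import date

def user_fee_assessed(certified: date, paid: date) -> bool:
    """H.R. 4633 condition: assess the user fee only if SSA pays the
    attorney within 30 days of initial certification of benefits."""
    return (paid - certified).days <= 30

def percentage_charge(approved_fee: float, rate: float = 0.063) -> float:
    """Current approach: the charge grows in proportion to the fee."""
    return approved_fee * rate

FIXED_CHARGE = 75.00  # hypothetical flat per-case processing charge

for fee in (1_000.00, 2_500.00, 4_000.00):
    print(f"fee ${fee:>8,.2f}: percentage ${percentage_charge(fee):6.2f}"
          f" vs fixed ${FIXED_CHARGE:6.2f}")

# Per the data above, only about 10 percent of fee agreement payments
# made from 1995 through mid-2000 would have met the 30-day window.
print(user_fee_assessed(date(2000, 3, 1), date(2000, 4, 15)))  # False: 45 days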
That analysis will be included in our final report, due to the Congress by the end of this year. Attorneys, NOSSCR, and advocates have discussed various changes related to attorney fees: issuing joint checks for past-due benefits to both the attorney and the beneficiary, raising the $4,000 limit on attorney fees allowable under the fee agreement process, and extending the statutory withholding of attorney fees to the SSI program. Each of these proposals involves trade-offs that should be considered before its implementation. Under the current process, when an individual receives a favorable DI decision, SSA makes an effort to issue the beneficiary’s past-due benefits as soon as possible and withholds the amount of the attorney fee. After the fee is processed, Treasury issues a check to the attorney. Individual attorneys have suggested changing this process from one in which two separate payments are made to one in which a single check for the total amount of the past-due benefits—made out jointly to the beneficiary and the attorney—is sent directly to the attorney. The attorney would deposit the check into an escrow account and pay the past-due benefits, minus his or her fee, to the beneficiary. Attorneys told us that joint checks would help expedite the attorney fee process because the beneficiary’s money and attorney fees would be linked, and SSA views paying beneficiaries as a priority. Such a change could have serious policy implications, however. For instance, SSA currently attempts to pay the beneficiary as soon as possible following a favorable decision. Issuing joint checks might delay payment to the beneficiary because the beneficiary would have to wait until after the attorney deposited the money into an escrow account to receive benefits. In addition, when SSA controls the payment, it is assured that no more than 25 percent is deducted from the past-due benefits. Sending joint checks to the attorney would reduce SSA’s ability to enforce attorney fee limits and could increase the risk that attorneys would shortchange beneficiaries. In turn, control over payment to the beneficiary would shift to the attorney, while accountability for the payment would remain with SSA. In addition, a number of administrative issues dealing with the implementation of joint checks would need to be addressed. First, SSA needs to know when the beneficiary receives his or her benefits. SSA is responsible for sending benefit statements, SSA-1099s, to beneficiaries because Social Security benefits are sometimes taxable. With joint checks, SSA might have difficulty tracking when beneficiaries received their benefits. If attorneys were responsible for paying the past-due benefits from their escrow accounts, SSA would need a system certifying when—that is, in which tax year—the beneficiary was paid. This reporting system would be needed to ensure the accuracy of the SSA-1099s. Another administrative consideration is that the current information system used for processing DI cases—MCS—would need to be modified so that joint payments could be issued, because this system is designed to effectuate payments only to the beneficiary or his or her representative payee. Another change being discussed is raising the $4,000 cap on attorney fees for the fee agreement process. As I explained earlier, under the fee agreement process, attorneys can receive 25 percent of the past-due benefits or $4,000, whichever is less.
By statute, the Commissioner of SSA has the authority to adjust the cap at his or her discretion. Debate on this issue centers on how legal representation for DI applicants might be affected. Attorneys we spoke with told us that higher fees would increase the attractiveness of DI claims. According to this argument, attractive fees could draw more attorneys to DI cases, which could increase the rate of representation for this population. Further, an increased rate of representation might result in more favorable decisions for DI applicants. The opposing argument is that representation is already readily available to DI applicants. According to an SSA official, the agency has not raised the cap because it determined that a higher cap was not needed to support representation. In either case, evaluating this issue is difficult in the absence of such data as historical and current representation rates and without knowing the proportion of applicants who could not secure representation and why. A final change being discussed would be to expand withholding to the SSI program. SSA currently calculates the amount of attorney fees due in SSI cases but does not withhold the fee from beneficiaries’ past-due benefits. Current law explicitly differentiates between DI and SSI regarding attorney fees, stating that withholding and paying attorney fees is permissible only for DI cases. Many believe that extending withholding to SSI is appropriate because it would increase representation for SSI applicants and alleviate a perceived equity imbalance for attorneys who represent both DI and SSI applicants. Because there is no guarantee that attorneys will receive fees due to them for SSI cases, some attorneys told us that they are reluctant to accept SSI cases. The attorneys maintained that expanding withholding to SSI would increase the attractiveness of the cases, and representation would increase. In fact, 1999 data show that at the hearing level, applicants for DI and combined DI/SSI benefits were more likely to be represented by an attorney than those applying for SSI only. Additionally, according to an official from an association of ALJs, expanding withholding to SSI would be beneficial to applicants because cases with representation are better presented and have a better chance of receiving a favorable decision than nonrepresented cases. Proponents of extending withholding to SSI also told us that the current situation is unfair to attorneys representing SSI applicants. According to this view, it is inequitable for attorneys to be guaranteed payment for DI cases but not for SSI cases. As with DI cases, the SSI recipient has an obligation to pay for his or her legal services; however, in DI cases, SSA ensures that this happens, while for SSI cases, the attorney must obtain payment directly from the beneficiary. The opposing view of extending withholding to SSI is based on the relative economic status of DI beneficiaries and SSI recipients. SSI recipients tend to be poorer than DI beneficiaries, and some advocates have expressed concern that taking money from a recipient’s past-due benefits to pay attorneys would be detrimental to the recipient’s economic well-being. SSI recipients often have many financial obligations, such as overdue rent and utility bills, that need to be paid. Advocates maintain that deducting the attorney fee from the past-due benefits might make it impossible for recipients to pay these bills.
However, if an attorney successfully appeals a case for an SSI recipient, the recipient should be in a better position financially. From an administrative standpoint, if SSA were required to withhold attorney fees for SSI cases, it would need to develop new information systems or significantly modify existing systems to process this new workload. However, as with any systems effort, SSA’s ability to carry out this task will depend on its available resources and the priority that it gives to this initiative. Mr. Chairman, this concludes my prepared statement. At this time, I will be happy to answer any questions you or other Members of the Subcommittee may have. For information regarding this testimony, please contact Barbara Bovbjerg at (202) 512-7215. Individuals who made key contributions to this testimony include Yvette Banks, Kelsey Bright, Kay Brown, Abbey Frank, Valerie Freeman, Valerie Melvin, Sheila Nicholson, Daniel Schwimer, and Debra Sebastian.
GAO discussed issues involving the Social Security Administration's (SSA) process for paying attorneys representing applicants for disability benefits, focusing on three areas of the attorney payment process: (1) the process itself, including the costs of processing the payments; (2) possible changes to the way the user fee is charged; and (3) changes being considered for the attorney fee payment process overall. GAO noted that: (1) while SSA has been paying attorney fees from beneficiaries' past-due benefits for over 30 years, the payment process remains inefficient, and little historical data are available to help GAO analyze proposed changes; (2) under the current procedures, the inefficiencies in processing fee payments to attorneys result from using a number of different staff in different units and various information systems that are not linked, and are not designed to calculate and process all aspects of the attorney fee payment, thus necessitating manual calculations; (3) the Ticket to Work Act includes a provision that requires SSA to charge an assessment to recover the costs of this service; (4) GAO has only begun to analyze the estimate that was used as a basis for the user fee, and SSA does not know the actual cost it incurs in processing attorney fees; (5) however, the agency is developing a methodology to better capture these costs; (6) SSA has trouble making timely payments to attorneys, and some have questioned the appropriateness of charging a user fee for a service that takes so long; (7) a recent legislative proposal calls for eliminating the user fee if SSA does not pay the attorney within 30 days; (8) in many cases, it will be difficult for SSA to meet these timeframes; (9) attorneys need to realize that, while it is possible for SSA to improve the efficiency of the process it uses to pay them, some factors that delay their payments are outside SSA's control and are unlikely to change at this time; (10) three possible changes to the attorney fee payment process include whether: (a) joint checks for past-due benefits should be issued to the beneficiary and the attorney; (b) the dollar limit on certain attorney fees should be raised; and (c) SSA's attorney fee payment process should be expanded to the Supplemental Security Income program; (11) these changes would have both policy and administrative implications that need to be considered; (12) some of the changes could increase attorney representation for disability applicants, according to attorneys GAO spoke with; (13) however, not everyone agrees with this premise; (14) moreover, there are some drawbacks to these changes; and (15) SSA indicated it may need to make significant modifications to its information systems to issue joint checks or pay attorneys who represent SSI recipients.
NRC is an independent agency of over 3,200 employees established by the Energy Reorganization Act of 1974 to regulate civilian—that is, commercial, industrial, academic, and medical—use of nuclear materials. NRC is headed by a five-member Commission. The President appoints the Commission members, who are confirmed by the Senate, and designates one of them to serve as Chairman and official spokesperson. The Commission as a whole formulates policies and regulations governing nuclear reactor and materials safety, issues orders to licensees, and adjudicates legal matters brought before it. NRC and the licensees of nuclear power plants share the responsibility for ensuring that commercial nuclear power reactors are operated safely. NRC is responsible for issuing regulations, licensing and inspecting plants, and requiring action, as necessary, to protect public health and safety. Plant licensees have the primary responsibility for safely operating their plants in accordance with their licenses and NRC regulations. NRC has the authority to take actions, up to and including shutting down a plant, if licensing conditions are not being met and the plant poses an undue risk to public health and safety. Nuclear power plants have many physical structures, systems, and components, and licensees have numerous activities under way, 24 hours a day, to ensure that plants operate safely. NRC relies on, among other things, its on-site resident inspectors to assess plant conditions and the licensees’ quality assurance programs, such as those required for maintenance and problem identification and resolution. With its current resources, NRC can inspect only a relatively small sample of the numerous activities going on during complex plant operations. According to NRC, its focus on the more safety-significant activities is made possible by the fact that safety performance at plants has improved as a result of more than 25 years of operating experience. Commercial nuclear power plants are designed according to a “defense in depth” philosophy revolving around redundant, diverse, and reliable safety systems. For example, two or more key components are put in place so that if one fails, there is another to back it up. Plants have numerous built-in sensors to monitor important indicators such as water temperature and pressure. Plants also have physical barriers to contain the radiation and provide emergency protection. For example, the nuclear fuel is contained in ceramic pellets that lock in the radioactive byproducts; the pellets are sealed inside rods made of special material designed to contain fission products; and the rods are placed in reactors housed in containment buildings made of several feet of concrete and steel. Furthermore, the nuclear power industry formed an organization, the Institute of Nuclear Power Operations (INPO), with the mission to “promote the highest levels of safety and reliability—to promote excellence—in the operation of nuclear electric generating plants.” INPO provides a system of personnel training and qualification for all key positions at nuclear power plants, and workers undergo both periodic training and assessment. INPO also conducts periodic evaluations of operating nuclear plants, focusing on plant safety and reliability, in the areas of operations, maintenance, engineering, radiological protection, chemistry, and training.
Licensees make these evaluations available to NRC for review, and the NRC staff uses the evaluations as a means to determine whether its oversight process has missed any performance issues. NRC uses various tools to oversee the safe operation of nuclear power plants, generally consisting of physical plant inspections of equipment and records and objective indicators of plant performance. These tools are risk-informed in that they are focused on the issues considered most important to plant safety. Based on the information it collects through these efforts, NRC takes a graded approach to its oversight, increasing the level of regulatory attention to plants based on the severity of identified performance issues. NRC bases its regulatory oversight process on the principle and requirement that plant licensees routinely identify and address performance issues without NRC’s direct involvement. An important aspect of NRC’s inspections is ensuring the effectiveness of licensee quality assurance programs. NRC assesses overall plant performance and communicates these results to licensees on a semiannual basis. During fiscal year 2005, NRC inspectors spent a total of 411,490 hours on plant inspection activities (an average of 77 hours per week at each plant). The majority of these inspection efforts were spent on baseline inspections, which all plants receive on an almost continuous basis. Baseline inspections, which are mostly conducted by the two to three NRC inspectors located at each nuclear power plant site, evaluate the safety performance of plant operations and review the plant’s effectiveness at identifying and resolving its safety problems. There are more than 30 baseline inspection procedures, conducted at varying intervals ranging from quarterly to triennially and involving both physical observation of plant activities and reviews of plant reports and data. The inspection procedures are risk-informed to focus inspectors’ efforts on the most important areas of plant safety in four ways: (1) areas of inspection are included in the set of baseline procedures based in part on their risk importance, (2) risk information is used to help determine the frequency and scope of inspections, (3) the selection of activities to inspect within each procedure is informed by plant-specific risk information, and (4) the inspectors are trained in the use of risk information in planning their inspections. For inspection findings found to be more than minor, NRC uses its significance determination process (SDP) to assign each finding one of four colors to reflect its risk significance. Green findings equate to very low risk significance, while white, yellow, and red represent increasing levels of risk, respectively. Throughout its application of the SDP, NRC incorporates information from the licensee, and the licensee has the opportunity to formally appeal the final determination that is made. In addition to assigning each finding a color based on its risk significance, all findings are evaluated to determine whether certain aspects of plant performance, referred to as cross-cutting issues, were a contributing cause of the performance problem. The cross-cutting issues are (1) problem identification and resolution, (2) human performance, and (3) safety consciousness in the work environment.
To illustrate, in analyzing the failure of a valve to operate properly, NRC inspectors determined that the plant licensee had not followed the correct procedures when performing maintenance on the valve, and thus NRC concluded that the finding was associated with the human performance cross-cutting area. If, during the 12-month assessment period, there are multiple findings with documented cross-cutting aspects, more than three findings share the same causal theme, and NRC has a concern about the licensee’s progress in addressing these areas, NRC may determine that the licensee has a “substantive” cross-cutting issue. Opening a substantive cross-cutting issue serves as a way for NRC to notify the plant licensee that problems have been identified in one of the areas and that NRC will focus its inspection efforts on the cross-cutting area of concern. When NRC becomes aware of one or more performance problems at a plant that are assigned a risk color greater-than-green (white, yellow, or red), it conducts supplemental inspections. Supplemental inspections, which are performed by regional staff, expand the scope beyond baseline inspection procedures and are designed to focus on diagnosing the cause of the specific performance deficiency. NRC increases the scope of its supplemental inspection procedures based on the number of greater-than-green findings identified, the area where the performance problem was identified, and the risk color assigned. For example, if one white finding is identified, NRC conducts a follow-up inspection directed at assessing the licensee’s corrective actions to ensure they were sufficient both to correct the specific problem identified and to identify and address the root and contributing causes to prevent recurrence of a similar problem. If multiple yellow findings or a single red finding is identified, NRC conducts a much more comprehensive inspection, which includes obtaining information to determine whether continued operation of the plant is acceptable and whether additional regulatory actions are necessary to address declining plant performance. This type of more extensive inspection is usually conducted by a multidisciplinary team of NRC inspectors and may take place over a period of several months. NRC inspectors assess the adequacy of the licensee’s programs and processes, such as those for identifying, evaluating, and correcting performance issues, and the overall root and contributing causes of identified performance deficiencies. NRC conducts special inspections when specific events occur at plants that are of particular interest to NRC because of their potential safety significance. Special inspections are conducted to determine the cause of the event and assess the licensee’s response. For special inspections, a team of experts is formed and an inspection charter issued that describes the scope of the inspection efforts. At one plant we reviewed, for example, a special inspection was conducted to investigate the circumstances surrounding the discovery of leakage from a spent fuel storage pool. Among the objectives of this inspection were to assess the adequacy of the plant licensee’s determination of the source and cause of the leak, the risk significance of the leakage, and the proposed strategies to mitigate leakage that had already occurred and repair the problem to prevent further leakage.
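As described at the start of this discussion, the test for opening a substantive cross-cutting issue is largely a matter of counting findings and their causal themes, combined with staff judgment. The sketch below is one reading of that test with invented data structures; it is not NRC’s actual decision logic.

# One reading of the substantive cross-cutting issue test described
# above; data structures are invented, and NRC's actual process also
# involves staff judgment about the licensee's progress.
from collections import Counter

def substantive_issue(themes: list[str], staff_concerned: bool) -> bool:
    """themes: the documented cross-cutting causal theme of each finding
    from the 12-month assessment period (e.g., 'human performance')."""
    if len(themes) < 2:          # need multiple findings with aspects
        return False
    _, top_count = Counter(themes).most_common(1)[0]
    # More than three findings share a causal theme, and NRC staff have
    # concerns about the licensee's progress in addressing the area.
    return top_count > 3 and staff_concerned

themes = ["human performance"] * 4 + ["problem identification"]
print(substantive_issue(themes, staff_concerned=True))   # True
print(substantive_issue(themes, staff_concerned=False))  # False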
In addition to its various inspections, NRC also collects plant performance information through a performance indicator program, which it maintains in cooperation with the nuclear power industry. On a quarterly basis, each plant submits data for 15 separate performance indicators. These objective numeric measures are designed to measure plant performance related to safety in various aspects of plant operations. For example, one indicator measures the number of unplanned reactor shutdowns during the previous four quarters, while another measures the capability of alert and notification system sirens, which notify residents living near the plant in the event of an accident. Working with the nuclear power industry, NRC established specific criteria for acceptable performance, setting thresholds for each indicator and assigning colors to reflect increasing risk according to established safety margins. Green indicators reflect performance within the acceptable range, while white, yellow, and red represent successively poorer performance. NRC inspectors review and verify the data submitted for each performance indicator annually through the baseline inspection process. If questions arise about how to calculate a particular indicator or what the correct value should be, a formal feedback process is in place to resolve the issue. When performance indicator thresholds are exceeded, NRC responds in a graded fashion by performing supplemental inspections that range in scope depending on the significance of the performance issue. Under the ROP, NRC places each plant into a performance category on the agency’s action matrix, which corresponds to increasing levels of oversight based on the number and risk significance of inspection findings and performance indicators. The action matrix is NRC’s formal method of determining what additional oversight procedures—mostly supplemental inspections—are required. Greater-than-green inspection findings are included in the action matrix for a minimum of four quarters to allow sufficient time for additional findings to accumulate that may indicate more pervasive performance problems requiring additional NRC oversight. If a licensee fails to correct the performance problems within the initial four quarters, the finding may be held open and considered for additional oversight for more than the minimum four quarters. At the end of each 6-month period, NRC issues an assessment letter to each plant licensee. This letter describes what level of oversight the plant will receive according to its placement in the action matrix performance categories, what actions NRC expects the plant licensee to take as a result of the performance issues identified, and any documented substantive cross-cutting issues. NRC also holds an annual public meeting at or near each plant site to review performance and address questions about the plant’s performance from members of the public and other interested stakeholders. Most inspection reports, assessment letters, and other materials related to NRC’s oversight processes are made publicly available through an NRC website devoted to the ROP. The website also includes plant-specific quarterly summaries of inspection findings of green or greater significance and all the performance indicators. The ROP has identified numerous performance deficiencies as inspection findings at nuclear power plants since it was first implemented, but most of these were considered to be of very low risk to safe plant operations.
Similarly, there have been very few instances in which performance indicator data exceeded acceptable standards. As a result, few plants have been subjected to high levels of oversight. Of more than 4,000 inspection findings identified between 2001 and 2005, 97 percent were green. While green findings are considered to be of “very low” safety significance, they represent a performance deficiency on the part of the plant licensee and thus are important to correct. Green findings involve such things as a worker failing to wear the proper radiation detector or a licensee failing to properly evaluate and approve the storage of flammable materials in the vicinity of safety-related equipment. NRC does not follow up on the corrective action taken for every green finding identified; rather, it relies on the licensee to address green findings and track their resolution through the plant’s corrective action program. NRC does, however, periodically follow up on some of the actions taken by the licensee to address green findings through an inspection specifically designed to evaluate the effectiveness of the licensee’s corrective action program. NRC officials stated that green findings provide useful information on plant performance, and NRC inspectors use the findings to identify performance trends in certain areas and to help inform their selection of areas to focus on during future inspections. In contrast to the many green findings, NRC has identified 12 findings of the highest risk significance (7 yellow and 5 red), accounting for less than 1 percent of the findings since 2001. For example, one plant was issued a red finding—the highest risk significance—after a steam generator tube failed, increasing the risk of a release of radioactive material. Similar to the inspection findings, most performance indicator reports have shown the indicators to be within acceptable levels of performance. Only 156 of the more than 30,000 indicator reports from 2001 to 2005, less than 1 percent, exceeded the acceptable performance threshold. Four of the 15 performance indicators have always been reported to be within acceptable performance levels. In addition, 46 plants have never had a performance indicator fall outside of the acceptable level, and only three plants have reported a yellow indicator for one performance measure; no red indicators have ever been reported. On the basis of its inspection findings and performance indicators, NRC has subjected more than three-quarters of the 103 operating plants to at least some level of increased oversight (beyond the baseline inspections) for varying amounts of time. Most of these plants received the lowest level of increased oversight, consisting of a supplemental inspection to follow up on one or two white inspection findings or performance indicators. Five plants have received the highest level of oversight under which NRC allows plants to continue operating, due to multiple white or yellow findings or a red finding. One plant received this level of oversight because NRC determined that the licensee had failed to address the common causes of two white findings, which NRC held open for more than four quarters. One of these findings involved the recurrent failure of a service water pump because the licensee failed to take adequate corrective action after the first failure.
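The performance indicator thresholds discussed earlier amount to mapping each reported value to a color band. The sketch below illustrates the idea for a “higher is worse” indicator; the numeric thresholds are invented, since actual thresholds are indicator-specific and were set with the industry.

# Illustration of mapping a reported indicator value to a color band,
# as described earlier. Thresholds here are invented; real thresholds
# are indicator-specific and set jointly with the industry.

BANDS = [(3.0, "green"), (6.0, "white"), (25.0, "yellow")]  # hypothetical

def indicator_color(value: float) -> str:
    """Color band for a 'higher is worse' indicator, e.g. the number of
    unplanned reactor shutdowns over the previous four quarters."""
    for upper_bound, color in BANDS:
        if value <= upper_bound:
            return color
    return "red"

print(indicator_color(2))    # green: acceptable performance
print(indicator_color(10))   # yellow: triggers supplemental inspection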
NRC inspectors at the plants we reviewed indicated that, when plant performance declines, it is often the result of ineffective corrective action programs, problems related to human performance, or complacent management; these factors often result in deficiencies in one or more of the cross-cutting areas. In assessing the ROP data, we found that all plants subjected to NRC’s highest level of oversight also had a substantive cross-cutting issue open either before or during the period of increased oversight inspections. Overall, NRC’s oversight process shows mostly consistent results from 2001 to 2005. For example, the total number of green findings at all plants ranged from 657 to 889 per year, and the total number of other findings ranged from 10 to 30 per year, with no strong trend (see fig. 1). Only in the area of cross-cutting issues—that is, inspection findings with which one or more cross-cutting issues were associated—is an increasing trend evident (see fig. 2). According to NRC, this increase is due in part to the development of guidance on the identification and documentation of cross-cutting issues and the agency’s increased emphasis on them in more recent years. According to NRC officials, the results of its oversight process at an industry or summary level serve as an indicator of industry performance, which to date indicates good safety performance. On an annual basis, NRC analyzes the overall results of its inspection and performance indicator programs and compares them with industry-level performance metrics to ensure all metrics are consistent, taking action if adverse trends are identified. While NRC communicates the results of its oversight process on a plant-specific basis to plant managers, members of the public, and other government agencies through annual public meetings held at or near each site and a public website, it does not regularly publish a summary of the overall results of its oversight process, such as the total number and types of inspection findings and performance indicators falling outside of acceptable performance categories. NRC has taken a proactive approach to improving its reactor oversight process. It has several mechanisms in place to incorporate feedback from both external and internal stakeholders and is currently working on improvements in key areas of the process, including better focusing inspections on areas most important to safety, improving its timeliness in determining the risk significance of its inspection findings, and modifying the way that it measures some performance indicators. NRC is also working to address what we believe is a significant shortcoming in its oversight process by improving its ability to assess plants’ safety culture, allowing it to better identify and address early indications of deteriorating safety at plants before performance problems develop. According to NRC officials, the ROP was implemented with the understanding that it would be an evolving process and that improvements would be made as lessons learned were identified. Each fall, NRC solicits feedback from external stakeholders, including industry organizations, public interest groups, and state and local officials, through a survey published in the Federal Register. NRC also conducts an internal survey of its site, regional, and headquarters program and management staff every other year to obtain their opinions on the effectiveness of the ROP.
Additionally, NRC has in place a formal feedback mechanism whereby NRC staff can submit recommendations for improving various oversight components, and NRC staff meet with industry officials on a monthly basis—in addition to various meetings, workshops, and conferences—to discuss oversight implementation issues and concerns. NRC staff also incorporate direction provided by the NRC Commissioners and recommendations from independent evaluations, such as those from GAO and the NRC Inspector General. The results of these efforts are pulled together in an annual self-assessment report, which outlines the overall results of NRC’s outreach and the changes the agency intends to make in the year ahead. According to NRC officials, the changes made to the ROP since its implementation in 2000—including those made in response to the Davis-Besse incident—have generally been refinements to the existing process rather than significant changes to how NRC conducts its oversight. In the case of Davis-Besse, NRC formed a task force to review the agency’s regulatory processes. The task force’s report, issued in September 2002, contained more than 50 recommendations, many associated with the ROP. Among the more significant ROP-related recommendations were those to make the performance indicator that monitors unidentified leakage more accurate, develop specific guidance to inspect boric acid control programs and vessel head penetration nozzles, modify the inspection program to provide for better follow-up of longstanding issues, and enhance the guidance for managing plants that are in an extended shutdown condition as a result of significant performance problems. NRC program officials told us that the task force’s most significant recommendations were in areas outside of the ROP, such as improving the agency’s operating experience program. According to NRC, it has implemented almost all of the task force’s recommendations. Other modifications that NRC has recently made or is in the process of making include the following: NRC recently revised seven of its baseline inspection procedures to better focus the level and scope of its inspection efforts on those areas most important to safety. These revisions resulted from a detailed analysis in 2005 of its more than 30 baseline inspection procedures. The effort involved analyzing the number of findings resulting from each of its inspection procedures and the time spent directly observing plant activities or reviewing licensee paperwork, among other things. NRC also has efforts under way to improve its significance determination process (SDP). An audit by the NRC Inspector General, a review by a special task group formed by NRC, and feedback from other stakeholders have pointed to several significant weaknesses in the SDP. For example, internal and external stakeholders raised concerns about the amount of time, level of effort, and knowledge and resources required to determine the risk significance of some findings. Industry officials commented that because most inspection findings are green, one white finding at a plant can place it in the “bottom quartile” of plants from a performance perspective. Therefore, industry officials explained, licensees try to avoid this placement and will expend a great deal of effort and resources to provide additional data to NRC to ensure the risk level of a finding is appropriately characterized.
This can add significant time to the process because different technical tools may be used whose results must then be incorporated with NRC’s tools and processes. The delay in assigning a color to a finding while the new information is being considered could also affect a plant’s placement on NRC’s action matrix, essentially delaying the increased oversight called for if the finding is determined to be greater-than-green. NRC developed an SDP Improvement Plan in order to address these and other concerns and track its progress in implementing key changes. For example, NRC introduced a new process aimed at improving timeliness by engaging decision-makers earlier in the process to more quickly identify the scope of the evaluation, the resources needed, and the schedule to complete the evaluation. NRC is also taking actions to improve its performance indicators. These actions are partly to address concerns that the indicators have not contributed to the early identification of poorly performing plants to the degree originally envisioned, as they are almost always within acceptable performance levels (green). There have been several cases where plants reported an acceptable performance indicator and performance problems were subsequently identified. For example, NRC inspectors at one plant noted that while performance indicator data related to its alert and notification system in place for emergency preparedness had always been reported green, the system had not always been verified to be functioning properly. On the other hand, industry officials believe that the high percentage of indicators that are green is indicative of plants’ good performance. Several plant managers told us that they closely monitor and manage to the acceptable performance thresholds established for each indicator and will often take action to address performance issues well before an indicator crosses the acceptable performance threshold. Because NRC inspectors verify indicator data once a year, a potential disagreement over the data might not surface for up to a year after the data are reported, and it may take even longer to resolve the disagreement with the licensee. Similar to delays with the SDP, a delay in assigning a color while the disagreement is resolved could affect a plant’s placement on NRC’s action matrix and delay the increased oversight called for if the indicator is determined to be greater-than-green. NRC plans to work with the industry to review selected indicator definitions to make their interpretation clearer and reduce the number of discrepancies. To date, NRC has focused significant effort on developing a key indicator to address known problems with the performance indicators measuring the unavailability of safety systems. NRC is also in the process of changing the definitions of several other indicators, in addition to considering the feasibility of new indicators. I would now like to discuss what we believe is one of NRC’s most important efforts to improve its oversight process: increasing its ability to identify and address deteriorating safety culture at plants. NRC and others have long recognized that safety culture and the attributes that make it up, such as attention to detail, adherence to procedures, and effective corrective and preventive action, have a significant impact on a plant’s performance.
Despite this recognition and several external groups’ recommendations to better incorporate safety culture aspects into its oversight process, NRC did not include specific measures to explicitly address plant safety culture when it developed the ROP in 2000. The 2002 Davis-Besse reactor vessel head incident highlighted this as a significant weakness in the ROP. In investigating that event, we and others found that NRC did not have an effective means to identify and address early indications of deteriorating safety at plants before performance problems develop. Largely as a result of the event, in August 2004, the NRC Commission directed the NRC staff to enhance the ROP by more fully addressing safety culture. In response to the Commission’s directive, the NRC staff formed a safety culture working group in early 2005. The working group incorporated the input of its stakeholders through a series of public meetings held in late 2005 and early 2006. In February 2006, NRC issued its proposed approach to better incorporate safety culture into the ROP. NRC officials expect to fully implement all changes effective in July 2006. NRC’s proposed safety culture changes largely consist of two main approaches: first, clarifying the identification and treatment of cross-cutting issues in its inspection processes and, second, developing a structured way for NRC to determine the need for a safety culture evaluation of plants. NRC has developed new definitions for each of its cross-cutting issues to more fully address safety culture aspects and additional guidance on their treatment once they are identified. For example, the problem identification and resolution cross-cutting area now comprises several components—corrective action program, self-assessments and independent assessments, and operating experience. NRC inspectors are to assess every inspection finding to determine whether it is associated with one or more of the components that make up each of the cross-cutting areas. Inspectors then determine, on a semiannual basis, whether a substantive cross-cutting issue exists on the basis of the number and areas of cross-cutting components identified. If the same substantive cross-cutting issue is identified in three consecutive assessment periods, NRC may request that the licensee perform an assessment of its safety culture. The intent is to provide an opportunity to diagnose a potentially declining safety culture before significant safety performance problems occur. Under its approach, NRC would expect the licensees of plants with more than one white finding or one yellow finding to evaluate whether the performance issues were in any way caused by any safety culture components, and NRC might request that the licensee complete an independent assessment of its safety culture if the licensee’s own evaluation did not identify an important safety culture component. For plants where more significant or multiple findings have been identified, NRC would not only evaluate the adequacy of the licensee’s independent safety culture assessment but might also conduct its own independent assessment of the licensee’s safety culture. Some of NRC’s proposed actions regarding safety culture have been controversial, and not all stakeholders completely agree with the agency’s approach. For example, the nuclear power industry has expressed concern that the changes could introduce undue subjectivity into NRC’s oversight, given the difficulty of measuring these often intangible and complex concepts.
Several of the nuclear power plant managers at the sites we reviewed said that it is not always clear why a cross-cutting issue was associated with a finding, or what it will take to close a substantive cross-cutting issue once one has been identified. Some industry officials worry that this initiative will further increase the number of findings that have cross-cutting elements associated with them and that, if all findings carry such elements, the designations will lose their value. Industry officials also warn that if the initiative is not implemented carefully, it could divert resources away from other important safety issues. Other external stakeholders, on the other hand, suggest that this effort is an important step in improving NRC's ability to identify safety issues at plants before they result in performance problems. Importantly, NRC will have additional tools in place to use when it identifies potential safety culture concerns. NRC officials view this effort as the beginning of an incremental approach and acknowledge that continual monitoring, improvements, and oversight will be needed to better allow inspectors to detect deteriorating safety conditions at plants before events occur. NRC plans to evaluate stakeholder feedback and make changes based on lessons learned from its initial implementation of these changes as part of its annual self-assessment process for calendar year 2007.

For further information about this statement for the record, please contact me at (202) 512-3841 (or at [email protected]). Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Raymond H. Smith, Jr. (Assistant Director), Alyssa M. Hundrup, Alison O'Neill, and Dave Stikkers made key contributions to this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Nuclear Regulatory Commission (NRC) has the responsibility to provide oversight to ensure that the nation's 103 commercial nuclear power plants are operated safely. While the safety of these plants has always been important, since a radioactive release could harm the public and the environment, NRC's oversight has become even more critical as the Congress and the nation consider the potential resurgence of nuclear power in helping to meet the nation's growing energy needs. Prior to 2000, NRC was criticized for having a safety oversight process that was not always focused on the most important safety issues and, in some cases, was overly subjective. To address these and other concerns, NRC implemented a new oversight process--the Reactor Oversight Process (ROP). NRC continues to modify the ROP to incorporate feedback from stakeholders and in response to other external events. This statement summarizes information on (1) how NRC oversees nuclear power plants, (2) the results of the ROP over the past several years, and (3) the aspects of the ROP that need improvement and the status of NRC's efforts to improve them. This statement discusses preliminary results of GAO's work. GAO will report in full at a later date. GAO analyzed program-wide information, inspection results covering 5 years of ROP operations, and detailed findings from a sample of 11 plants.

NRC uses various tools to oversee the safe operation of nuclear power plants, including physical plant inspections and quantitative measures, or indicators, of plant performance. To apply these tools, NRC uses a risk-informed and graded approach--that is, one that considers safety significance in deciding on the equipment and operating procedures to be inspected and applies increasing levels of regulatory attention to plants based on the severity of identified performance problems. The tools include three types of inspections--baseline, supplemental, and special. All plants receive baseline inspections of plant operations, conducted almost continuously by NRC inspectors. When NRC becomes aware of a performance problem at a plant, it conducts supplemental inspections, which expand the scope of baseline inspections. NRC conducts special inspections to investigate specific safety incidents or events that are of particular interest to NRC because of their potential significance to safety. The plants also self-report on their safety performance using performance indicators for plant operations related to safety, such as the number of unplanned reactor shutdowns. Since 2001, NRC's ROP has resulted in more than 4,000 inspection findings concerning nuclear power plant licensees' failure to comply with regulations or other safe operating procedures. About 97 percent of these findings were for actions or failures NRC considered important to correct but of low significance to the overall safe operation of the plants. In contrast, 12 of the inspection findings, or less than 1 percent, were of the highest levels of significance to safety. On the basis of its findings and the performance indicators, NRC has subjected more than three-quarters of the 103 operating plants to oversight beyond the baseline inspections for varying amounts of time. NRC has improved several key areas of the ROP, largely in response to independent reviews and feedback from stakeholders.
These improvements include better focusing its inspections on the areas most important to safety, reducing the time needed to determine the risk significance of inspection findings, and modifying the way that some performance indicators are measured. NRC also recently undertook a major initiative to improve its ability to address plants' safety culture--that is, the organizational characteristics that ensure that issues affecting nuclear plant safety receive the attention their significance warrants. GAO and others have found this to be a significant shortcoming in the ROP. Although some industry officials have expressed concern that the changes could introduce undue subjectivity into NRC's oversight, given the difficulty of measuring these often intangible and complex concepts, other stakeholders believe the approach will give NRC better tools to address safety culture issues at plants. NRC officials acknowledge that this effort is only a step in an incremental approach and that continual monitoring, improvements, and oversight will be needed to fully detect deteriorating safety conditions before an event occurs.
Section 7122 of the Internal Revenue Code authorizes the Secretary of the Treasury to compromise tax delinquencies. The purpose of the OIC Program is to (1) collect what can be fairly and reasonably collected from taxpayers who cannot fully pay their delinquent tax liability, (2) collect the tax in a timely and cost-effective manner, and (3) provide taxpayers with a fresh start toward complying with all future tax filing and payment requirements. Generally, IRS views the OIC Program as a last resort after taxpayers have explored all other available voluntary payment options, such as installment agreements. IRS resolves less than 1 percent of all balance due accounts through the OIC Program.

In recent years, the OIC Program underwent numerous program changes intended to reduce the number of inappropriate offers submitted by taxpayers and improve its operations. The changes include the following. In 2001, IRS established the centralized OIC (COIC) processing centers in Brookhaven, New York, and Memphis, Tennessee, to reduce inventory and processing times and reduce costs. Process examiners, lower-grade staff at the COICs, perform the initial processing of new offer applications, which includes determining whether taxpayers' applications meet IRS's processability criteria. Offer examiners, higher-grade staff at these COICs, process less complex offers to completion by reviewing taxpayers' financial information and making decisions about whether to accept the offers. COICs primarily examine offers involving wage and investment income. Based on a pilot test, IRS plans to have COIC staff work some offers from taxpayers with self-employment income starting in the summer of 2006. More complex offers are sent to IRS field offices around the country, where offer specialists, who are at a higher grade than offer examiners, work the offers to completion. These offers take longer to investigate and may require face-to-face meetings with the taxpayers. In 2003, IRS implemented an offer application fee requirement. Taxpayers submitting offer applications must include a $150 fee unless they qualify for a fee waiver. In 2004, IRS revised the OIC application form to make it more user-friendly to taxpayers. In that same year, IRS management put more emphasis on communicating with taxpayers while processing offers. In addition to these program changes, the Restructuring Act also mandated a new basis for accepting offers: effective tax administration (ETA).

Three Types of Compromise

According to IRS regulations and guidance, compromises can be granted for one of the following three reasons:

Doubt as to liability (DATL)—Doubt exists that the assessed tax liability is correct.

Doubt as to collectibility (DATC)—Doubt exists that the taxpayer could ever pay the full amount of tax owed.

Effective tax administration (ETA)—No doubt exists that the taxpayer can fully pay the taxes owed, but exceptional circumstances nonetheless lead IRS to compromise.

IRS has two categories of ETA offers, hardship and non-hardship. According to IRS's regulations, hardship ETA offers are those that IRS grants because collecting the full liability would create economic hardship for the taxpayer, while non-hardship ETA offers are granted on a basis of equity and public policy. (How economic hardship qualifies a taxpayer for an ETA offer will be addressed later in the report.) According to IRS, equity and public policy considerations may be used to accept an offer when doing so would not adversely affect voluntary compliance for taxpayers in general.
While an offer is being reviewed, the statute of limitations for collection and collection actions are suspended. The statute of limitations for collection generally restricts the time IRS has to collect delinquent taxes to 10 years from the date of assessment. If IRS rejects an offer, the suspensions continue through the 30-day period in which the taxpayer may decide whether to appeal the rejection decision. If a taxpayer appeals, the suspensions continue through the end of the appeal process.

As illustrated in figure 1, the offer process starts when a taxpayer submits an offer application. The application package, Form 656, consists of over 50 pages that include detailed instructions on determining eligibility for filing an offer and a worksheet for calculating the offer amount for individual and business taxpayers. The offer must be supported by a current statement of the taxpayer's financial condition, including data on assets, liabilities, and monthly income and expenses. IRS typically receives and begins the processing of offers in one of two COICs. The first step is screening out offers based on DATL. DATL offers involving trust fund recovery penalties and personal liability for excise taxes are processed by the OIC Program, and all others are referred to IRS examination staff. IRS then screens the remaining offers for processability, using five criteria:

1. current version of the OIC application form used,
2. $150 application fee included,
3. all required federal tax returns filed,
4. employment taxes current, and
5. taxpayer not in a bankruptcy proceeding.

Generally, if any of the five requirements are not met, the application is returned to the taxpayer as "not processable." According to IRS officials, since fiscal year 2003, the requirement to use the current application form has not been enforced, although it remains part of IRS's processability criteria. Program officials said that they do not want to return offer applications to taxpayers solely because the most current form was not used. Next, IRS screens out taxpayers who, based on their self-reported financial data, can fully pay their tax debts. The financial data include income, assets, and living expenses. If, after subtracting the taxpayers' self-reported living expenses from their income and assets, IRS determines that taxpayers can fully pay their tax debt and no exceptional circumstances exist, the offers are rejected without further processing. IRS then sorts offers by complexity. Complex offers, such as those that are business related or those from individual taxpayers required to file Schedule C (Profit or Loss from Business), are generally sent to field offices. The less complex offers remain in COIC for processing. Next, IRS reviews each offer to determine whether the taxpayer provided enough financial information for a decision to be made about whether to accept the offer. If not, IRS requests more information from the taxpayer. If the taxpayer does not provide the information, the offer is returned to the taxpayer and the case is closed. A returned offer has not been rejected. When IRS has sufficient financial information to make a decision, it first determines whether an offer can be accepted on the basis of DATC. If not, IRS considers the offer under ETA rules. At any point during the process, taxpayers may withdraw their applications.
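The five-part processability screen described above is mechanical enough to express in code. The sketch below is illustrative only; the record fields and function name are hypothetical, and only the five criteria themselves come from IRS's rules as summarized in this report.

def is_processable(application):
    """Return True if an offer application meets all five screening criteria."""
    return (
        application["current_form_used"]          # current version of Form 656 (not enforced since FY2003)
        and application["fee_included"]           # $150 application fee, unless waived
        and application["all_returns_filed"]      # all required federal tax returns filed
        and application["employment_taxes_current"]
        and not application["in_bankruptcy"]      # taxpayer not in a bankruptcy proceeding
    )

application = {
    "current_form_used": True,
    "fee_included": True,
    "all_returns_filed": True,
    "employment_taxes_current": True,
    "in_bankruptcy": False,
}
print(is_processable(application))  # True: the offer proceeds to the full-pay screen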
The step of rejecting an offer includes an administrative review: when OIC staff propose rejecting an offer, the Restructuring Act requires IRS to conduct an independent administrative review. If the offer is rejected, the taxpayer has the right to appeal the decision. Offers that are returned, withdrawn, or deemed unprocessable carry no appeal rights. If IRS accepts the offer, it monitors the taxpayer for 5 years to ensure that the taxpayer remains compliant with the agreement and future tax obligations.

From fiscal years 2000 through 2005, the OIC Program decreased in size, according to measures such as the number of offers received by IRS, the number of offers accepted, and the dollar amount accepted in compromises. During the same years, repeat offers, as a percentage of offers received, grew significantly.

According to a variety of summary measures, IRS's OIC Program has decreased in size. The number of offers received peaked in fiscal year 2003 and in fiscal year 2005 was lower than in any year since fiscal year 2000 (see table 1). Offers accepted and the year-end inventory of open offers both peaked in fiscal year 2001 and were lower in 2005 than in previous years. The amount of delinquent tax liability covered by accepted offers ranged annually from about $1.3 billion to $2.5 billion during fiscal years 2000 to 2005. The amounts of delinquent tax liability covered by accepted offers, the amounts accepted, and the amounts written off were lower at the end of the period than at the beginning, but with some upswing over the last 3 years. While not a measure of program size, the amount accepted as a percentage of the delinquent tax liability covered by accepted offers increased from 12 percent in fiscal year 2000 to 16 percent in fiscal year 2005. IRS attributes the decline in inventory to a combination of factors, including the centralized processing established in August 2001 and the decrease in offers received.

Repeat offers, as a percentage of offers received, grew significantly from fiscal year 2000 to 2005. Repeat offers occur when a taxpayer submits an offer that IRS does not accept, IRS closes the case, and the taxpayer then submits another offer covering at least some of the same tax liability. Some taxpayers submit several repeat offers. The number and percentage of repeat offers more than doubled from fiscal year 2000 to 2003 (see fig. 2). After that, the number declined, but because the number of offers received also declined, the percentage stayed about the same. In fiscal year 2005, 40 percent (or 29,527) of the offers received were repeat offers. Thousands of offers were multiple repeats. Of the 29,527 repeat offers received in fiscal year 2005, for example, 17,511 (or 59 percent) were second offers and 6,901 were third offers (see table 2). Taxpayers whose repeat offers were received in 2005 submitted 2.8 offers on average. IRS has not analyzed the reasons for the proportion of repeat offers, the substantial increase since fiscal year 2000 shown in figure 2, or the number of multiple repeats shown in table 2. There is a range of possible reasons. On the one hand, repeat offers could be the product of IRS attempts to reduce inventory and close offer cases more quickly. Closing cases quickly could leave some taxpayers still wanting to negotiate over the amount of their offers—they would have to submit repeat offers. On the other hand, repeat offers could be the result of taxpayer confusion or a tactic to delay collection action.
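Identifying a repeat offer in the data amounts to checking whether a later offer from the same taxpayer overlaps the tax liability of an earlier offer that was closed without acceptance. A minimal sketch follows, with a hypothetical record layout; the definition itself is the one given above.

def count_repeat_offers(offers):
    """offers: list of dicts sorted by date received."""
    unaccepted_periods = {}  # taxpayer id -> tax periods from offers closed without acceptance
    repeats = 0
    for offer in offers:
        periods = set(offer["tax_periods"])
        if periods & unaccepted_periods.get(offer["tin"], set()):
            repeats += 1  # covers at least some of the same liability
        if offer["disposition"] != "accepted":
            unaccepted_periods.setdefault(offer["tin"], set()).update(periods)
    return repeats

offers = [
    {"tin": "tp1", "tax_periods": ["2001", "2002"], "disposition": "rejected"},
    {"tin": "tp1", "tax_periods": ["2002"], "disposition": "accepted"},  # a repeat offer
]
print(count_repeat_offers(offers))  # 1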
Based on our analysis of OIC data, program performance has been mixed relative to five objectives—timeliness, quality, accessibility, compliance, and cost. We identified these objectives by reviewing the IRM and an IRS policy statement. IRS's performance on one measure of timeliness has improved, and the program has met its quality goals. However, some taxpayers wait more than 2 years to get an offer accepted, and cost per offer has increased. Some of IRS's measures mask this performance because IRS measures performance by offer and not by taxpayer. Furthermore, IRS has not researched the causes of some performance trends.

Based on the IRM and an IRS policy statement, we identified five performance objectives for the OIC Program:

timeliness—time taken to make a decision on an offer application,
quality—extent to which IRS follows OIC Program procedures,
accessibility—ease with which taxpayers eligible for offers can participate,
compliance—extent to which taxpayers who submit offers pay their delinquent and future tax obligations, and
cost—resources used to process offers.

IRS officials said that they track the program's performance with respect to timeliness, quality, and cost. They also said that they do not measure the program's success in terms of compliance and accessibility but agreed that these were aims of the program. IRS has numeric targets for timeliness and quality. The officials also view taxpayer service as another program objective. We agree that taxpayer service should be a program objective. In IRS's telephone assistance program, service is measured by a combination of timeliness, quality, and accessibility. While there may be other measures of service, we believe that service to taxpayers is covered by the above five objectives.

The OIC Program measures timeliness based on how long it takes to make a decision about an offer, not how long it has taken taxpayers, some of whom have repeat offers, to get their tax liabilities finally resolved. IRS has a 6-month target for making decisions on offers in COICs and a 9-month target for making decisions on offers in the field. Measured on an offer basis, IRS met its COIC 6-month target for 94 percent of offers and its field 9-month target for 62 percent of offers in fiscal year 2005. The picture looks different when timeliness is measured by how long it takes taxpayers to have their tax liabilities ultimately resolved—the elapsed calendar time from when IRS first receives an offer to when IRS makes a decision on a taxpayer's final offer. In fiscal year 2005, IRS took about 6 months on average to process onetime offers (both COIC and field) but took far longer to resolve the tax liabilities of taxpayers with repeat offers. The timeliness of onetime offers improved from an average of 8.4 months in fiscal year 2000 to an average of 5.6 months in fiscal year 2005, as shown in table 3. The average elapsed calendar time for taxpayers with repeat offers to get their cases finally resolved was over 22 months in fiscal year 2005—close to the same elapsed time as in 2000. Taking almost 2 years to resolve cases could result from the growth in the proportion of repeat offers or other factors, such as the time taxpayers wait before submitting repeat offers.
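Measuring timeliness per taxpayer rather than per offer is a simple aggregation over the same offer records. A minimal sketch under a hypothetical record layout follows; the measure itself, elapsed calendar time from receipt of the first offer to the decision on the final offer, is the one described above.

from datetime import date

def elapsed_months_by_taxpayer(offers):
    """offers: iterable of (tin, date_received, date_decided) tuples."""
    first_received, last_decided = {}, {}
    for tin, received, decided in offers:
        first_received[tin] = min(received, first_received.get(tin, received))
        last_decided[tin] = max(decided, last_decided.get(tin, decided))
    return {tin: (last_decided[tin] - first_received[tin]).days / 30.4
            for tin in first_received}

offers = [
    ("tp1", date(2004, 1, 15), date(2004, 7, 1)),   # first offer, rejected
    ("tp1", date(2005, 2, 1), date(2005, 8, 15)),   # repeat offer, accepted
]
# Measured per offer, each decision took about 6 months; measured per
# taxpayer, final resolution took about 19 months.
print(elapsed_months_by_taxpayer(offers))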
Table 4 shows timeliness from the perspective of accepted offers: 40 percent of offers accepted in fiscal year 2005 had elapsed calendar times of more than 12 months from IRS receipt of the first offer to final disposition of the last offer, and over 18 percent had elapsed calendar times of more than 24 months. Over 91 percent of the accepted offers taking more than 24 months were repeats. Even though IRS may be meeting its timeliness targets for processing most offers, measuring timeliness by offer masks the elapsed calendar time between receipt of a first offer and disposition of a final offer for taxpayers filing repeat offers. Furthermore, IRS has not analyzed the effect of the number and growth of repeat offers on timeliness. An analysis of the extent to which timeliness could be improved, if at all, by reducing repeat offers could help program managers make decisions about whether program changes to improve timeliness would be justified. For example, it might be less costly for IRS to deal once with a taxpayer, even if it takes more time to work the single case, than to process repeat offers.

Another issue is that IRS does not have a rationale for its numeric goals for processing times. In 2002, after we recommended that IRS set a timeliness goal for the offer program based on taxpayer needs, other benefits such as compliance, and program cost, IRS retained its old goal of 6 months for COIC offers and established a separate goal of 9 months for field offers. However, the two current goals still are not based on a documented analysis of taxpayer needs, other benefits, and program costs. Without measuring timeliness from the perspective of the taxpayer and without a rationale for timeliness goals set for taxpayers, IRS may be missing an opportunity to effectively drive program improvements from a taxpayer's perspective. As we discussed in other reports, industry guidance for customer service recommends setting goals based on how long customers are willing to wait for the service, the value of the service to the organization, and the costs of providing the service. Measuring timeliness from the perspective of taxpayers and setting goals based on taxpayer needs would inform IRS management of any gaps between actual timeliness and the goal, providing a better basis for making decisions about program improvements.

IRS officials expressed concern about whether setting timeliness goals by taxpayer would be feasible or desirable. In terms of feasibility, the officials said that because IRS does not know whether or when a taxpayer whose offer is not accepted will submit another offer, it would be difficult to develop a timeliness goal from the perspective of taxpayers. While it may be difficult to predict individual taxpayers' behavior, IRS has historical data that may be helpful for establishing such goals. For example, average timeliness for taxpayers from previous years might be a useful benchmark for setting goals for future average timeliness. In terms of desirability, IRS officials said a measure of timeliness from the perspective of taxpayers might be interpreted by some as an indication that offer policies might be compromised in order to meet the goal. However, IRS has quality measures intended to ensure that appropriate decisions are made in offer processing. Furthermore, IRS currently sets timeliness goals for offers despite the fact that the same incentives to compromise quality would seem to apply.
Measured by both IRS's internal customer accuracy measures and decisions by the Appeals function (Appeals), IRS has met its quality goals for the OIC Program (see table 5). In the COICs, IRS measures the customer accuracy rate using the embedded quality measurement system (EQMS), which was implemented in fiscal year 2004. IRS exceeded its goal of 94 percent for fiscal year 2005. For the OIC Program, EQMS measures how well employees follow offer processing procedures. Quality is measured by the percentage of sampled cases that met the standards for following the required steps, such as contacting the taxpayer or getting managerial review to process cases. IRS believes that offer examiners make more consistent decisions when they follow all the required processing steps. According to the OIC Program Manager, EQMS is better than the system previously used in the centralized processing centers, the collection quality measurement system (CQMS). CQMS is still being used in field offices but is to be phased out in fiscal year 2006 as EQMS is phased in. IRS also met its field quality goal of 84 percent using CQMS for fiscal year 2005. According to the OIC Program Manager, IRS plans to set a field goal using EQMS after collecting and analyzing data for field cases during the first year that EQMS is implemented in field offices.

Appeals data offer some additional evidence about the quality of OIC Program decisions, although the data are a limited quality indicator because only rejected offers can be appealed. Of the rejected offers that were appealed in fiscal year 2005, Appeals sustained 65 percent of the rejection decisions and decided to accept offers in 24 percent of the cases; the remaining 11 percent were withdrawn (see table 6). A decision by Appeals to accept an offer is not always the same as overruling the OIC Program. Appeals accepted some offers that the OIC Program had rejected because taxpayers provided Appeals with new financial information. An IRS study of 113 cases in which offers were accepted in Appeals concluded that 38 percent of the offers were accepted because taxpayers provided new financial information rather than because Appeals disagreed with the OIC Program's decisions. Table 6 also shows some improvement in the sustention rate from fiscal years 2002 through 2005.

Declines in OIC participation rates since fiscal year 2000 raise questions about whether accessibility has decreased. We define accessibility as how easy it is for potentially eligible taxpayers to participate in the OIC Program. IRS officials agreed with this definition but said that they do not measure accessibility and do not monitor changes in accessibility over time. Tracking accessibility could provide information about the effectiveness of efforts to reduce barriers to program participation for taxpayers wishing to make legitimate offers. For example, IRS recently made changes to the offer application form intended to make the application process easier for taxpayers to understand. Furthermore, the Taxpayer Advocate, the American Institute of Certified Public Accountants, and the National Association of Enrolled Agents have raised concerns about barriers to OIC Program access. They cited confusion about the offer requirements and procedures, the lengthy time needed to get offers resolved, and the difficulty of getting what they believe are reasonable offers accepted as deterrents to taxpayers' participation in the program.
The Taxpayer Advocate stated that some practitioners are often not willing to recommend the program to their clients because of these issues. A small number of practitioners we spoke with, as well as the practitioner organizations we contacted, made the point that the OIC process is too burdensome for taxpayers. Without a measure of accessibility, it is difficult to assess the merits of these concerns. Measuring access, or ease of participation, may require questioning taxpayers about why they did or did not participate in the program. Such direct evidence does not currently exist. However, it is possible to measure participation with readily available data. While not the same as accessibility, trends in participation rates might indicate whether changes in accessibility have occurred. A measure of participation would compare OIC Program participation to the pool of potentially eligible taxpayers, as sketched below. Over the years 2000 to 2004, the number of accepted offers declined by more than half, as shown in figure 3. Over the same years, one proxy measure of potentially eligible taxpayers, the number of delinquent taxpayers, stayed roughly constant at 5.9 million delinquent taxpayer accounts in fiscal year 2000 and 6.0 million in 2004. Not all delinquent taxpayers are eligible for the OIC Program, but it seems likely that the number of potentially eligible taxpayers is correlated with the number of delinquent taxpayers, so an increase in delinquent taxpayers would also increase the number of taxpayers potentially eligible for an offer. The fact that accepted offers declined by more than half while the number of delinquent taxpayers stayed roughly constant raises the question of whether something has happened to reduce the program's accessibility. The two trends do not demonstrate that accessibility has declined because they do not directly measure ease of use. It is possible that taxpayers decided, for reasons unrelated to accessibility, to reduce their participation in the program. However, it is also possible that concerns like those expressed by the Taxpayer Advocate explain the decline. IRS has not done an analysis to determine whether the ease of using the program has changed, and, if so, why.

IRS officials told us that the reason they do not measure accessibility is that the program is available to all eligible taxpayers and that taxpayers self-select their participation. They also said that IRS has not measured the decline in the size of the program relative to changes in the pool of potentially eligible taxpayers. On the other hand, IRS has taken steps, such as requiring a $150 offer application fee and revising the offer application form, intended to reduce the number of unrealistic offers without reducing the accessibility of the program to potentially eligible taxpayers. In addition, the OIC Program Manager told us that to determine whether there are eligible taxpayers who do not participate in the program, IRS is considering studying whether some taxpayers with delinquent accounts are eligible for offers. Without a measure of accessibility that gauges ease of use, IRS does not know whether accessibility has changed over time. As a consequence, IRS does not know whether the declines in participation rates indicate a decline in accessibility, nor does IRS know whether the concerns raised by the Taxpayer Advocate and others about a decline in accessibility are correct.
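A worked sketch of the participation-rate proxy follows. The delinquent-account counts are the report's figures; the accepted-offer counts are hypothetical placeholders, since the text above states only that acceptances fell by more than half over 2000 to 2004.

delinquent_accounts = {2000: 5_900_000, 2004: 6_000_000}  # from the report
accepted_offers = {2000: 25_000, 2004: 11_000}            # hypothetical placeholders

for year in (2000, 2004):
    rate = accepted_offers[year] / delinquent_accounts[year]
    print(f"FY{year}: {rate:.3%} of delinquent accounts resolved by accepted offers")

# With these inputs the rate falls from about 0.42% to 0.18%; a roughly
# constant eligible pool with sharply fewer acceptances is what raises the
# accessibility question discussed above.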
IRS also would be unable to evaluate whether its efforts to reduce inappropriate offers without reducing accessibility for eligible taxpayers have been successful. There may be more than one way to measure accessibility. One way would be to measure program participation rates and, if participation is changing, follow up by questioning taxpayers about whether ease of use had changed. Potentially eligible taxpayers could be asked, for example, whether they perceived barriers to participating in the program. If accessibility is found to be declining, then analysis of what IRS did to cause the decline would be useful for making decisions about whether and how to address it.

IRS Policy Statement P-5-100 and the IRM state that by accepting offers, the OIC Program should provide taxpayers a fresh start toward future voluntary compliance with their filing and payment requirements. IRS rejects offers on the basis of a financial analysis of taxpayers' assets, expected income, and reasonable living expenses—an analysis that IRS uses to show whether taxpayers have the ability to pay more of their tax debt than they offered to pay in their OIC applications. In accordance with the compliance objective for accepted offers, IRS has a unit called Monitoring OIC (MOIC), which monitors the compliance of taxpayers with accepted offers for 5 years, and possibly beyond 5 years in cases of deferred payment offers, where payments are made over the remaining life of the collection statute. MOIC, however, does not routinely report to OIC management its aggregate data on taxpayer compliance, which would show trends in the compliance of taxpayers with accepted offers.

In 2004, IRS completed a study that addressed several aspects of the OIC Program, including compliance. According to the study, about 80 percent of individual taxpayers with accepted offers from calendar years 1995 and 2001 remained in compliance with filing and payment requirements, excluding taxpayers who had received only one collection notice. The study also examined the compliance of taxpayers whose offers were rejected, withdrawn, or returned. The study found that follow-up collection actions had not been completed in many cases, even though the taxpayers had submitted offer applications stating a willingness and ability to pay part of their delinquent tax debt and even though IRS had concluded, for rejected offers, that the taxpayers could pay more than the amount they offered. For example, 42 percent of offers rejected during the study period, calendar year 1998 to September 8, 2003, were pending collection action, and 15.7 percent had been declared currently not collectible (see table 7). IRS created a new unit called the Hand-Off Unit partly because the 2004 study concluded that rejected offers languished without further collection action. The Hand-Off Unit takes rejected or withdrawn cases and initiates appropriate collection procedures with taxpayers using the financial information gained during the OIC process. Like MOIC, the Hand-Off Unit currently does not analyze compliance trends on a routine basis, although officials told us that IRS would eventually have that capability but has not set a date. To properly assess IRS performance in achieving its compliance objective, IRS also would need to collect and assess such trend information on a periodic basis. The 2004 study represents a useful one-time assessment of OIC's compliance benefits and uses an appropriate measurement unit—the taxpayer.
However, it is no longer useful for ongoing management decisions because the data in the study are now about 3 to 11 years old. The study period predated many of IRS's recent program changes, which might affect the program's performance with respect to compliance. For example, the new Hand-Off Unit, which was started after the 2004 report, may help achieve greater compliance by taxpayers with rejected or withdrawn offers, but IRS will not know whether it works if it does not track overall compliance trends. The OIC Program Manager said that IRS found the 2004 study too costly to repeat, requiring thousands of staff hours from the OIC Program and expertise from OPERA. However, only a portion of the work for the 2004 study was devoted to studying compliance; the Program Manager said that he did not know how much it would cost to repeat the compliance portions alone. Further, IRS does not use alternatives for the kind of compliance-benefit information the 2004 study provided, although such alternatives exist and some are lower cost. For example, IRS could repeat only the compliance portion of its OPERA study or use the existing status reports collected by MOIC, which cover taxpayers who default on their offers but are not routinely aggregated for OIC managers, to monitor trends in the compliance of taxpayers with accepted offers. The only additional cost of using the MOIC reports would be aggregating the data. The Treasury Inspector General for Tax Administration (TIGTA) also conducted a file review of accepted offers to assess aggregate compliance performance, which the OIC Program could use as a model. According to a TIGTA audit manager, the TIGTA study was something IRS should be able to do at a lower cost than the OPERA report. Using the MOIC data that are already available or employing the TIGTA approach would not yield as elaborate a study as IRS's 2004 study, but the alternative methods would provide information more useful to managers than no information at all. We previously concluded that having the proper performance measures in place is critical for making successful program adjustments and for assessing achievement of objectives. Because aggregate compliance trends are not tracked and analyzed periodically, IRS does not know the effects that recent program changes have had on taxpayer compliance; furthermore, IRS will have greater difficulty determining what additional program changes may be needed to ensure its best performance in achieving its compliance objective. Trend information on compliance also is necessary to assess the performance of IRS's new Hand-Off Unit. In our 2002 report, we said that IRS should develop evaluation plans before starting new initiatives; it did not do so in this case.

Productivity of both COIC and field staff, measured by the ratio of offers closed per full-time equivalent (FTE), declined from fiscal years 2003 to 2005 (see tables 8 and 9). While productivity improved from fiscal years 2002 to 2003, the productivity declines in the following years resulted from IRS reducing offer processing staff at a lower rate than the decline in offers closed. For example, the average number of closed offers per FTE in COIC decreased from 251 to 165 from fiscal years 2003 through 2005. Other factors being equal, decreases in productivity increase cost per offer. If IRS had maintained productivity at fiscal year 2003 levels, the agency would have had the flexibility to reallocate a substantial number of FTEs to other areas.
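The size of that flexibility follows from simple arithmetic. A sketch under stated assumptions: the closures-per-FTE figures are the COIC numbers from the report, while the FY2005 FTE count is a hypothetical input chosen only to illustrate the calculation.

# Offers closed per FTE in COIC, from the report:
per_fte_2003 = 251
per_fte_2005 = 165

ftes_2005 = 321                      # hypothetical FY2005 COIC staffing level
closures_2005 = ftes_2005 * per_fte_2005

# FTEs that the same closure volume would have required at FY2003 productivity:
ftes_needed = closures_2005 / per_fte_2003
print(round(ftes_2005 - ftes_needed))  # about 110 reallocatable FTEs with these inputs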
In fiscal year 2005, IRS would have been able to reassign 110 FTEs in COICs and 7 FTEs in field offices. As the inventory of offers, which affects the number of offer closures, declined in fiscal years 2004 and 2005, IRS did reduce FTEs, particularly in the field. However, the number of offers closed declined more rapidly than the number of FTEs, hence the decline in productivity. In January 2006, IRS officials told us that they anticipate making additional staff reductions in fiscal year 2006. OIC officials provided some possible reasons for the decline in productivity, including an increase in offer complexity and a plan to keep more staff working on offers than might have been necessary in order to ensure that service to taxpayers was maintained. Over fiscal years 2003 to 2005, however, there is some evidence that offers have not grown more complex. Figure 3 does not show a noticeable change in case complexity. For example, the percentage of not processable offers, the simplest and fastest cases to close, was somewhat higher in fiscal year 2005 than in fiscal year 2003. With respect to the desire to maintain service to taxpayers, IRS has shifted collections staff from one type of case to another. Thus, IRS has the flexibility to move staff to maintain service in the face of an unexpected upswing in offer submissions, especially since a pool of experienced OIC processors would be available. OIC officials told us that since fiscal year 2001, they have substantially reduced the OIC Program's costs, particularly in field offices. Based on IRS information, the number of revenue officers assigned to OIC cases has declined from 1,078 as of April 2001 to 267 in April 2006—a reduction of 811 revenue officers. In March 2006, IRS's OIC Program Manager told us that because IRS will start processing offers from taxpayers filing simpler Schedule C forms at the COICs later in the year, it will further reduce the number of revenue officers in field offices by 100.

Reliable and complete data on offer mills' involvement with the OIC Program do not exist, preventing firm assessments of the extent to which offer mills affect OIC processing. However, limited evidence from IRS, states, and our own analysis, taken together, suggests that offer mills do not have a large effect on OIC processing. There is, however, anecdotal evidence that offer mills may harm taxpayers. IRS has created procedures and guidance designed to mitigate potential negative effects of offer mills on OIC processing, although the effectiveness of the procedures and guidance cannot be measured.

IRS collects some information about professional tax practitioners, who assist taxpayers making offers, but the data are not sufficient for distinguishing offer mills from legitimate practitioners. For purposes of this report, an offer mill is a professional tax practitioner that consistently uses negligent or deceptive practices to exploit taxpayers and the OIC Program by making misleading claims and submitting unrealistic offers. For example, an offer mill might use deceptive advertising, creating a false expectation that the recipient of the advertisement would qualify for an offer or save as much as the advertisement suggests. An offer mill also might file incomplete or repeat offers to exploit the rule that suspends collection proceedings while offers are being considered. IRS does collect two types of information about professional practitioners on the OIC application, but this information is not always submitted with the application.
First, the OIC application asks enrolled agents to identify themselves on the form and to submit a power of attorney (POA) Form 2848 with the taxpayer's application. In addition, the form asks taxpayers to identify anyone who helped prepare the application. However, non-enrolled agents are not required to sign the offer application. A manager at the Brookhaven COIC said that IRS has had cases in which it has learned that a professional practitioner was used but not identified in the offer application. IRS designates some offers as submitted solely to delay the payment of taxes, which IRS tracks in the AOIC. The solely-to-delay designation applies to any offer—whether submitted by the taxpayer alone or with the assistance of a POA. IRS considers an offer submitted solely to delay as one that is not substantially different from a previous offer that IRS rejected or returned. Solely-to-delay offers could be linked to POA or other practitioner data in the AOIC, but those data are of limited usefulness because professional tax practitioners are not always identified on OIC applications. Additionally, because determining whether an offer is submitted solely to delay is subjective and may require enough submissions to notice a pattern, IRS may not always detect when an offer has been submitted solely to delay.

The best available information on offer mills from IRS—although limited by the same factors described in the previous section—suggests that offer mills do not have a large effect on OIC processing. An IRS study published in 2004 found that a small number of offers submitted with the assistance of professional practitioners were abusive and concluded that offer mills were not driving abuse in the system. The OIC Program can make referrals to OPR regarding suspected practitioner abuse but rarely does so. In November 2005, OPR was investigating only 36 cases involving OIC and practitioners. An official with the Maryland OIC program told us that the state program has had no significant problems with offer mills or other practitioners in processing OIC applications there. Furthermore, a representative of the Federation of Tax Administrators, an organization of state tax officials, said that the problems state OIC programs have with tax practitioners generally have more to do with consumer rights issues than with tax collection. In fiscal year 2005, there were 972 offers with POAs that were returned as "solely to delay." This was about 1 percent of all cases closed in 2005. The effect of these cases on processing may have been small. IRS returned 83 percent of the solely-to-delay offers that had POAs in 6 months or less.

Anecdotal evidence also indicates that misconduct by offer mills may have harmed some taxpayers even when there was no effect on OIC processing. For example, the Connecticut Attorney General's Office investigated one company offering OIC preparation services because the company charged taxpayers for submitting offers but then did not send the offers to IRS. In 2005, the state of Missouri settled with a firm over deceptive advertising tactics and for failing to complete OIC services as promised. OIC processing was not adversely affected in these cases. The Taxpayer Advocate also told us about one case in which an offer mill charged such a large fee that the taxpayer ended up filing for bankruptcy rather than compromising with IRS. IRS officials said that current procedures reduce the negative effects that offer mills might otherwise cause.
For example, in 2004, IRS issued a consumer alert about abusive offer mills because of concerns about potentially deceptive advertising tactics used in the OIC preparation industry. The alert advises taxpayers to be wary of promoters making unrealistic claims about the OIC Program. According to the alert, "Some promoters are inappropriately advising indebted taxpayers to file an OIC application with the IRS. This bad advice costs taxpayers money and time." IRS also has given instructions to its OIC processing staff on identifying offer mills that might be violating IRS's rules for enrolled agents and on referring potential violators to OPR. OIC process examiners and offer examiners sometimes work directly with taxpayers rather than through offer mills. They do this because, while taxpayers may be making good-faith efforts to pay what they can of their taxes by compromising, offer mills may not be making good-faith efforts to help the taxpayers. IRS officials also said that the $150 OIC application fee discourages frivolous offers.

IRS has established formal means to notify taxpayers of their appeal rights, including providing information about appeal rights on the offer application form and in the offer rejection letter that IRS sends taxpayers. In addition, IRS's Web site and some IRS publications contain information for taxpayers on rights and responsibilities in appealing rejected offers. The offer application package (Form 656) contains information on taxpayers' rights to appeal rejected offers. Step 7 of the application process, "What to Expect after the IRS Receives Your Offer," explains what a taxpayer can expect if IRS rejects an offer. Specifically, the application states that taxpayers will be sent a letter explaining why their offers were rejected and describing their right to submit an appeal. IRS's Web site also provides information on appealing rejected offers, including links to information about appeal rights and how IRS reviews appeals. The Web site's resources include Tax Topic 204, Offers in Compromise; the Collection Appeal Rights link; IRS Publication 5, Your Appeal Rights and How to Prepare a Protest If You Don't Agree; and a video clip on the offer process with information on how to appeal a rejected offer. The IRS AOIC database contains entries intended to document that rejection letters, with information on how to appeal, were sent to taxpayers. We tested the AOIC database to ascertain whether such entries were made. Our limited review did not indicate any problems in documenting whether rejection letters and appeals instructions were being sent as required. We did not contact taxpayers to determine whether they actually received the letters. The percentage of rejected offers that were appealed indicates that many taxpayers were aware of their appeal rights. The percentage of offers appealed ranged from 30 percent to 51 percent (see table 10).

IRS's ETA regulations are consistent with the provisions of the Restructuring Act, which were broadly written. While IRS has annually accepted hundreds of offers based on ETA, non-hardship ETA acceptances have been rare. However, hardship ETA offers are not meaningfully distinct from DATC offers. The lack of distinction between DATC and hardship ETA offers causes unnecessary program complexity and confusion for taxpayers and tax practitioners. IRS's ETA regulations are consistent with the changes made to the OIC provisions by the Restructuring Act.
The law required IRS to develop guidelines for determining when an OIC is adequate and should be accepted to resolve a dispute. The OIC provisions in the Restructuring Act were written broadly and did not specify criteria for what constitutes an adequate offer or when an offer is appropriate for resolving a dispute. IRS and Treasury staff who drafted the regulations incorporated language from the Restructuring Act's conference report. According to the conference report, the existing OIC regulations should be expanded to permit IRS to consider factors beyond DATL or DATC in determining whether to accept a compromise. The conference report also stated that it was anticipated that IRS would take into account factors such as equity, hardship, and public policy where a compromise of an individual taxpayer's income tax liability would promote ETA. Neither the Restructuring Act nor the conference report defined the term "effective tax administration," but IRS sought to incorporate the conference report's ETA language into its regulations. In addition to using the ETA language from the conference report, IRS's regulations created two categories of ETA offers—non-hardship, which includes offers granted for reasons of equity and public policy, and hardship, which includes offers granted because full payment would cause financial strain for the taxpayer.

IRS accepted hundreds of ETA offers each fiscal year from 2001 to 2005. A small number of those acceptances were non-hardship ETA offers (see table 11). In fiscal year 2005, IRS accepted 467 offers on an ETA basis, 30 of them non-hardship ETA offers. The low number of non-hardship ETA acceptances is consistent with IRS guidance, which says that IRS should accept non-hardship ETA offers only in rare instances. IRS officials said that non-hardship ETA acceptances should be infrequent to keep the OIC Program from becoming an insurer of last resort. For example, an IRS official said that IRS would be wary of compromising with a business that could afford to pay its taxes but whose payroll manager embezzled company funds, if the company were negligent in monitoring the manager, because compromising might lead other businesses to become less diligent in protecting against such losses. On the other hand, the Taxpayer Advocate has said that making non-hardship ETA offers difficult to obtain may erode taxpayers' faith in the fairness of the income tax system. The Taxpayer Advocate and representatives of tax practitioner groups also have said that the low number of non-hardship ETA acceptances violates Congress's intent in passing the Restructuring Act, which was to make compromises easier for taxpayers to reach by expanding the basis on which compromises would be made. As already noted, the offer provisions of the Restructuring Act are broadly written, and IRS's ETA regulations are consistent with the act. The act did not define criteria for accepting offers. Consequently, whether the number of non-hardship ETA offers IRS accepted satisfies Congress's intent is not clear.

Although consistent with the law, the regulations and guidance for reviewing hardship ETA offers are so similar to the rules and guidance for determining acceptable DATC offers that the two types of offers are effectively indistinguishable from each other. For both types of offers, doubt exists that a taxpayer can afford to fully pay the tax liability owed.
IRS differentiates ETA offers (both hardship and non-hardship) from DATC offers by comparing a taxpayer's equity in assets and future income with the taxpayer's tax liability (see fig. 4). If equity in assets plus future income is less than or equal to the tax liability, then IRS processes the offer as DATC. If equity in assets plus future income is greater than the tax liability, then IRS processes the offer under ETA rules. IRS considers ETA only after it has determined that DATC does not apply. According to IRS guidance, taxpayers are eligible for ETA offers only when they can "full pay" the liability out of their equity in assets and future income. Once IRS determines that it will consider an offer as DATC or ETA, it calculates acceptable offer amounts following the procedure in figure 5.

Non-hardship ETA offers are distinguishable from DATC offers in IRS rules and guidance because the criteria used to evaluate non-hardship ETA do not overlap with DATC. However, the allowable living expenses that reduce DATC offer amounts are similar to the criteria IRS uses to determine whether taxpayers qualify for hardship ETA offers, making the difference between these two types of offers unclear. For example, a taxpayer applying for a DATC offer with medical expenses would include the medical care costs in calculating an acceptable offer amount; however, the IRM also lists medical expenses as a factor that would lead to consideration for hardship ETA. Examples from IRS guidance and regulations do not clarify the distinction between an acceptable hardship ETA offer and an acceptable DATC offer. One example (see fig. 6) shows that taxpayers can qualify for ETA offers because of dependent care expenses; however, dependent care is also a factor that IRS considers as an allowable expense under DATC. Another example (see fig. 7) shows that taxpayers can qualify for a hardship ETA offer if fully paying their taxes would jeopardize their ability to pay basic living expenses; however, such expenses also make up a group of factors that reduces a taxpayer's total income for determining the amount of an offer under DATC. IRS officials said that although overlap exists between DATC and hardship ETA, taxpayers who qualify for hardship ETA today would not have qualified for DATC before the Restructuring Act because IRS did not have the authority to compromise when taxpayers' equity and income exceeded their tax liability. However, in light of the additional legal authority that IRS acknowledges the Restructuring Act granted, the distinction IRS makes in its rules and guidance between current DATC and hardship ETA offers is not meaningful. Based on our review, only ETA cases accepted on non-hardship grounds are meaningfully distinct from DATC offers, because the criteria for accepting them are different.

Instructions on applying for ETA also cause unnecessary program complexity, and ETA rules and regulations cause confusion among taxpayers and professionals, according to the Taxpayer Advocate, practitioner organizations, and individual tax professionals with whom we consulted. The OIC Program Manager said that it does not matter whether taxpayers check ETA, DATC, or DATL on their applications because each offer is evaluated for all three. Yet taxpayers still must check a box on the OIC application form (Form 656) indicating which type of offer they seek. Having to determine which box to check adds complexity to the process for taxpayers and tax practitioners.
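The threshold test in figure 4 that sorts offers between the two tracks is itself simple. A minimal sketch follows, with hypothetical names; the rule is IRS's, as summarized above.

def offer_track(equity_in_assets, future_income, tax_liability):
    """Route an offer to DATC or ETA review per the fig. 4 comparison."""
    if equity_in_assets + future_income <= tax_liability:
        return "DATC"   # doubt as to collectibility: taxpayer cannot fully pay
    return "ETA"        # could fully pay; considered under hardship or non-hardship ETA rules

print(offer_track(20_000, 15_000, 50_000))  # DATC
print(offer_track(40_000, 30_000, 50_000))  # ETA

The overlap discussed above arises not in this routing step but in the criteria applied afterward, where the allowable-living-expense factors used for DATC resemble the hardship factors used for ETA.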
The choice among offer types also adds complexity for IRS, which must determine which type of offer the taxpayer has made (i.e., DATC, DATL, or ETA). One professional tax practitioner told us that in filling out an OIC application for a client, she checked more than one box even though, according to IRS definitions, the types are mutually exclusive. Confusion and complexity may increase the burden for some taxpayers—the time and costs needed to prepare an offer application. Furthermore, as was discussed earlier, the Taxpayer Advocate has said that confusion about offer requirements and program procedures may reduce the program's accessibility. Because of the wording of the instructions, taxpayers applying for hardship ETA also face the paradoxical process of proving that they can pay the tax liability and then explaining in writing why they cannot afford to pay it. According to the definition in the instructions, ETA offers have no "doubt as to collectibility," but the instructions also say that the applicant must explain the circumstances that would justify an offer—circumstances equivalent to an inability to pay. The National Association of Enrolled Agents said that IRS's ETA rules are complex and difficult to understand, and the American Institute of Certified Public Accountants has said that the ETA regulations do not provide sufficient guidance for determining which OICs qualify as ETA offers. The Taxpayer Advocate and other professionals also have said that it is difficult to know what types of offers will qualify for ETA based on the ETA regulations and guidance.

Proposed legislation, originally introduced in the Senate, would require taxpayers to make a partial payment with their offer applications. Taxpayers seeking a lump-sum offer would be required to pay 20 percent of the amount of the offer as a nonrefundable down payment. The term "lump-sum offer" means any offer of payments made in five or fewer installments. Alternatively, a periodic payment offer would have to be accompanied by payment of the amount of the first proposed installment. The new provision also gives the Secretary of the Treasury authority to issue regulations waiving any such payment. Finally, no user fee would be imposed on any offer accompanied by a payment. IRS would have 60 days from enactment to implement the changes.

The proposed partial payment requirement raises several questions for IRS. One is how the partial payment would apply in the case of repeat offers. For second and subsequent offers, would another partial payment be required? Is the payment nonrefundable for every disposition category? Should the rules for partial payments be consistent with the current rules for processing fees? Currently, if an offer fails to meet IRS's processability criteria, IRS returns the $150 processing fee to taxpayers along with their offer applications. Another question is whether the proposal might affect the program's accessibility. Would a partial payment requirement discourage eligible taxpayers from submitting offers? As discussed earlier, IRS does not monitor accessibility; without a measure of accessibility, the impact of a partial payment on accessibility might not be easily determined. A final question is whether 60 days is enough time to implement the partial payment requirements. IRS officials stated that computer systems would require changes to accommodate the imposition of partial payments.
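To make the proposed payment rule concrete, here is a minimal sketch under the thresholds described above; the function and parameter names are hypothetical.

def required_partial_payment(offer_amount, num_installments, first_installment):
    """Payment that would have to accompany an offer under the proposal."""
    if num_installments <= 5:            # a "lump-sum offer" per the proposal
        return 0.20 * offer_amount       # nonrefundable 20 percent down payment
    return first_installment             # periodic payment offer

print(required_partial_payment(10_000, 1, 10_000))  # 2000.0
print(required_partial_payment(12_000, 24, 500))    # 500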
We did not determine how long it would take IRS to make the changes. Because some delinquent taxpayers will always be unable to fully pay their tax debts, IRS's OIC Program is necessary to ensure that taxpayers pay what they can and have a “fresh start” toward complying with their future obligations. The performance of the program is important because factors like the timeliness of offer decisions can have a large impact on taxpayers in difficult financial straits and because the IRS resources devoted to the program are significant. Opportunities exist to make immediate improvements to the program and lower costs. First, staffing adjustments have not kept pace with declines in cases in recent years, resulting in lower productivity. Reducing staffing to increase productivity to its recent levels would lower program costs. Second, because the distinction between DATC and ETA hardship offers is not meaningful, the program is unnecessarily complex. Practitioners and others have complained about the resulting confusion and burden on taxpayers, which may discourage taxpayers from using the program. Costs to taxpayers and IRS could be reduced by eliminating the distinction. The success of the program also depends on how well IRS management understands the reasons for the program’s performance. One step in understanding performance is measuring it. IRS’s measurement of timeliness on an offer basis masks how long it takes taxpayers to obtain a final decision resolving their liabilities. IRS’s tracking of accessibility is also incomplete because it is not done relative to the size of the pool of potentially eligible taxpayers. IRS’s tracking of the future compliance of program participants is also incomplete because it does not routinely measure compliance. Another step in understanding performance is setting goals. Numeric goals provide objective criteria for assessing performance. The numeric goals for OIC timeliness still are not based on an analytical assessment of taxpayer needs and other benefits, and the goals are set for each case rather than for taxpayers. A third step in understanding performance is analysis that determines the causes of performance. By understanding the causes of performance, IRS management can make better-informed decisions about how to improve performance. IRS’s 2004 compliance study is an example—it led to the creation of the Hand-Off Unit. Because IRS has implemented several recent improvement initiatives, such as the Hand-Off Unit, additional analysis is necessary to understand their impact on compliance. Further, IRS has not analyzed other trends. IRS has not determined the causes of the large growth in repeat offers since 2000, despite their impact on timeliness from a taxpayer’s perspective. In addition, IRS has not analyzed factors that affect trends in the OIC Program’s accessibility. Without such an analysis, IRS will not know whether the declining OIC participation rate is an indication of a decrease in accessibility. We recommend that the Commissioner of Internal Revenue: 1. Take the following steps to immediately improve the OIC Program: adjust staffing levels to increase productivity and reduce cost per offer, unless IRS can demonstrate that case complexity has increased; and eliminate the distinctions between hardship ETA and DATC in the application, instructions, and procedures to simplify the program. 2.
Develop meaningful measures of performance, including a measure of processing timeliness for taxpayers, a measure of accessibility that gauges ease of participation in the program, and a measure of compliance for all program participants. 3. Set processing timeliness goals for taxpayers that are based on an assessment of taxpayer needs and other benefits. 4. Conduct analyses of the reasons for performance trends in order to determine causes of the growth in repeat offers; determine how repeat offers affect timeliness and, if justified based on the results, take action to meet timeliness goals; determine the reasons for trends in accessibility; and determine the effectiveness of the Hand-Off Unit. If Congress’s intent regarding the number of ETA non-hardship offers has not been met to date, Congress should provide IRS with more specific guidance on the criteria for such offers. In his April 14, 2006, letter, the Commissioner of Internal Revenue (see app. III) said that he partially agrees with our recommendations. IRS provided separate technical comments, which we incorporated into our report where appropriate. The Commissioner indicated that IRS believed that eliminating the distinction between economic hardship and doubt as to collectibility offers may not be the best approach but said that IRS is open to suggestions to clarify offer instructions and will consult with practitioner groups and the Taxpayer Advocate on whether more clarity is needed. The Commissioner said that the distinction is important because the Restructuring Act gave IRS additional authority to accept offers. The Commissioner further stated that the distinction has meaning for potential program participants. However, as we stated in the report, the regulations and guidance for reviewing hardship ETA offers are so similar to rules and guidance for determining acceptable DATC offers that the two types of offers are effectively indistinguishable from each other. IRS’s examples of acceptable hardship ETA offers (see pp. 36 and 37 of this report) further illustrate that they are not meaningfully distinct from DATC offers because they demonstrate that there is doubt that such taxpayers could both provide for their living expenses, which IRS allows for all offers, and pay their tax liabilities. This makes the offers in the examples similar to DATC offers. Considering this, the OIC Program could be simplified by eliminating the differences between hardship ETA and doubt as to collectibility offers. The Commissioner agreed with our recommendation that IRS adjust staffing levels to increase productivity and reduce cost. The Commissioner said that IRS does not agree that timeliness measured by taxpayer rather than by individual offer would be an effective measure of performance. IRS said that its existing timeliness measure by OIC case closure is sufficient, but it did agree to analyze the effect of repeat offers on timeliness. An analysis of the extent to which timeliness could be improved, if at all, by reducing repeat offers could help program managers make decisions about whether program changes to improve timeliness would be justified. However, as the report states, it might be less costly for IRS to deal once with a taxpayer, even if it takes more time to work the single case, rather than have to process repeat offers. IRS agreed that it could do a better job of compiling information on OIC Program compliance and will explore methods for doing so.
With respect to measuring accessibility, the Commissioner said that IRS is concerned about the perception that the OIC Program is less accessible than in the past. He said that IRS would use a customer satisfaction survey to gain insights into accessibility and might do additional research about barriers to entering the program. As we stated in the report, tracking accessibility could provide information about the effectiveness of efforts to reduce barriers to program participation for taxpayers wishing to make legitimate offers. With respect to setting timeliness goals for taxpayers based on an assessment of taxpayers’ needs and other benefits, the Commissioner said that IRS’s current timeliness goals are based in part on such considerations but also said that IRS would consider whether taxpayer feedback reveals additional taxpayer needs. However, as the report states, IRS was unable to provide any analytical support for its 6- and 9-month processing goals. Furthermore, IRS does not set goals from the perspective of taxpayers. We continue to believe that measuring timeliness from the perspective of taxpayers and setting goals based on taxpayer needs would inform IRS management of any gaps between actual timeliness and the goal, providing a better basis for making decisions about program improvements. The Commissioner agreed to analyze the causes of the growth in repeat offers. He also agreed to study how repeat offers affect timeliness. As already noted, the Commissioner agreed to study accessibility using a customer satisfaction survey of taxpayers who participated in the OIC Program. While such a survey may be informative, its benefits may be limited because it does not question nonparticipants. As the report states, measuring access may require questioning taxpayers about why they did not participate in the program. The Commissioner agreed to study the effectiveness of the Hand-Off Unit. As agreed with your offices, unless you publicly release the contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To identify recent trends in Offer in Compromise (OIC) Program performance, we analyzed information and program statistics in the Internal Revenue Service’s (IRS) Automated OIC database (AOIC). Specifically, we developed independent statistical trend analyses for four of five key performance objectives—timeliness of case processing, quality, accessibility, and cost. We reviewed OIC Program data primarily from fiscal years 2000 through 2005. To determine how well IRS understands the reasons for the trends, we interviewed key officials in IRS’s SB/SE Division responsible for collection policy and the OIC Program. We also reviewed available evaluations IRS had conducted in examining these trends.
To develop trend information on the timeliness of case processing, we (1) separated offers disposed by the OIC Program from those disposed by the Appeals function (Appeals) and (2) identified the number of onetime and repeat offers and developed statistics on processing times for those offers. Some taxpayers make only one effort to compromise a tax liability. We call these offers onetime offers. Other taxpayers make multiple attempts to compromise a tax liability. We call the first of these attempts an initial offer and each subsequent attempt a repeat offer. To generate statistics on processing times for the various disposition types, we developed broader disposition categories by aggregating the disposition categories recorded in the AOIC database. For more information about how we identified repeat offers and developed disposition categories, see appendix II. To assess trends in the quality of the OIC Program, we collected information and interviewed IRS officials on the accuracy rates from IRS’s embedded quality measurement system (EQMS) for the centralized processing centers. Field locations only recently implemented EQMS; consequently, we used accuracy rates from IRS’s collection quality measurement system for the field locations. We compared the program’s accuracy rates against accuracy goals to assess the extent to which IRS staff followed procedures and made appropriate decisions. We also compiled and analyzed data on offer decisions by Appeals from the AOIC database to determine trends by year. Regarding the OIC Program’s accessibility, we compiled statistics on offer receipts and the dispositions of these receipts from the AOIC database. To develop information on the pool of potentially eligible taxpayers for the program, we obtained data on IRS taxpayer delinquent accounts. We interviewed IRS officials about the measures they used to determine accessibility and also interviewed representatives of tax practitioner organizations and the National Taxpayer Advocate of the Taxpayer Advocate Service for their views about the program’s accessibility. To assess IRS’s efforts to measure compliance, we reviewed reports on compliance by IRS’s Office of Program Evaluation and Risk Assessment (OPERA). We used IRS policy statement P-5-100 and information on the OIC Program objectives from the Internal Revenue Manual as criteria for defining compliance, which the OIC Program Director generally confirmed. We also drew on our 2002 study of IRS’s OIC Program, in which we recommended that IRS make plans to conduct evaluations of initiatives that affect the program’s performance. To learn about possible alternatives for measuring compliance, we consulted an official with the Treasury Inspector General for Tax Administration about its methods for studying compliance in one of its reports. We interviewed IRS officials who were knowledgeable about the Monitoring OIC (MOIC) Unit and about the OIC Hand-Off Unit to gather information about how post-OIC compliance was tracked. We developed data on the productivity of the OIC Program by obtaining information from IRS on the number of full-time equivalent staff working in the OIC Program and compared this to the number of case closures from the AOIC database. We also interviewed IRS officials regarding any IRS analysis on productivity and reasons for productivity trends.
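To illustrate the productivity measure just described, the following toy calculation divides case closures by full-time equivalent (FTE) staff. The figures are placeholders for illustration, not actual OIC Program data.

```python
# A toy illustration of closures per FTE, the productivity measure
# described above. All figures are placeholders, not IRS data.

def closures_per_fte(case_closures, fte_staff):
    return case_closures / fte_staff

# If closures fall faster than staffing, productivity declines.
for year, closures, ftes in [(2003, 90_000, 900), (2005, 45_000, 650)]:
    print(year, round(closures_per_fte(closures, ftes), 1))
# 2003 100.0
# 2005 69.2
```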
To estimate the extent of offer mills’ participation in the OIC Program, we derived from the AOIC database the number of offers designated as solely to delay collection that also were submitted with power of attorney forms. Also using the AOIC database, we measured how long IRS took to process those cases. We interviewed IRS officials with the OIC Program in Austin, Texas, and in Brookhaven, New York, and officials at the Office of Professional Responsibility (OPR), which investigates practitioner misconduct, in Washington, D.C. We also interviewed officials with OPERA about its work on abuse of the OIC Program. We reviewed reports on potential OIC abuse by IRS and internal IRS guidance on handling suspected cases of practitioner misconduct. We interviewed officials with the Federation of Tax Administrators (FTA), the state of Maryland OIC Program, and the Connecticut Attorney General’s Office and compared their experiences with practitioner and offer mill misconduct with those cited by IRS officials. We selected FTA because its membership includes tax administration officials from states that have OIC programs. An FTA official referred us to the Maryland OIC Program. OPR cited the state of Connecticut’s involvement with investigating offer mills during an interview. Finally, we conducted literature reviews for information about offer mills. To assess how well IRS ensures that taxpayers are provided the right to appeal rejected offers, we analyzed the AOIC database to determine whether these taxpayers were sent the rejection letter notifying them of their appeal right. We reviewed IRS publications containing information about taxpayers’ rights to appeal rejected offers and searched the IRS Web site for similar information. We performed limited testing of the AOIC database to determine whether appropriate entries were being made that ensured that a computer-generated rejection letter with appeals information had been sent to each taxpayer whose offer was rejected during fiscal years 2000 through 2005. We did not contact taxpayers to determine whether they actually received the letters. We interviewed OIC Program officials about the offer appeals process and followed up with Appeals officials, including Appeals staff at the Brookhaven, New York, campus who review and process rejected offers. To determine whether IRS’s regulations on effective tax administration (ETA) were consistent with the IRS Restructuring and Reform Act of 1998 (Restructuring Act), we reviewed the Restructuring Act, its legislative history, OIC regulations that were in place before the Restructuring Act, and the regulations issued to address the Restructuring Act changes. We met with representatives of the IRS Chief Counsel’s Office who were involved in drafting the new and revised regulations on ETA offers. In addition, we reviewed their project files to gather documentation on how the ETA regulations evolved. The files contain documentation, such as internal memorandums, early drafts of the regulations circulated to internal stakeholders, and public comments received after the proposed regulations were issued. In addition, we compared IRS’s internal guidance on ETA and doubt as to collectibility (DATC) and IRS’s regulations on ETA to determine whether they were distinct. We discussed ETA and DATC procedures, guidance, and rules with OIC Program officials and staff in Austin, Texas, and staff in IRS’s centralized processing center in Brookhaven, New York, which processes offer applications.
To gain perspective from some external OIC Program stakeholders on how IRS implemented ETA rules, we interviewed professional tax practitioners and representatives of the National Association of Enrolled Agents (NAEA) and the American Institute of Certified Public Accountants (AICPA). We selected NAEA and AICPA because they had previously testified or commented about IRS’s OIC Program. We also conducted a literature review on ETA. To comment on the legislative proposal requiring partial payments with offer applications, we drew on the results of our work relating to repeat offers and trends in OIC Program performance. Our review was conducted in accordance with generally accepted government auditing standards from February 2005 through February 2006. To examine various measures of timeliness, quality, accessibility, and cost, we obtained a copy of portions of IRS’s AOIC database as of September 30, 2005. The AOIC database contains processing information on offers submitted by taxpayers and related tax liability information from the OIC Program’s inception to the present. The AOIC database is a relational database, and we limited our analysis to selected tables relevant to our objectives. To assess the reliability of the computer-based data provided to us, we conducted interviews with key agency personnel to ascertain the types of program edits and controls used to ensure the accuracy of data entry and data migration into the AOIC database from IRS’s Master File. We also conducted various reliability analyses on data fields used in our analysis and reproduced reports prepared for program officials for their day-to-day management activities. We concluded that data in the AOIC database are sufficiently reliable for purposes of our engagement. We concentrated our OIC Program analyses in two main areas: (1) the length of time it takes IRS to process offers by type of offer disposition (for example, accepted or rejected dispositions) and (2) the number of times taxpayers “repeat” offer submissions when a prior submission is not accepted and the length of time this processing of multiple offers takes. We also developed statistics on offer program inventory levels, the amount and percentages of tax debt compromised, and the number of offers processed under ETA regulations. In addition, we identified the number of offers returned to taxpayers because IRS believed a principal reason for the offer submission was to delay collection activities, and we determined how many of these offers had been prepared by professional practitioners. In general, we reported statistics for the 6 most recent fiscal years beginning in fiscal year 2000. In examining reports IRS prepares from AOIC data, we determined that IRS does not produce offer program statistics in a way that would allow us to address our objectives. For example, IRS’s analyses aggregate offer disposition statistics from both IRS’s Collections function (i.e., the offer program) and its Appeals function. We wanted to separate these data in order to examine the OIC Program’s performance. We separated offer processing time between the Collections and Appeals functions by examining available date fields in the AOIC database and creating our own starting and ending processing dates. For our Collections function start date, we used the earlier of two dates: the date IRS received an offer from a taxpayer (the IRS Received Date) or the date the offer was initially entered into the AOIC database (the Area Office Opening Date).
For our Collections function end date, we used the Area Office Closing Date except for rejected offers. For rejected offers, we checked whether a rejection letter had been generated and, if so, the date on which this occurred. If this date was earlier than the Area Office Closing Date, then we used the rejection letter date. Offers that were still being processed in the Collections function are considered open offers and do not have ending dates. The Appeals function start date was also based on the earlier of two dates: (1) the date in the AOIC database, known as the Sent to Appeals Date, when it was present, or (2) our Collections function ending date plus 30 days when the Sent to Appeals Date was not available or succeeded this date on offers known to have been appealed. The Collections ending date plus 30 days reflects the legal limit on the amount of time given a taxpayer to appeal a rejected offer. The Appeals function ending date was always the official Area Office Closing Date. We also segregated offer disposition types between the Collections and Appeals functions. The AOIC database contains 10 disposition types, of which 3 represent Appeals function dispositions. Offers that are appealed by taxpayers remain open on the AOIC database pending Appeals function disposition decisions. We segregated the dispositions by creating five GAO-derived Collections function dispositions and three Appeals function dispositions. For example, we collapsed all of the offers contained in five of the program’s disposition types, as well as certain offers still open on AOIC, into our “Rejected” offers disposition category, demonstrating that the Collections function had rejected 247,780 offers during the program’s history. These offers were as follows: (1) the 25,054 offers accepted by IRS’s Appeals function, (2) the 43,511 offers where the Appeals function sustained the Collections function, (3) the 42,880 offers rejected by the Collections function without appeal rights, (4) the 116,787 offers rejected by the Collections function where the taxpayer did not exercise appeal rights, (5) the 7,955 offers withdrawn in Appeals, and (6) the 11,593 offers rejected by the Collections function but not yet closed on AOIC pending possible Appeals function activities. Tables 12 and 13 reflect this roll-up and compare other GAO-derived disposition types for the Collections and Appeals functions to IRS’s disposition types. We have also included offers currently open in AOIC to balance offers between the two disposition sets. Because many taxpayers submit more than one offer in an effort to compromise tax liabilities, and because IRS does not track multiple offers from the same taxpayer, we independently developed estimates of the average (1) number of offers taxpayers submitted on the same tax liability, (2) time it took IRS to process all of these offers, and (3) calendar time duration between the date the first in a series of offers was submitted and the date the last in the series was closed. In order to track these multiple offer submissions, we coined the term offer sets. Offer sets may contain one or many offers. We defined an offer set with only one offer as a onetime offer. For offer sets containing two or more offers, we defined the first offer in the set as an initial offer and the second and subsequent offers in the set as repeat offers.
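Before turning to repeat offers, the date derivations described above can be summarized in a short sketch. The field names mirror the AOIC fields mentioned in the text, but the record handling and function names are our illustrative assumptions, not the actual AOIC layout.

```python
from datetime import date, timedelta

# A minimal sketch of the GAO-derived processing dates described above.
# Field handling and names are illustrative, not the actual AOIC layout.

def collections_dates(irs_received, ao_opened, ao_closed, rejection_letter):
    """Return (start, end) for the Collections function; end is None if open."""
    start = min(irs_received, ao_opened)
    if ao_closed is None:
        return start, None  # offer still open, so no ending date
    end = ao_closed
    if rejection_letter is not None and rejection_letter < ao_closed:
        end = rejection_letter  # rejected offers end at the letter date
    return start, end

def appeals_start(sent_to_appeals, collections_end):
    """Earlier of the Sent to Appeals Date or Collections end plus 30 days."""
    fallback = collections_end + timedelta(days=30)  # legal appeal window
    if sent_to_appeals is None or sent_to_appeals > fallback:
        return fallback
    return sent_to_appeals

start, end = collections_dates(date(2004, 3, 1), date(2004, 3, 5),
                               date(2004, 9, 20), date(2004, 9, 1))
print(start, end)               # 2004-03-01 2004-09-01
print(appeals_start(None, end)) # 2004-10-01
```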
An offer set with two or more offers was also known as a repeat offer set. Our criteria for calling a subsequent offer a repeat offer depended on whether tax liability information was available for comparison between two offers. For cases where tax liability information for one or both of two chronological offer dispositions had not been migrated from IRS’s Master File to the AOIC database, a common occurrence when offers were closed as not processable, we set a 1-year time limit for designating the subsequent offer as a repeat offer. Where the tax liability information was available for two offers, we compared it to see if any one tax liability matched. If it did, we called the subsequent offer a repeat offer, regardless of the length of time between offer submissions. Finally, any time an offer that was part of a repeat offer set was accepted, we assumed that offer was the last offer in the offer set. Any subsequent attempt by a taxpayer to compromise the same tax liabilities started a new offer set. We believe a 1-year time limit is reasonable as a criterion for establishing repeat offers because most tax modules are 1 year in length, corresponding to a taxpayer’s annual filing requirement (for example, a tax module for an individual or corporate taxpayer would represent a calendar year period for which they were required to file an income tax return). Taxpayers submitting offers must include all outstanding tax liabilities in the offer submissions, and we believe it is unlikely that, within that 1-year period, a taxpayer who had not successfully compromised a tax liability would have fully paid that liability, incurred a new tax liability, and then attempted to compromise the new liability. The actual number of repeat offers and the average duration of time it takes taxpayers to compromise tax liabilities are estimates because (1) taxpayers may continue to submit offers in the future for current tax liabilities for which prior offers were not accepted, (2) some taxpayers may fully pay outstanding tax liabilities and then immediately incur new liabilities, and (3) some taxpayers filing jointly simultaneously attempt to compromise separate tax liabilities, and it was not always possible to separately identify the two sets of offers. In the first situation, we underestimated the average time it takes to compromise tax liabilities when taxpayers extend that period by making future attempts to compromise their liabilities. In the second situation, we overestimated the number of repeat offers and the average time, but we believe such occurrences are rare. In the third situation, scenarios existed where we could have either underestimated or overestimated the actual number of repeat offers or the average duration times. On balance, we believe the first situation is the most common and that our estimates of the actual number of repeat offers and the average time duration are conservative.
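A condensed sketch of these repeat-offer rules, applied to a pair of chronologically adjacent offers from one taxpayer, may clarify the logic. The record fields are hypothetical stand-ins for the AOIC data used in the analysis.

```python
from datetime import date

# A minimal sketch, assuming the rules described above. Records are
# hypothetical dictionaries with a submission date, an accepted flag,
# and a (possibly empty) set of tax liability modules.

def is_repeat(prev, curr):
    """Decide whether curr continues the offer set begun by prev."""
    if prev["accepted"]:
        return False  # an accepted offer always ends its offer set
    if prev["liabilities"] and curr["liabilities"]:
        # Liability data present for both offers: a repeat if any one
        # tax liability matches, regardless of time between submissions.
        return bool(prev["liabilities"] & curr["liabilities"])
    # Liability data missing for one or both offers (e.g., closed as
    # not processable): apply the 1-year time limit instead.
    return (curr["submitted"] - prev["submitted"]).days <= 365

a = {"submitted": date(2003, 1, 10), "accepted": False,
     "liabilities": {"2001-1040"}}
b = {"submitted": date(2004, 6, 1), "accepted": False,
     "liabilities": {"2001-1040", "2002-1040"}}
print(is_repeat(a, b))  # True: a liability matches, so time does not matter
```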
We also determined how long it took the OIC Program to return offers designated as solely to delay that involved professional practitioners. In addition, we used the AOIC database to estimate how many ETA offers were processed over time. Before October 2005, IRS did not distinguish on the AOIC database between ETA offers and offers accepted based on doubt as to collectibility with special circumstances. These offers were commingled and categorized as offers where an alternative basis was used for compromise. However, an agency official told us that we could use all offers designated as alternative basis offers as a proxy for the number of ETA offers processed by IRS. The agency added a data field beginning in October 2005 to specifically track ETA offers. Furthermore, we calculated the Collections function’s inventory levels for fiscal years 2000 through 2005. In addition, we used the tax liability and offer amount fields in the AOIC database to determine the percentage of tax debt compromised by IRS’s Collections function. In addition to the contact named above, Charlie Daniel, Assistant Director; Evan Gilman; Eric Gorman; Shirley Jones; Susan Mak; Michael Rose; Samuel Scrutchins; and Jennifer Li Wong made key contributions to this report.
Taxpayers unable to fully pay their tax liabilities may apply for an offer in compromise (OIC), an agreement with IRS to pay what they can afford. IRS writes off the rest of the liability. In 2005, IRS accepted over 14,000 offers. Because of concerns about program performance and a new category of offers based on exceptional circumstances, GAO was asked to (1) describe the trends in the program's performance and their causes and (2) determine whether IRS's regulations for exceptional circumstance offers are consistent with statute. GAO examined five program objectives: timeliness, quality, accessibility, compliance, and cost. OIC Program performance has been mixed. Timeliness for taxpayers making one offer improved to 5.8 months in 2005 but stayed constant, at an average of two years, for taxpayers making repeat offers. Quality goals have been met, but IRS does not routinely track compliance and accessibility. Further, cost per offer has increased because IRS has not decreased staffing since fiscal year 2003 in proportion to declines in offers. Improving the program depends on how well IRS management understands the reasons for the program's performance. One step in understanding performance is measuring it. However, IRS does not measure timeliness from the perspective of the taxpayer--for taxpayers with repeat offers IRS measures the time to decide each offer but not the overall time to resolve the taxpayer's liability. IRS lacks compliance and accessibility trend data useful for assessing performance. Another step in understanding performance is setting goals. IRS set numeric goals for timeliness and quality, but IRS's timeliness goals do not have a rationale and are not based on taxpayer needs or other benefits. A third step in understanding performance is analysis. While IRS has done some analyses that led to program changes, IRS has not analyzed the effect of repeat offers on timeliness to determine whether it would be less costly to deal once with a taxpayer rather than have to process repeat offers. IRS also has not analyzed whether the decrease in offers accepted since fiscal year 2003 reflects a decrease in program accessibility, or whether the efforts to improve the compliance of program participants have been successful. IRS's regulations for exceptional circumstance offers, intended for taxpayers who can fully pay, are consistent with statute. However, most exceptional circumstance offers are granted to taxpayers who cannot fully pay. These offers are not meaningfully distinct from the more common offers based on inability to fully pay. The lack of distinction causes unnecessary program complexity and confusion. Taxpayers are faced with the paradoxical process of proving that they can pay their tax liability and then explaining why they cannot.
Under A-76, commercial activities may be converted to or from contractor performance either by direct conversion or by cost comparison. Under direct conversion, specific conditions allow commercial activities to be moved from government or contract performance without a cost comparison study (for example, for activities involving 10 or fewer civilians). Generally, however, commercial functions are to be converted to or from contract performance by cost comparison, whereby the estimated cost of government performance of a commercial activity is compared to the cost of contractor performance in accordance with the principles and procedures set forth in Circular A-76 and the supplemental handbook. As part of this process, the government identifies the work to be performed (described in the performance work statement), prepares an in-house cost estimate based on its most efficient organization, and compares it with the winning offer from the private sector. According to A-76 guidance, an activity currently performed in house is converted to performance by the private sector if the private offer is either 10 percent lower than the direct personnel costs of the in-house cost estimate or $10 million less (over the performance period) than the in-house cost estimate. OMB established this minimum cost differential to ensure that the government would not convert performance for marginal savings. The handbook also provides an administrative appeals process. An eligible appellant must submit an appeal to the agency in writing within 20 days of the date that all supporting documentation is made publicly available. Appeals are supposed to be adjudicated within 30 days after they are received. Under current law, private sector offerors who believe that the agency has not complied with applicable procedures have additional avenues of appeal. Specifically, they may file a bid protest with the General Accounting Office or file an action in a court of competent jurisdiction. Circular A-76 requires agencies to maintain annual inventories of commercial activities performed in house. A similar requirement was included in the 1998 Federal Activities Inventory Reform (FAIR) Act, which directs agencies to develop annual inventories of their positions that are not inherently governmental. The fiscal year 2000 inventory identified approximately 850,000 full-time equivalent commercial-type positions, of which approximately 450,000 were in DOD. OMB has recently indicated that it intends to expand its emphasis on A-76 governmentwide. In a March 9, 2001, memorandum to the heads and acting heads of departments and agencies, the OMB Deputy Director directed agencies to take action in fiscal year 2002 to directly convert or complete public/private competitions of not less than 5 percent of the full-time equivalent positions listed in their FAIR Act inventories. In 1999, DOD began to augment its A-76 program with what it terms strategic sourcing. Strategic sourcing may encompass consolidation, restructuring or reengineering activities, privatization, joint ventures with the private sector, or the termination of obsolete services. Strategic sourcing can involve functions or activities, regardless of whether they are considered inherently governmental, military essential, or commercial. I should add that these actions are recognized in the introduction to the A-76 handbook as being part of a larger body of options, in addition to A-76, that agencies must consider as they contemplate reinventing government operations.
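Returning to the cost-comparison threshold described earlier in this section, the minimum cost differential can be sketched as a simple decision rule. The function and dollar figures are illustrative assumptions, not the A-76 costing methodology itself.

```python
# A minimal sketch of the A-76 minimum cost differential described
# above: the in-house activity converts to private performance only if
# the private offer undercuts the in-house estimate by at least
# 10 percent of direct personnel costs or by at least $10 million over
# the performance period. Names and figures are illustrative.

def convert_to_contractor(private_offer, in_house_estimate,
                          direct_personnel_costs):
    differential = in_house_estimate - private_offer
    return (differential >= 0.10 * direct_personnel_costs
            or differential >= 10_000_000)

# A $93 million offer against a $100 million in-house estimate with
# $60 million in direct personnel costs clears the 10 percent test
# ($7 million > $6 million), so the work would convert.
print(convert_to_contractor(93e6, 100e6, 60e6))  # True
print(convert_to_contractor(98e6, 100e6, 60e6))  # False: marginal savings
```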
Strategic sourcing initially does not involve A-76 competitions between the public and the private sectors, and the Office of the Secretary of Defense and service officials have stressed that strategic sourcing may provide smarter decisions because it determines whether an activity should be performed before deciding who should perform it. However, these officials also emphasized that strategic sourcing is not intended to take the place of A-76 studies and that positions examined under the broader umbrella of strategic sourcing may be subsequently considered for study under A-76. DOD has been the leader among federal agencies in emphasizing A-76 studies. DOD’s use of A-76 waned from the late 1980s to the mid-1990s, then grew substantially beginning in 1995 before falling again from 1999 to the present. DOD is currently emphasizing a combination of A-76 and strategic sourcing. Available information indicates that A-76 studies in civilian agencies have been minimal compared with those carried out in DOD. Unfortunately, no central database exists to provide information on the actual number of studies undertaken. From the late 1970s through the mid-1990s, DOD activities studied approximately 90,000 positions under A-76. However, program controversy and administrative and legislative constraints caused a drop in program emphasis from the late 1980s through 1995. In August 1995, the Deputy Secretary of Defense gave renewed emphasis to the A-76 program when he directed the services to make outsourcing of support activities a priority in an effort to reduce operating costs and free up funds to meet other priority needs. The effort was subsequently incorporated as a major initiative under the then-Secretary’s Defense Reform Initiative, and the program became known as competitive sourcing—in recognition of the fact that either the public or the private sector could win competitions. The number of positions planned for study and the time frames for accomplishing those studies have changed over time in response to difficulties in identifying activities to be studied. In 1997, DOD’s plans called for about 171,000 positions to be studied by the end of fiscal year 2003. In February 1999, we reported that DOD had increased this number to 229,000 but later found that DOD had reduced the number of positions to be studied in the initial years of the program. In August 2000, DOD decreased the total number of positions to be studied under A-76 to about 203,000, added about 42,000 Navy positions for consideration under strategic sourcing, and extended the program to fiscal year 2005. The introduction of strategic sourcing came about as the Navy—which was having difficulty identifying sufficient numbers of positions for study—sought and obtained approval to use this broader approach to help meet its A-76 study goals. In March 2001, DOD officials announced that they had again reduced the number of positions to be studied under A-76 to about 160,000 but increased the number of strategic sourcing positions to 120,000. DOD’s latest targets include strategic sourcing study goals for each of the military services. Tables 1 and 2 show the number of positions Defense components planned to study under A-76 and strategic sourcing as of March 2001. The DOD data shown above indicate that fewer positions are planned for study under both A-76 and strategic sourcing in the out-years compared with those projected before 2001.
To what extent these numbers will change on the basis of recent program direction from OMB for an expanded A-76 program emphasis is yet to be determined. As these numbers changed, so did savings targets. In 1999, for example, DOD projected that its A-76 program would produce $6 billion in cumulative savings from fiscal year 1997 to 2003 and $2.3 billion in net savings each year thereafter. In 2000, DOD projected savings of about $9.2 billion for 1997-2005, with recurring annual net savings of almost $2.8 billion thereafter. Additional savings were to come from strategic sourcing, which was expected to produce nearly $2.5 billion in cumulative savings by 2005 and recurring annual savings of $0.7 billion thereafter. Together, A-76 and strategic sourcing are expected to produce estimated cumulative savings of almost $11.7 billion, with about $3.5 billion in recurring annual net savings. More recent savings estimates have not yet been made available. Most importantly, these projected savings have become more than ambitious goals: when it developed its fiscal year 2000 budget, DOD reprogrammed about $11.2 billion of these anticipated savings into its modernization accounts, spread over the future years’ planning period. Our work has consistently shown that while savings are being achieved by DOD’s A-76 program, it is difficult to determine precisely the magnitude of net savings. Furthermore, savings may be limited in the short term because up-front investment costs associated with conducting and implementing the studies must be absorbed before long-term savings begin to accrue. Several of our reports in recent years have highlighted these issues. We reported in March 2001 that A-76 competitions had reduced estimated costs of Defense activities primarily by reducing the number of positions needed to perform those activities under study. This is true regardless of whether the government’s in-house organization or the private sector wins the competition. Both government and private sector officials with experience in such studies have stated that, in order to be successful in an A-76 competition, they must seek to reduce the number of positions required to perform the function being studied. Related actions may include restructuring and reclassifying positions and using multiskill and multirole employees to complete required tasks. In December 2000, we reported on compliance with a congressional requirement that DOD report specific information on all instances since 1995 in which DOD missions or functions were reviewed under OMB Circular A-76. For the 286 studies for which it had complete information, the Department’s July 2000 report to the Congress largely complied with the reporting requirement. We noted that DOD had reported cost reductions of about 39 percent, yielding an estimated $290 million in savings in fiscal year 1999. We also agreed that individual A-76 studies were producing savings but stressed that savings are difficult to quantify precisely for a number of reasons: Because of an initial lack of DOD guidance on calculating costs, baseline costs were sometimes calculated on the basis of average salaries and authorized personnel levels rather than on actual numbers. DOD’s savings estimates did not take into consideration the costs of conducting the studies and implementing the results, which of course must be offset before net savings begin to accrue. There were significant limitations in the database DOD used to calculate savings.
Savings become more difficult to assess over time as workload requirements change, affecting program costs and the baseline from which savings were initially calculated. Our August 2000 report assessed the extent to which there were cost savings from nine A-76 studies conducted by DOD activities. The data showed that DOD realized savings from seven of the cases, but less than the $290 million that Defense components had initially projected. Each of the cases presented unique circumstances that limited our ability to precisely calculate savings; some suggested lower savings, and others higher savings, than initially identified. In two cases, DOD components had included cost reductions unrelated to the A-76 studies as part of their projected savings. Additionally, baseline cost estimates used to project savings were usually calculated using an average cost of salary and benefits for the number of authorized positions, rather than the actual costs of the positions. The latter calculation would have been more precise. In four of the nine cases, actual personnel levels were less than authorized. While most baseline cost estimates were based largely on personnel costs, up to 15 percent of the costs associated with the government’s most efficient organizations’ plans or the contractors’ offers were not personnel costs. Because these types of costs were not included in the baseline, a comparison of the baseline with the government’s most efficient organization or contractor costs may have resulted in understating cost savings. On the other hand, savings estimates did not reflect study and implementation costs, which reduced savings in the short term. DOD has begun efforts to revise its information systems to better track the estimated and actual costs of activities studied but not to revise previous savings estimates. DOD is also emphasizing the development of standardized baseline cost data to determine initial savings estimates. In practice, however, many of the cost elements that are used in A-76 studies will continue to be estimated because DOD lacks a cost accounting system to measure actual costs. Further, reported savings from A-76 studies will continue to have some element of uncertainty and imprecision and will be difficult to track in the out-years because workload requirements change, affecting program costs and the baseline from which savings are calculated. Given that the Department has reduced operating budgets on the basis of projected savings from A-76 studies, it is important that it have as complete and accurate information as possible on savings, including information on adjustments for up-front investment costs and other changes that may occur over time. In monitoring DOD’s progress in implementing the A-76 program, we have reported on a number of issues that should be considered when expanding emphasis on the A-76 process, either in DOD or at other government agencies. These issues include (1) the time required to complete studies, (2) the costs and other resources needed to conduct and implement studies, (3) the difficulties involved in selecting functions to compete, and (4) the timing of budget reductions in anticipation of projected savings. This last issue is a fundamental one that is directly affected by the first three. Individual A-76 studies have taken longer than initially projected. In launching its A-76 program, some DOD components made overly optimistic assumptions about the amount of time needed to complete the competitions.
For example, the Army projected that it would take 13-21 months to complete studies, depending on their size. The Navy initially projected completing its studies in 12 months. The numbers were subsequently adjusted upward, and the most recent available data indicate that studies take about 24 months for single-function and 27 months for multifunction studies. Once DOD components found that the studies were taking longer than initially projected, they realized that a greater investment of resources would be needed than originally planned to conduct the studies. In August 2000, we reported that DOD had increased its study cost estimates considerably since the previous year and had given greater recognition to the costs of implementing the results of A-76 studies. But we expressed concern that the Department was, in some instances, still likely underestimating those costs. The 2001 President’s budget showed a wide range of projected study costs, from about $1,300 per position studied in the Army to about $3,700 in the Navy. The Army, the Navy, and the Air Force provide their subcomponents $2,000 per position studied. Yet various officials believe these figures underestimate the costs of performing the studies. Officials at one Army major command estimated that their study costs would be at least $7,000 per position. One Navy command estimated its costs at between $8,500 and $9,500 per position. Our own assessment of a sample of completed A-76 studies within the Army, the Navy, the Air Force, and Defense agencies showed that study costs ranged from an average of $364 to $9,000 per position. In addition to study costs, significant costs can be incurred in implementing the results of the competitions. Transition costs include the separation costs for civilian Defense employees who lose their jobs as a result of competitions won by the private sector or when in-house organizations require a smaller civilian workforce. Such separation costs include the costs of voluntary early retirement, voluntary separation incentives, and involuntary separations through reduction-in-force procedures. The President’s Budget for Fiscal Year 2001 included for the first time all Defense components’ estimated costs of implementing A-76 competitions and showed a total of about $1 billion in transition costs resulting from A-76 studies for fiscal years 1997-2005. Selecting and grouping functions and positions to compete can be difficult. Because most services faced growing difficulties in or resistance to finding enough study candidates to meet their A-76 study goals, DOD approved strategic sourcing as a way to complement its A-76 program. The Navy, for instance, had planned to announce 15,000 positions for study under A-76 in fiscal year 1998 but announced only 8,980 (about 60 percent). The following year it planned to announce 20,000 positions but announced 10,807 (about 54 percent). Although DOD’s FAIR Act inventory in 2000 identified commercial functions involving about 450,000 civilian positions, including about 260,000 associated with functions considered potentially eligible for competition, DOD does not expect to study all these functions. It remains to be seen to what extent the Department will significantly increase the number of functions it studies under A-76 in the near future.
Department officials told us that the process identified few new functions and associated positions that could be studied under A-76 and that the increases in positions identified did not automatically translate into potentially large numbers of additional studies. The number of positions that will actually be studied for possible competition may be limited by a number of factors, including the following: Some activities are widely dispersed geographically. Having positions associated with commercial activities that are scattered over many locations may prevent some of them from being grouped for competition. Some work categorized as commercial may not be separable from inherently governmental or exempted work. In some cases, commercial activities classified as subject to competition are in activities that also contain work that is inherently governmental or exempt from competition, and the commercial workload may not always be separable from the workload performed by the exempted positions. Resources to conduct A-76 studies are limited. Officials of several military service commands have told us that they already have aggressive competition programs under way and that they lack sufficient resources and staff to conduct more competition studies in the near future. Even before it developed its FAIR Act inventory, DOD had already established goals for positions that the services and the Defense agencies should study and the savings to be achieved. For the most part, the services and Defense agencies delegated to their components responsibility for determining which functions to study. DOD then fell behind in its initial timetable for initiating and completing A-76 studies. Service officials told us that they had already identified as many competition opportunities as they could to meet savings goals under the A-76 program, and they believed that their capacity to conduct studies beyond those already under way or planned over the next few years was limited. Difficulties encountered in identifying A-76 study candidates, and in launching and completing the studies in the time frames initially projected, along with greater-than-expected costs associated with completing the studies, have led to concerns among various service officials about their ability to meet previously established savings targets. Some Defense officials have also voiced uncertainties over cost estimates and savings associated with strategic sourcing and the lack of a rigorous basis for projecting savings from this effort. Data included in the President’s fiscal year 2001 budget submission indicated that the Navy estimated that study costs and savings generated by strategic sourcing efforts would be virtually the same as those generated by A-76 studies for each position studied. Office of the Secretary of Defense officials have noted there is a wide variation in the types of initiatives that make up strategic sourcing and, consequently, that there can be wide variation in the resultant savings. These uncertainties led us to previously recommend that DOD periodically determine whether savings are being realized in line with the reductions in operating accounts that are based on projected savings. Increasing emphasis on A-76 has served to underscore concerns expressed by both government employees and industry about the process. Federal managers and others have been concerned about organizational turbulence that typically follows the announcement of A-76 studies.
Government workers have been concerned about the impact of competition on their jobs, their opportunity for input into the competitive process, and the lack of parity with industry offerors to appeal A-76 decisions. Industry representatives have complained about the fairness of the process and the lack of a “level playing field” between the government and the private sector in accounting for costs. It appears that everyone involved is concerned about the time required to complete the studies. Amid these concerns over the A-76 process, the Congress enacted section 832 of the National Defense Authorization Act for Fiscal Year 2001. The legislation required the Comptroller General to convene a panel of experts to study the policies and procedures governing the transfer of commercial activities of the federal government from government personnel to a federal contractor. The Panel, which Comptroller General David Walker has elected to chair, includes senior officials from DOD, private industry, federal labor organizations, and OMB. Among other issues, the Panel will be reviewing the A-76 process and implementation of the FAIR Act. The Panel had its first meeting on May 8, 2001, and its first public hearing on June 11. At the hearing, over 40 individuals representing a wide spectrum of perspectives presented their views. The Panel currently plans to hold two additional hearings, on August 8 in Indianapolis, Indiana, and on August 15 in San Antonio, Texas. The hearing in San Antonio will specifically address OMB Circular A-76, focusing on what works and what does not in the use of that process. The hearing in Indianapolis will explore various alternatives to the use of A-76 in making sourcing decisions at the federal, state, and local levels. The Panel is required to report its findings and recommendations to the Congress by May 1, 2002. This concludes my statement. I would be pleased to answer any questions you or other members of the Subcommittee may have at this time. For further contacts regarding this statement, please contact Barry W. Holman at (202) 512-8412 or Marilyn Wasleski at (202) 512-8436. Individuals making key contributions to this statement include Debra McKinney, Stefano Petrucci, Thaddeus Rytel, Nancy Lively, Bill Woods, John Brosnan, and Stephanie May. DOD Competitive Sourcing: Effects of A-76 Studies on Federal Employees’ Employment, Pay, and Benefits Vary (GAO-01-388, Mar. 16, 2001). DOD Competitive Sourcing: Results of A-76 Studies Over the Past 5 Years (GAO-01-20, Dec. 7, 2000). DOD Competitive Sourcing: More Consistency Needed in Identifying Commercial Activities (GAO/NSIAD-00-198, Aug. 11, 2000). DOD Competitive Sourcing: Savings Are Occurring, but Actions Are Needed to Improve Accuracy of Savings Estimates (GAO/NSIAD-00-107, Aug. 8, 2000). DOD Competitive Sourcing: Some Progress, but Continuing Challenges Remain in Meeting Program Goals (GAO/NSIAD-00-106, Aug. 8, 2000). Competitive Contracting: The Understandability of FAIR Act Inventories Was Limited (GAO/GGD-00-68, Apr. 14, 2000). DOD Competitive Sourcing: Potential Impact on Emergency Response Operations at Chemical Storage Facilities Is Minimal (GAO/NSIAD-00-88, Mar. 28, 2000). DOD Competitive Sourcing: Plan Needed to Mitigate Risks in Army Logistics Modernization Program (GAO/NSIAD-00-19, Oct. 4, 1999). DOD Competitive Sourcing: Air Force Reserve Command A-76 Competitions (GAO/NSIAD-99-235R, Sept. 13, 1999).
DOD Competitive Sourcing: Lessons Learned System Could Enhance A-76 Study Process (GAO/NSIAD-99-152, July 21, 1999). Defense Reform Initiative: Organization, Status, and Challenges (GAO/NSIAD-99-87, Apr. 21, 1999). Quadrennial Defense Review: Status of Efforts to Implement Personnel Reductions in the Army Materiel Command (GAO/NSIAD-99-123, Mar. 31, 1999). Defense Reform Initiative: Progress, Opportunities, and Challenges (GAO/T-NSIAD-99-95, Mar. 2, 1999). Force Structure: A-76 Not Applicable to Air Force 38th Engineering Installation Wing Plan (GAO/NSIAD-99-73, Feb. 26, 1999). Future Years Defense Program: How Savings From Reform Initiatives Affect DOD’s 1999-2003 Program (GAO/NSIAD-99-66, Feb. 25, 1999). DOD Competitive Sourcing: Results of Recent Competitions (GAO/NSIAD-99-44, Feb. 23, 1999). DOD Competitive Sourcing: Questions About Goals, Pace, and Risks of Key Reform Initiative (GAO/NSIAD-99-46, Feb. 22, 1999). OMB Circular A-76: Oversight and Implementation Issues (GAO/T-GGD-98-146, June 4, 1998). Quadrennial Defense Review: Some Personnel Cuts and Associated Savings May Not Be Achieved (GAO/NSIAD-98-100, Apr. 30, 1998). Competitive Contracting: Information Related to the Redrafts of the Freedom From Government Competition Act (GAO/GGD/NSIAD-98-167R, Apr. 27, 1998). Defense Outsourcing: Impact on Navy Sea-Shore Rotations (GAO/NSIAD-98-107, Apr. 21, 1998). Defense Infrastructure: Challenges Facing DOD in Implementing Defense Reform Initiatives (GAO/T-NSIAD-98-115, Mar. 18, 1998). Defense Management: Challenges Facing DOD in Implementing Defense Reform Initiatives (GAO/T-NSIAD/AIMD-98-122, Mar. 13, 1998). Base Operations: DOD’s Use of Single Contracts for Multiple Support Services (GAO/NSIAD-98-82, Feb. 27, 1998). Defense Outsourcing: Better Data Needed to Support Overhead Rates for A-76 Studies (GAO/NSIAD-98-62, Feb. 27, 1998). Outsourcing DOD Logistics: Savings Achievable But Defense Science Board’s Projections Are Overstated (GAO/NSIAD-98-48, Dec. 8, 1997). Financial Management: Outsourcing of Finance and Accounting Functions (GAO/AIMD/NSIAD-98-43, Oct. 17, 1997).
This testimony discusses the Department of Defense's (DOD) use of the Office of Management and Budget's Circular A-76, which establishes federal policy for the performance of recurring commercial activities. DOD has been a leader among federal agencies in the use of the A-76 process and at one point planned to use the process to study more than 200,000 positions over several years. However, the number of positions planned for study has changed over time, and the Department recently augmented its A-76 program with what it terms strategic sourcing. DOD has saved money through the A-76 process, primarily by reducing the number of in-house positions. Yet GAO has repeatedly found that it is extremely difficult to measure the precise amount of savings because available data have been limited and inconsistent. The lessons learned from DOD's A-76 program include the following: (1) studies have generally taken longer than initially expected, (2) studies have generally cost more and required more resources than initially projected, (3) finding and selecting functions to compete can be difficult, and (4) making premature budget cuts on the assumption of projected savings can be risky. Both government groups and the private sector have expressed concerns about the fairness, adequacy, costs, and timeliness of the A-76 process.
The Internet is a vast network of interconnected networks that is used by governments, businesses, research institutions, and individuals around the world to communicate, engage in commerce, perform research, educate, and entertain. The Internet became widely accessible to U.S. households by the mid-1990s. Early on, the primary means to access the Internet was a dial-up connection, in which a standard telephone line is used to make an Internet connection. A dial-up connection offers data transmission speeds of up to 56 kilobits per second (Kbps), a kilobit being 1,000 bits. Broadband access to the Internet became available by the late 1990s. Broadband differs from a dial-up connection in certain important ways. First, broadband connections offer a higher-speed Internet connection than dial-up. For example, some broadband connections offer speeds exceeding 1 megabit, or 1 million bits, per second (Mbps) both upstream (data transferred from the consumer to the Internet service provider) and downstream (data transferred from the Internet service provider to the consumer). These higher speeds enable consumers to receive information much faster and thus enable certain applications to be used and content to be accessed that might not be possible with a dial-up connection. The higher transmission speeds that broadband offers cost more than dial-up, and some broadband users pay a premium to obtain very-high-speed service. Second, broadband provides an "always on" connection to the Internet, so users do not need to establish a connection to the Internet service provider each time they want to go online. Although broadband often is referred to as a singular service, it is available in a wide variety of data speeds—ranging from 768 Kbps to greater than 100 Mbps. FCC's current categories for collecting data on the number of broadband subscribers by advertised download and upload speeds range from greater than 200 Kbps but less than 768 Kbps to equal to or greater than 100 Mbps. On August 20, 2009, as part of the proceeding to develop a National Broadband Plan, FCC posted a public request for comment on defining "broadband." Consumers can receive a broadband connection to the Internet through a variety of technologies that offer varying speeds of service, including, but not limited to, the following: Cable modem. Cable television companies first began providing broadband service in the late 1990s over their cable networks. When provided by a cable company, broadband service is referred to as cable modem service. Cable modem service is primarily available in residential areas and enables cable operators to deliver broadband service by using the same coaxial cables that deliver pictures and sound to television sets. Most cable modems are external devices that have two connections, one to the cable wall outlet and the other to a computer or router. Although the speed of service varies with many factors, download speeds of up to 6 Mbps are typical, and cable providers are developing even higher-speed services. DSL. Local telephone companies provide digital subscriber line (DSL) service, another form of broadband service, over their telephone networks on capacity unused by traditional voice service. To provide DSL service, telephone companies must install equipment in their facilities, install or provide DSL modems and other equipment at customers' premises, and remove devices on phone lines that may cause interference.
Most residential customers receive older, asymmetric DSL (ADSL) service with download speeds of 1.5 Mbps to 3 Mbps. ADSL technology can achieve speeds of up to 8 Mbps over short distances, and newer DSL technologies can support services with much higher download speeds. Fiber. This technology, also known as fiber optic, converts electrical signals carrying data to light and sends the light through transparent glass fibers smaller than the diameter of a human hair. Fiber optic systems can transmit data at speeds far exceeding current DSL or cable modem speeds, reaching tens of gigabits per second. Fiber optic technology may be provided in several ways, including fiber to a customer's home or business or to a location somewhere between the provider's facilities and the customer. In the latter case, the last part of the connection to the customer's premises may be provided over cable, copper loop, or radio technology. Such hybrid arrangements may be less costly than providing fiber all the way to the customer's premises, but they generally cannot achieve the high transmission speed of a full fiber-to-the-premises connection. Satellite. Three providers currently offer broadband service via satellite in the United States. These providers use geostationary satellites that orbit in a fixed position above the equator and wirelessly transmit and receive data directly to and from subscribers. Satellite companies provide transmission from the Internet to the user's computer and from the user's computer to the Internet, eliminating the need for a telephone or cable connection. Typically a consumer can expect to receive (download) at a speed of about 1 Mbps and send (upload) at a speed of about 200 Kbps. Transmission of data via satellite introduces a slight lag, typically one-quarter to three-fourths of a second, rendering this service less suitable for certain Internet applications, such as videoconferencing. While satellite broadband service may be available throughout the country, its use requires a clear line of sight between the customer's antenna and the southern sky, and both the equipment necessary for service and the recurring monthly fees are generally higher than for most other broadband transmission modes. Wireless. Land-based, or terrestrial, wireless broadband connects a home or business to the Internet using a radio link. Some wireless services are provided over unlicensed radio spectrum and others over spectrum that has been licensed to particular companies. In licensed bands, some companies are offering fixed wireless broadband throughout cities. Also, mobile telephone carriers—such as the large companies that provide traditional cell phone service—have begun offering broadband mobile wireless Internet service over licensed spectrum, a service that allows subscribers to access the Internet with their mobile phones or laptops in areas throughout cities where their provider supports the service. A variety of broadband access technologies and services also are provided on unlicensed spectrum—that is, spectrum that is not specifically under license for a particular provider's network. For example, wireless Internet service providers may offer broadband access in particular areas by establishing a network of subscriber stations, each with its own antenna that relays signals throughout a neighborhood and has a common interface to the Internet.
Subscribers place necessary reception equipment outside their homes that transmits and receives signals from the nearest antenna. Also, wireless fidelity (Wi-Fi) networks—which provide broadband service in so-called hot spots, or areas within a radius of up to 300 feet—can be found in cafes, hotels, airports, and offices. Such networks generally use a short-range technology that provides speeds of up to 54 Mbps. Some technologies, such as Worldwide Interoperability for Microwave Access (known as WiMAX), can operate on either licensed or unlicensed bands and can provide broadband service up to approximately 30 miles. FCC has primary responsibility for regulating broadband. Section 706 of the Telecommunications Act of 1996 directs FCC to encourage the deployment of advanced telecommunications capability, which includes broadband, to all Americans. Under this authority, FCC has to date established a minimal regulatory environment for broadband Internet access services. In the past, FCC has stated that less regulation has encouraged providers to invest in broadband infrastructure. The Communications Act, as amended, allows FCC to classify services as telecommunications services or information services, the latter being subject to fewer regulatory restrictions. FCC, through a number of proceedings, has classified broadband Internet access (regardless of the platform) as an information service. FCC does not have explicit statutory authority to regulate the provision of information services; however, it has the authority to impose regulations under what is termed its ancillary jurisdiction to regulate services that are reasonably related to its existing statutory authority. FCC also has the authority to adopt broadband regulations to ensure that broadband providers are capable of providing authorized surveillance to law enforcement agencies. As part of its responsibilities, FCC has periodically issued a report to Congress on the status of advanced telecommunications capability in the United States, including the quality of broadband data. To assist in the preparation of this report, in 2000 FCC implemented the previously described broadband reporting form, a semiannual reporting requirement for facilities-based broadband Internet service providers. In November 2004, FCC modified its rules on filing this information, and the revised rules went into effect for the companies' second filing in 2005. Specifically, FCC removed existing reporting thresholds, and all companies were required to report their total state subscribership by technology. In 2006, we reported that the approach FCC then used to collect data on broadband deployment, which counted broadband service providers with subscribers at the ZIP code level, resulted in inadequate information about broadband deployment. Subsequent to our recommendation, in March 2008, FCC acted to increase the precision and quality of its broadband data by revising its methodology and requiring that broadband providers report the number of broadband connections in service by census tract. In addition to FCC's data collection effort using its broadband reporting form, the Broadband Data Improvement Act calls for additional actions to improve the quality of data available on broadband deployment.
Among other things, the act directs FCC to (1) periodically survey consumers to collect information on the types of technologies used by consumers to access the Internet, the applications or devices used in conjunction with broadband service, and the actual connection speeds of users; (2) collect information on reasons why consumers have not subscribed to broadband services; (3) determine certain demographic data for geographical areas not served by any provider of advanced telecommunications capability (i.e., areas where broadband has not yet been deployed); and (4) provide information on the extent of broadband service capability, including the speed and price of broadband service, in a total of 75 communities in at least 25 countries. FTC also has regulatory jurisdiction over broadband services with respect to competition and consumer protection issues. FTC's jurisdiction over broadband services comes chiefly from its statutory mandate to prevent "unfair methods of competition" and "unfair or deceptive acts or practices in or affecting commerce" under FTC's enabling legislation, the FTC Act. Although this authority is very broad, certain limited market sectors are expressly excluded from FTC's enforcement authority. In particular, FTC's enforcement authority does not reach "common carriers subject to the Communications Act of 1934," as amended. However, since most broadband Internet services are not provided on a common carrier basis, they are generally part of the larger economy subject to FTC's general competition and consumer protection authority with regard to methods, acts, or practices in or affecting commerce. FTC has, where appropriate, investigated and brought enforcement actions in matters involving access to content via broadband and other Internet access services. Additionally, FTC has brought a variety of cases against Internet service providers that have engaged in allegedly deceptive marketing and billing practices. Two other federal agencies have responsibility for telecommunications policies. The Office of Science and Technology Policy (OSTP) within the Executive Office of the President has a broad mandate to advise the President and the federal government on the effects of science and technology on domestic and international affairs and has led interagency efforts to develop science and technology policies and budgets. NTIA is the President's principal telecommunications and information adviser and works with other executive branch agencies to develop the Administration's telecommunications policies. Although there are limitations that we discuss later, consumers interested in broadband service can generally contact providers or search provider Web sites to determine the availability, advertised price, and advertised speed of broadband service in their area. For example, consumers can go to att.com or timewarnercable.com and enter their street address to learn about the availability of broadband service at their address, including price and advertised speeds. Each Web site also provides a phone number that consumers can use to reach a customer service representative to obtain information on availability, price, and advertised speeds of service. Consumers can then make their own comparisons of these prices and advertised speeds. In addition, third parties provide consumer Web sites, such as dslreports.com, that assemble this information for consumers to review.
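Because providers publish both a price and an advertised speed, a consumer's comparison often reduces to simple arithmetic. The short Python sketch below, which uses entirely hypothetical tiers and prices rather than any provider's actual offerings, shows two derived figures a consumer might compute: the time to download a file of a given size and the price per megabit per second, a normalization some stakeholders propose later in this report.

# Minimal sketch: comparing hypothetical advertised tiers.
# Tiers, prices, and file size are illustrative assumptions only.
tiers = [
    ("dial-up",   0.056, 20.00),  # (name, advertised Mbps, monthly price in dollars)
    ("basic DSL", 1.5,   30.00),
    ("cable",     6.0,   45.00),
]
FILE_MB = 5  # a 5-megabyte file is 40 megabits

for name, mbps, price in tiers:
    download_seconds = (FILE_MB * 8) / mbps  # megabits divided by megabits per second
    price_per_mbps = price / mbps            # one way to normalize price across tiers
    print(f"{name:10s} {download_seconds:8.1f} s per {FILE_MB} MB file, "
          f"${price_per_mbps:6.2f} per Mbps")

At 56 Kbps the 5-megabyte file takes roughly 12 minutes; at 6 Mbps it takes under 7 seconds, which is the practical difference between dial-up and broadband described earlier.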
However, actual delivered speeds depend on multiple factors, such as the equipment of the consumer, the applications in use, and Internet traffic, and may not always match advertised speeds or the theoretical maximum speeds stated by the provider. Consequently, tools are available to consumers to measure actual delivered speed. Consumers with broadband service have access to their actual delivered speeds through speed tests from broadband provider Web sites and third parties. Speed tests generally measure the "last mile" speed (download and upload) of the consumer's connection. Some third-party Web sites also provide information on actual delivered speeds of service and allow consumers to compare speeds. For example, speedtest.net allows individuals to compare their speed with that of other consumers by provider or in a set geographic region. Some states have also completed broadband mapping efforts that provide consumers with information on broadband performance, including availability and advertised speed. We previously reported that 12 states had mapped broadband deployment, and 2 of these states, California and Massachusetts, had mapped both the speed and availability of broadband and placed the information on their states' Web sites. In its 2008 report, California also provided information on average delivered upload and download speeds aggregated throughout the state and advertised residential speeds by price. The stakeholders we interviewed told us that FCC's broadband data, collected through its broadband reporting form, constitute the primary data source generally used to measure performance and make comparisons across various segments of the United States, although there are limitations, which we discuss later. The Commission has tracked broadband subscribership and deployment since 2000 through its broadband reporting form. As noted above, we reported in 2006 that the ZIP-code-level approach the Commission then used resulted in inadequate information about broadband deployment. To improve this information, in 2008, the Commission revised the semiannual reporting requirements of the broadband reporting form. The Commission now requires most broadband providers to file subscribership information by census tract, including the number of subscribers by technology, speed tier, and business/residential connection. In addition, mobile wireless service providers are now required to report the number of connections in (1) individual states, (2) the census tracts that best represent their broadband service footprint, and (3) in a separate category, the number of subscribers whose device and subscription permit them access to the lawful Internet content of their choice. These changes are expected to result in data that are more detailed than what was previously collected. The first round of data filings under the new requirements was due on March 16, 2009. As of September 2009, FCC staff was still in the process of analyzing the information. Stakeholders also identified the Pew Internet & American Life Project's reports on home broadband adoption as a source for measuring adoption and making comparisons across the United States. The results in the reports are based on data from approximately 2,300 telephone interviews (on both landline and cellular telephones) conducted by Princeton Survey Research International over the course of a month.
The 2009 report included the following information that can be used to compare rural and nonrural areas: broadband adoption, broadband connection type, and, when applicable, reason for not having broadband access or Internet access. Through our literature review and interviews with stakeholders, we focused on 10 performance measures often used by industry, government, and other stakeholders to make international comparisons of broadband service, as summarized below (limitations of these measures are discussed later). These measures fall into two general categories: (1) broadband-specific measures and (2) more general measures that cover a wide array of information and communications technology (ICT). The broadband-specific rankings measure a nation's broadband performance by focusing on the availability, penetration (or adoption), and quality of broadband in each country, and include those listed below (see table 1 for the U.S. ranking for each). Broadband Adoption Index. The Phoenix Center for Advanced Legal and Economic Public Policy Studies recently developed the Broadband Adoption Index (BAI), which proposes to compare the actual value that a society derives from broadband usage with that country's target level for adopting various broadband technologies based on maximizing societal well-being. These targets vary by technology, demographic group, and country. The index does not include an overall ranking of countries based on broadband performance, because each country has its own unique set of adoption targets. Broadband Quality Score. The Oxford Saïd Business School in Oxford, United Kingdom (UK), in conjunction with the University of Oviedo in Oviedo, Spain, and Cisco Systems, Inc., created the Broadband Quality Score (BQS) in September 2008 to highlight each representative nation's ability to benefit from next-generation Web applications and services. According to the study, to establish broadband leadership, countries must focus on broadband availability, penetration, and quality. Broadband Subscribers per 100 Inhabitants. OECD produces many broadband-related measures annually on its broadband Web site. According to FCC and many of the stakeholders we interviewed, one of the most widely reported figures on broadband performance is OECD's count of broadband subscribers per 100 inhabitants by technology. OECD also collects comparative data from its 30 member countries on multiple broadband measures such as penetration, usage, coverage, prices, services and speeds, and choice and competition. However, unlike other stakeholders, OECD does not aggregate its data into a composite indicator of national broadband performance. Broadband Performance Index. The European Commission recently implemented the Broadband Performance Index (BPI), which measures and benchmarks the overall broadband performance of European Union member states based on a range of factors, which could include speeds, rural coverage, affordability, innovation, and other socioeconomic dimensions. In particular, the BPI ranks the EU-27 countries plus Norway in terms of supply and demand factors that affect the penetration and use of broadband. ITIF broadband rankings. For its broadband rankings, the Information Technology and Innovation Foundation (ITIF) measures three primary broadband indicators, household penetration (rather than subscribers per capita), average speed, and price, to rank the broadband performance of OECD nations.
ITIF notes the importance of non-policy factors on a nation's broadband performance, including demographic, economic, and broadband supply variables. In contrast to broadband-specific rankings, the other performance measures we identified were based on each country's development and use of ICT. These rankings are more general, focusing on the larger picture of how ICT usage, infrastructure, and skills can affect a country's economic growth. According to an official at the Technology Policy Institute (TPI), broadband is but one component in the makeup of a country's ICT landscape, as ICT encompasses Internet usage along with other forms of telecommunications. According to FCC, these various measures demonstrate the value of understanding the broader context when making comparisons regarding broadband deployment and adoption. Examples of ICT-specific rankings include the following: Connectivity Scorecard. The Dean of the Haskayne School of Business at the University of Calgary in Calgary, Canada, worked in collaboration with Nokia Siemens Networks and LECG (a global services and consulting firm) to release the first version of the Connectivity Scorecard in 2008. The scorecard measures the impact of ICT on economic growth in three key areas of society—the consumer sector, the business sector, and the government sector. The report presents separate sets of rankings for "innovation-driven economies" and "resource- and efficiency-driven economies" while specifically focusing on each country's ICT infrastructure and usage. E-readiness Ranking. The Economist Intelligence Unit (EIU) is the business information arm of The Economist Group, publisher of The Economist magazine. The EIU produces an annual E-readiness Ranking, which measures the quality of a country's ICT infrastructure as well as the ability of its consumers, businesses, and government to use ICT to their benefit. The EIU makes this assessment by specifically measuring a country's connectivity and technology infrastructure, business environment, social and cultural environment, legal environment, government policy and vision, and consumer and business adoption. Overall, more than 100 separate qualitative and quantitative criteria are considered. Networked Readiness Index. The World Economic Forum, in cooperation with INSEAD international business school's eLab research center and Cisco Systems, Inc., produced the Networked Readiness Index (NRI). The NRI is used to assess the extent to which different economies benefit from the latest ICT advances based on their ICT environment, readiness, and usage while taking into account the key roles played by individuals, businesses, and governments. The NRI covers 134 economies worldwide and accounts for nearly 70 factors. International Communications Market. In its 2008 report, Ofcom, the regulator for the UK communications industry, described developments in international communications markets, including information on broadband availability and usage. In the report, Ofcom aimed to provide statistically driven international comparative data for the UK communications sector by examining trend data from 2002 to 2007 on how various countries' industries, consumers, and regulatory landscapes affect their communication markets. ICT Development Index.
The International Telecommunication Union (ITU), a United Nations agency, developed the ICT Development Index (IDI), which measures the development of ICT, the level of advancement of ICT, and the development potential of ICT in more than 150 countries worldwide, comparing their progress between 2002 and 2007. The purpose of the index is to track the global digital divide and to measure each country's progress toward becoming an "information society." The primary index measures ICT infrastructure/access, use, and skills, while a separate index was created to capture the price of ICT relative to a country's income. Eight of the 10 performance measures listed above are "composite indexes," i.e., combinations of measures that attempt to account for and normalize a variety of factors, such as demographic, economic, and geographic differences among countries, which according to many of the stakeholders we spoke with can affect broadband deployment and penetration. Several of the stakeholders identified advantages to using composite indexes in making international comparisons. Officials from the European Commission reported that composite indexes are a useful tool to summarize multidimensional issues, such as the socioeconomic differences among countries, that cannot be captured by a single indicator. According to ITU, compared with single indicators, composite indexes allow grouping several key performance indicators into one figure that captures a variety of information and provides a more comprehensive picture. While the various indexes differ on which demographic, economic, and geographic factors play a greater role in the supply and demand of broadband, income, age, education, population density, gross domestic product (GDP) per capita, and intermodal competition are generally considered important. According to ITIF, non-policy factors, such as demographic, economic, and broadband supply variables, explain about three-quarters of the differences among nations' broadband performance in international rankings. The determination of which factors to include or exclude in a composite index can greatly affect a nation's ranking in a report, as demonstrated by the fact that the United States' broadband and ICT rankings vary greatly by study, as shown in table 1. Even though consumers have access to broadband performance measures, stakeholders told us that measures of price, actual delivered speed, and service reliability have limitations that may affect their usefulness for consumers: Price. Stakeholders told us the available pricing measures for consumers are limited. For example, officials from the Consumer Federation of America and the Pew Internet & American Life Project told us the lack of a comprehensive and consistent measure from the government for consumers to compare prices from providers was a limitation. They added that improved measures of prices would help consumers make more informed decisions about broadband services. Although FCC has open proceedings on requiring providers to include measures of price in the broadband reporting form, it currently does not collect this information. Actual delivered speed. Stakeholders also identified limitations regarding the speed tests consumers use to measure actual delivered speeds.
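Speed tests of the kind at issue here generally time the transfer of a known payload between the consumer's machine and a test server and convert the elapsed time into a bits-per-second figure. The minimal Python sketch below illustrates the basic calculation; the URL is a placeholder rather than a real test server, and a practical test would repeat the measurement against nearby servers.

import time
import urllib.request

TEST_URL = "http://example.com/testfile.bin"  # hypothetical payload, not a real test server

def measure_download_mbps(url):
    # Time the transfer of the payload and convert bytes and seconds
    # into megabits per second.
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        data = response.read()
    elapsed = time.monotonic() - start
    return (len(data) * 8) / elapsed / 1_000_000

# A single run is only a snapshot; as the limitations described below note,
# server distance, congestion, and time of day all move the result.
print(f"Measured download speed: {measure_download_mbps(TEST_URL):.2f} Mbps")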
A representative from Akamai, a company that handles approximately 15 to 20 percent of all Internet traffic worldwide through its global server network, said one problem with speed tests is that the result can be significantly affected by the location of the server that is used to test the speed; the farther away the server, the less accurate the result. Many other factors can also affect a user’s speed of service, such as congestion on the network, time of day, and other applications that the user may have open on the computer when testing. NTIA officials told us that the speed tests are not able to determine the Internet traffic congestion points, if any, along the chain of networks. An official from the Pew Internet & American Life Project told us the results of the speed tests are not verified by other parties. He also explained that some third-party Web sites that attempt to compare actual delivered speeds have limited numbers of respondents and do not have an independent party verify the results, a fact that decreases the utility of the information for making comparisons. Finally, an official from the Information Technology and Innovation Foundation said the lack of comprehensive data for consumers to compare actual delivered speeds from providers was a limitation for consumers in comparing service options and policy makers in monitoring broadband. Actual delivered speed can be an important measure for consumers because it can determine whether or not a connection can be used to originate and receive high-quality voice, data, graphics, and video. FCC has open proceedings on requiring providers to report actual delivered speeds on the broadband reporting form, but it currently does not collect this information. While broadband connection speeds that customers experience are generally not identical to the advertised speeds or theoretical maximums offered by the broadband provider, there is some evidence that consumers are not focused on this issue. Despite access to the tools to measure actual speed, one study found that few people actually know the speed of their broadband connections. In its report titled “Home Broadband Adoption 2006,” the Pew Internet & American Life Project reported that 81 percent of broadband users did not know their home connection speed. In addition, the federal government has received relatively few complaints regarding broadband speed. From February 1, 2008, through May 12, 2009, FCC reported receiving about 624,000 informal complaints, of which only 157 were related to broadband speed. Further, FTC reported receiving approximately 147 complaints that could be related to broadband speeds from January 2005 through June 19, 2009. According to some stakeholders, such as the Information Technology and Innovation Foundation, consumers appear more concerned with their end user experience, such as the ability to complete transactions or use their applications. Service reliability. Some stakeholders we contacted, including BroadbandCensus.com, IEEE (previously known as the Institute of Electrical and Electronics Engineers), the Internet Engineering Task Force, Akamai, an economist from the Massachusetts Institute of Technology (MIT), NTIA, and Wireless Internet Service Provider Association (WISPA) are concerned that there is no measure for consumers that addresses service reliability. 
A service reliability measure would provide information to consumers on factors such as transmission quality, which affects perceived speed and could be useful to consumers in comparing the reliability of broadband services. According to an official from Akamai, service quality is the most difficult performance measure to define, measure, and relay to a consumer. While consumers have measures of price, advertised speed, and actual delivered speed to make decisions regarding broadband service, some stakeholders suggested improved measures of price and actual delivered speeds for consumers, as shown in table 2. As shown in table 3, stakeholders identified arguments for and against the proposed measures. It should be noted that while federal and state agencies and public/private partnerships, academicians and think tanks, consumer advocacy groups, and trade and industry groups identified arguments for and against the proposed measures, broadband providers generally only provided arguments against the proposed measures. Thus, while stakeholders identified multiple alternatives, they differed on the need for FCC to develop additional reporting requirements to measure price, average actual delivered speed, and service reliability as follows: Consumer advocacy groups and academicians and representatives from think tanks generally believed there was a need for improved information on price and actual delivered speeds to make comparisons and good decisions about service. These stakeholders preferred that FCC require broadband providers to report price per megabit per second and the averaged actual delivered speed of last-mile connections (from the home to the first provider node or aggregation point) to provide more consistent measures for consumers to make comparisons. These stakeholders generally believed that calculating price per megabit should be done using the published, stand-alone nonpromotional, noncontractual price. Some suggested providing an average price by speed tier, while others suggested providing the lowest and highest prices by speed tier. Finally, some consumer advocacy groups and academicians and representatives from think tanks also favored a measure on service reliability to provide consumers with information on the quality of their connections. In contrast, broadband providers and trade and industry groups generally did not perceive a need for additional broadband measures because, in their opinion, price and speed information is readily available from providers and third-party sources. According to these stakeholders, additional reporting requirements would be an intrusion into a market that is working, as evidenced by falling prices for increased speeds. They added that additional reporting requirements would be an impediment to investment in infrastructure, as more resources would need to be devoted to data collection. These stakeholders also reported that price per megabit and the average actual delivered speed are difficult to measure (as previously shown in table 3), and that FCC is not likely to report the information in a timely fashion. For example, in the past, it has taken FCC close to a year to report the data from the broadband reporting form once it has been submitted by broadband providers. While officials at federal and state agencies and public-private partnerships generally said more information is good, there were mixed opinions on the need for FCC to require additional broadband measures. 
None of the federal agencies we interviewed provided an opinion; an official with the California Public Utilities Commission was uncertain whether additional requirements were needed because similar information is already available to the public; and of the two public/private partnerships interviewed, one favored additional broadband measures and one opposed them. Finally, all stakeholder groups generally noted that FCC's efforts to develop periodic surveys, per the Broadband Data Improvement Act, and a voluntary registry for consumers to report information about their broadband service could be used to collect and disseminate price and speed information for consumer use. However, stakeholders also cautioned that periodic consumer surveys and a voluntary registry may not provide reliable information, because consumers are not informed enough about the price and speed of their broadband service to report accurate information, a limitation that should be taken into consideration when reviewing the results. Additionally, consumers may not take the time to enter their information in a registry, as current voluntary registries for broadband data sponsored by third parties are sparsely populated. Despite FCC's efforts to improve the data collected through its broadband reporting form, comparisons of broadband service across various segments of the country still have the following limitations that diminish their usefulness in informing policy and investment decisions: While FCC requires most broadband providers to report broadband subscribership on the broadband reporting form, it does not have a reporting requirement for these providers to report broadband availability. Additionally, although the majority of those we interviewed cited the change from reporting by ZIP codes to census tracts as an improvement, some said the data still do not provide enough granularity to track subscribership in tribal lands or rural areas. In fact, according to FCC's report Bringing Broadband to Rural America: Report on a Rural Broadband Strategy, there are no accurate data on broadband deployment in rural America, including where broadband facilities are deployed, prices, speeds, and the number of subscribers. FCC also does not require broadband providers to report price information for broadband services on its broadband reporting form, so it is difficult to measure how price varies across various segments of the country. The Commission has open proceedings concerning whether and how it could collect price information for broadband services. For example, the Commission sought comment on requiring providers to report, for each state or each census tract in which they offer service, the monthly price the provider charges for stand-alone broadband service in each of the speed tiers used for the broadband reporting form, potential alternatives, and whether and in what form the Commission should use the reported service price information. Similarly, FCC does not require broadband providers to include information on actual broadband connection speeds experienced by consumers, although the data from the revised broadband reporting form will provide information on the number of connections by advertised speed. As previously mentioned, actual delivered speed can determine the applications that can be run by consumers and could be useful in comparing broadband service across various segments of the country.
The Commission also has open proceedings concerning how it might require broadband service providers to report actual broadband connection speeds, and any alternative means, in addition to or other than requiring such service provider reporting. Some stakeholders noted that FCC may overestimate the number of wireless broadband users. FCC's reporting requirement for mobile wireless broadband service providers collects data on the number of terrestrial mobile wireless subscribers whose subscription and device allow them to access the Internet content of their choice, not the number of consumers actually using broadband on the device. According to a Vice President and Senior Fellow at the Technology Policy Institute, it is unlikely that all persons whose subscription and device allow them to access the Internet actually use the service. As a result, counts of the number of terrestrial mobile wireless subscribers whose subscription and device allow them to access the Internet content of their choice may overestimate the number of wireless broadband users. However, other stakeholders, such as an official with the Rural Utilities Service, thought the reporting standard would produce accurate results, as they thought most consumers who paid for the service would use it. Stakeholders we spoke with generally characterized mobile wireless as a complement to, and not a substitute for, fixed wireline service. They added that this may change as the technology improves over time. Stakeholders also generally agreed that the mobile wireless counts should be kept separate from fixed wireline counts when determining deployment and availability. Stakeholders also identified limitations with the Pew Internet & American Life Project data. While the survey collects information on cost, speed, availability, and usage, the data are limited because the sample size lacks the granularity needed for making comparisons at the state or regional level. Despite the concerns about FCC's data collected through the broadband reporting form, several stakeholders said they found the data useful. According to one academic expert, FCC's broadband data are the best publicly available data on the geographic dispersion of broadband services across the United States. In addition, an official with a consumer advocacy organization said FCC's changes to the broadband data collection struck the right balance between the need for detailed subscribership data and the burden to providers of gathering such information by choosing the census tract as the geographic unit for data collection. To address the limitations in broadband data, recently enacted legislation requires the Secretary of Commerce to obtain more complete data on broadband availability. The Broadband Data Improvement Act requires the Secretary of Commerce to establish a grant program for multiple purposes, including collection of state-level broadband data. The American Recovery and Reinvestment Act of 2009 requires NTIA to establish a comprehensive nationwide inventory map of existing broadband service capability and availability in the United States that depicts the geographic extent to which broadband service is deployed and available from a commercial provider or public provider throughout each state. By February 17, 2011, NTIA must make the national inventory map available online to the public in a form that is interactive and searchable.
The Recovery Act provides up to $350 million, pursuant to the Broadband Data Improvement Act, for developing and maintaining the national broadband inventory map. NTIA has used the grant-making authority provided under the Broadband Data Improvement Act to establish the State Broadband Data and Development Grant Program. Through this program, NTIA has solicited grant applications from states for projects designed to collect data, develop state maps, conduct state planning efforts, and deliver data to NTIA for the purposes of developing the national broadband map. As of September 9, 2009, NTIA had received applications representing all 50 states, 5 territories, and the District of Columbia. NTIA is currently reviewing the applications and plans to announce funding decisions beginning in early fall 2009. Applicants must demonstrate that they have the ability to provide a substantially complete set of all broadband mapping data on or before February 1, 2010, and to complete such data collection by March 1, 2010. NTIA officials told us they are working closely with FCC regarding the development of the map. As part of its efforts, NTIA is requiring awardees under the State Broadband Data and Development Grant Program to provide, among other things, the following information: for each facilities-based provider of broadband service, a list of all census blocks of 2 square miles or smaller in which broadband service is available in the provider’s service area; for census blocks of greater than 2 square miles, for each facilities-based provider of broadband service, a list of all street segments in the census block in which broadband service is available in such provider’s service area; for wireless providers, geographical information system compatible polygonal shape files depicting areas in which broadband service is available; technology type of service provided by census block, street segment, or shape file area, as applicable; maximum advertised speed available across each service area or local franchise area, by metropolitan or rural statistical area; actual delivered speed that can be consistently achieved during expected periods of heavy network usage by census block, or street segment, as applicable; and middle-mile connection points. Though the program does not require it, awardees may satisfy program requirements by providing address-level data. Awardees may also provide last-mile connection points, if available. Identification of a provider’s name and its availability/speed at a particular address is considered confidential. However, identification of a service provider’s specific service area, or “footprint,” at the census block or street segment level is not considered confidential and will be displayed on the national broadband map. The initial period of performance for awards under the program was 5 years from the date of the award. However, on September 10, 2009, NTIA announced that it will fund the mapping and data collection efforts for 2 years from the date of the award and will assess lessons learned, determine best practices, and investigate opportunities for improved data collection prior to obligating funding for subsequent years. In the notice of funds availability for the State Broadband Data and Development Grant Program, NTIA noted that it reserved the right to request that FCC exercise its authority to compel any service provider subject to its jurisdiction to provide data. 
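To illustrate the kind of data assembly the grant program contemplates, the Python sketch below aggregates hypothetical per-provider availability records into a per-census-block summary of the sort a mapping effort might display. The records, block identifiers, technologies, and speeds are invented for illustration, and provider names are withheld from the output to mirror the confidentiality treatment described above.

from collections import defaultdict

# Hypothetical records: (census_block_id, provider, technology, max advertised Mbps)
records = [
    ("480019501001", "Provider A", "DSL",   3.0),
    ("480019501001", "Provider B", "cable", 6.0),
    ("480019501002", "Provider B", "cable", 6.0),
]

blocks = defaultdict(list)
for block, provider, tech, mbps in records:
    # Provider identity is dropped here: a provider's presence at a
    # particular address is confidential, while its footprint is not.
    blocks[block].append((tech, mbps))

for block, offerings in sorted(blocks.items()):
    technologies = sorted({tech for tech, _ in offerings})
    top_speed = max(mbps for _, mbps in offerings)
    print(f"block {block}: {len(offerings)} provider(s), "
          f"technologies {technologies}, max advertised {top_speed} Mbps")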
NTIA also explained that, to the extent possible, the service areas of individual providers will be aggregated with those of other providers of the same technology type. According to NTIA officials, this determination was based on its review of the comments, an examination of mapping methodologies employed at the state level, and consultation with FCC. Stakeholders generally agreed that the national broadband inventory map would help supplement gaps in FCC's broadband data by providing detailed data on availability and subscribership across the country. For example, a Pew Internet & American Life Project official told us that broadband mapping has the most potential for providing the granular and accurate information required to make comparisons across the country. Several stakeholders also explained that in order for the national broadband map to be effective, NTIA needs to develop data collection standards to help ensure that the data collected by each state are comparable across states. Some stakeholders also stressed the need for collecting demand-side data (desire for service or usage). Despite the consensus among stakeholders regarding the potential benefits of broadband mapping, there are some concerns about the effort. We found that NTIA did not provide guidance on how to calculate the actual delivered speed that can be consistently achieved during expected periods of heavy network usage at the address. For example, there is no guidance on the number of speed measurements that must be taken or a definition of heavy network usage. We have previously reported that consistency—the extent to which data are collected using the same procedures—is a key dimension of data quality and a key attribute of a successful performance measure. NTIA officials told us they chose not to provide this guidance because each provider may have a different method for measuring speed, and they did not want to prescribe a standard method, given the multiple technologies used. However, this could result in inconsistent measurements across grantees, limiting the effectiveness of the mapping effort in making comparisons across the country. While NTIA required applicants to provide a description of the methods the applicant intends to employ to verify data accuracy, it did not set out specific standards on how to do so. NTIA's notice of funds availability did provide the following example: "A project should propose to collect availability data by address . . . and should cross-check that data for accuracy by using at least one other metric." We have previously reported that both verification and validation of performance data are important for reducing the risk of producing inaccurate data; this additional information helps to place the credibility of an agency's reported performance data in context for decision makers. NTIA officials told us they chose not to specify how grantees should verify data because they did not want to be too prescriptive, as allowing states to develop their own data verification processes may yield best practices that can be used going forward. While it is too early to determine the effect, if any, of the limited guidance, the lack of specific standards for data verification could result in inconsistent data across states, limiting the effectiveness of the data in making comparisons across the country. The broadband providers we spoke with were generally concerned about the cost and burden of complying with any additional reporting requirements.
For example, officials from Time Warner told us that some providers do not store data in an address-by-address format and would have to revise their existing data collection procedures, taking time and resources away from network upgrades. According to FCC, broadband providers already average 337 staff hours to complete the reporting requirements for the broadband reporting form. Other stakeholders, such as Connected Nation, Consumers Union, and the Organization for the Promotion and Advancement of Small Telecommunications Companies (OPASTCO), also acknowledged that additional reporting requirements can be particularly burdensome to small broadband providers in rural areas that do not have the staff and resources of larger broadband providers. In addition, the NTIA requirement to provide data on availability may overlap with FCC’s requirement for broadband providers to report subscribership information through the broadband reporting form, because subscribership is a subset of availability. Service must be available for a consumer to be a subscriber. To ease the potential burden on broadband providers, NTIA has timed its future data collection efforts to coincide with FCC’s broadband data collection. Finally, some stakeholders, including the Pew Internet & American Life Project, Consumer Federation of America, and Consumers Union, were concerned that some data underlying the state maps would not be publicly available for review. They explained that public-private partnerships often agree to nondisclosure agreements with broadband providers to facilitate data collection by easing provider concerns regarding what the providers consider to be the proprietary nature of the data. However, according to these stakeholders, this reduces the transparency of the maps and prevents other interested parties from analyzing the information. Again, stakeholders generally noted that FCC’s efforts to develop periodic surveys (per the Broadband Data Improvement Act) and a voluntary registry could be used to collect and disseminate price and speed information to make comparisons of broadband service across the country. But they cautioned that information gleaned from these efforts is limited and therefore should be a supplement to other data collection efforts, because, as previously mentioned, consumers may not be well informed about the price and speed of their Internet service. As previously discussed, stakeholders reported that socioeconomic differences among countries can limit the efficacy of international comparisons. For example, OECD and ITU report broadband subscribers per 100 inhabitants rather than as a percentage of households. According to a senior official at the Technology Policy Institute, household size alone explains most of the differences in the broadband rankings of countries, since countries with larger households are likely to have lower per capita residential connections. As the Phoenix Center demonstrated, even if every home and business in every OECD country were wired with a broadband connection, the United States’ per capita rank would fall from 15th to 20th because the United States has a larger average household size than countries, such as Sweden and Iceland, that rank above it. According to FTC staff, because the socioeconomic status of individual countries and the historical nature of their Internet access markets can vary widely, simple comparisons of individual indicators such as broadband deployment and adoption rates across countries may not be meaningful. 
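The household-size effect described above can be shown with a small worked example. In the Python sketch below, two hypothetical countries are fully wired, with one subscription per household, yet the country with larger households scores lower on a per capita measure. All figures are invented.

# Two hypothetical, fully wired countries: every household subscribes.
countries = {
    "Country A": {"population": 10_000_000, "avg_household_size": 2.1},
    "Country B": {"population": 10_000_000, "avg_household_size": 2.6},
}

for name, c in countries.items():
    households = c["population"] / c["avg_household_size"]
    subscribers = households  # one subscription per household, by assumption
    per_100_inhabitants = subscribers / c["population"] * 100
    household_penetration = subscribers / households * 100
    print(f"{name}: {per_100_inhabitants:.1f} subscribers per 100 inhabitants, "
          f"{household_penetration:.0f} percent household penetration")

Both countries show 100 percent household penetration, but Country B reports only about 38 subscribers per 100 inhabitants against Country A's 48, even though residential broadband saturation is identical.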
In contrast to OECD’s use of subscriber data, the composite indexes we previously described attempt to take into account the socioeconomic differences and other variables among countries when comparing broadband performance. However, according to stakeholders, even composite indexes provide limited analysis because of their complex nature and the number of variables they seek to measure. For example, one of the authors of the Connectivity Scorecard noted that composite indexes are “ultimately based on subjective decisions about which indicators to include or exclude and how to weight these indicators.” The more factors or variables considered in a composite index, the more data must be collected, normalized, and weighted for comparative purposes. A spokesperson for the EIU’s E-readiness ranking stated that more variables increase the room for error. Multiple variables also make it difficult to determine a causal relationship for policy-making purposes between the variable and its measured impact on the result, according to officials with the European Commission. For example, the EIU included nearly 100 quantitative and qualitative variables in its E-readiness Ranking Report in an attempt to measure the impact of a country’s social, political, economic, and technological developments on its ICT usage and infrastructure. A representative of the EIU told us that there are limitations to this approach, and that some of the unit’s data must be estimated because of the sheer number of variables the EIU attempts to consider for the 70 countries in the E-readiness Ranking Report. Stakeholders also reported that the necessary data to improve international comparisons of broadband deployment and penetration are not available. OECD and others have noted that while supply-side data from broadband providers are both readily available and easily quantifiable, demand-side data from consumers for measuring broadband penetration are limited. Some stakeholders, such as officials with ITU, TPI, and ITIF, have noted the importance of collecting demand-side data through household surveys to more accurately reflect how consumers use their personal broadband service for economic or social gain. Governments are also increasingly recognizing the importance of collecting better demand-side data. For example, EU member countries are now required to collect household survey data on ICT usage. In addition, stakeholders reported a lack of uniformity and reliability with the data used to make international broadband comparisons, whether by composite index or single indicator. For example, although most of the countries that participate in international broadband ranking systems recognize broadband to be Internet service above 256 Kbps, there is no internationally agreed upon definition for broadband, which affects the comparability of the data collected. OECD and ITU have recommended uniform reporting standards among their member countries, but the standards are neither enforceable nor applicable to countries outside their membership. In addition, some of the organizations that develop international comparisons rely on participating countries to provide the needed data rather than independently gather the data directly from providers or in the form of household surveys, a fact that leads some to question its reliability. 
The officials we interviewed from the organizations that develop international comparisons told us they have limited ability to corroborate the data received from participating governments, outside of questioning and confirming a figure when a number appears out of line with trend data. Estimates are also made when the data are simply lacking for a particular country. Currently, discussions are also taking place on how to collect and differentiate among wireline, wireless, and mobile wireless broadband counts. According to OECD, wireless Internet connections at broadband speeds are increasingly available and particularly important in underserved areas around the world. Similarly, Internet access via mobile cellular networks has grown rapidly with the increasing availability of third-generation (3G) networks and enabled devices that allow users to access the Internet using a laptop, cell phone, or other mobile device. A representative from the Economist Intelligence Unit stated that mobile wireless Internet access is particularly important for individuals in developing countries, such as in Africa, where mobile access may be their primary Internet source. However, stakeholders noted that it is important to differentiate between 3G subscribers whose plan may allow them to access the Internet on their mobile device and those who actually take advantage of the service; current data usually do not differentiate between the two and are therefore potentially misleading. OECD is in discussions with member countries to develop a common methodology to improve the collection of mobile wireless data. Despite the concerns raised about the limitations of the measures used for international comparisons, several stakeholders found the comparisons useful. As previously mentioned, OECD's count of broadband subscribers per 100 inhabitants by technology is one of the most widely reported figures. Representatives from the Consumer Federation of America, Free Press, and the Pew Internet & American Life Project said the OECD broadband comparisons provide valuable information to policy makers. In its guidance on developing composite indicators, OECD noted that composite indexes used by other organizations in making international broadband comparisons are recognized as a useful tool in policy analysis and public communication. The indexes serve the important purpose of raising awareness among policy makers and the public of areas that deserve particular attention in future policy decisions. FCC has noted that a more fully developed picture of broadband markets would provide more accurate and useful international comparisons. The Broadband Data Improvement Act mandated that FCC include in future section 706 reports information that compares the extent of broadband service capability in a total of 75 communities in at least 25 countries abroad for each of the data benchmarks for broadband service under FCC's current speed tiers. The Commission was directed to choose international communities for the comparison that offer a population size, density, topography, and demographic profile comparable to those of various communities within the United States. In May 2009, FCC officials informed us that they had assembled a cross-bureau team of economists and attorneys to perform this international comparison. FCC staff is currently in the process of identifying and reaching out to a number of countries believed to have the relevant broadband data necessary to make such comparisons.
According to the officials, they have sent letters to 37 countries to request data. They are working under the assumption that the mandate will require them to communicate the results of their comparisons in the next Section 706 report, which is to be released in February 2010. In addition, on March 31, 2009, FCC posted a public request for comment on the international comparisons component of the act. The majority of stakeholders we spoke with support FCC's efforts to develop an additional international comparison of broadband performance. Although the term "community" was not defined in the act and had yet to be defined by FCC, this level of analysis could be more granular and therefore more comparable than what is generally provided in current international comparison reports. Representatives from organizations such as Connected Nation and Free Press support data that are collected and analyzed at a more granular local level rather than at a national level, because they believe that such data make the comparisons more relevant.

A wide range of measures to assess broadband performance is generally available to consumers, industry, and government. However, many stakeholders told us that the measures used by consumers and those used to make comparisons across the United States and among other countries have limitations. Reaching a compromise among broadband providers, consumer advocates, and others on improved broadband measures in the United States has proven difficult because they do not agree on alternatives for improvement. Nevertheless, all stakeholders are generally supportive of NTIA's State Broadband Data and Development Grant Program and its effort to create a national broadband inventory map, which could help fill some current gaps in data.

NTIA has made progress in (1) implementing its State Broadband Data and Development Grant Program and (2) requiring grantees to collect data that have important implications for consumers, policy makers, and industry in measuring broadband performance. NTIA will begin receiving data by March 2010 as part of its new grant initiative to collect state-level broadband data and establish a national broadband inventory map. However, NTIA lacks specific guidance for grantees on calculating actual delivered speeds. Without such guidance, it will be difficult to ensure the consistency, and therefore the quality, of the data, limiting the effectiveness of the mapping effort in making comparisons across the country. In addition, while NTIA provided potential grantees with an example of how to verify data accuracy, it did not provide specific standards for doing so. Consequently, NTIA will need to determine whether the data provided in the initial submission are accurate and whether additional guidance is needed. Developing procedures to help ensure consistent and accurate data is critical as NTIA begins to distribute funds to grantees and they begin their data collection. More important, this effort has the potential to provide consumers, policy makers, and industry with accurate and reliable information, such as broadband availability, type, and advertised and actual delivered speed by census block, that each group could use in its decision making and that could help guide broadband investment in unserved or underserved populations.
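To make concrete why such guidance matters, consider one way a grantee might calculate actual delivered speed: time a fixed-size transfer and convert bytes per second to megabits per second. The sketch below is a hypothetical illustration, not NTIA guidance; every function name is invented, and the choices it surfaces (transfer size, number of trials, median versus mean) are exactly the details that consistent guidance would need to standardize.

```python
import os
import time
import statistics

def fake_network_read(n):
    # Stand-in for reading n bytes from a real test endpoint; a grantee's
    # actual measurement would read from a network server instead.
    return os.urandom(n)

def measure_delivered_speed_mbps(read, total_bytes=10_000_000, chunk=65_536):
    """Time a fixed-size transfer and convert bytes/second to megabits/second."""
    received = 0
    start = time.monotonic()
    while received < total_bytes:
        received += len(read(min(chunk, total_bytes - received)))
    elapsed = time.monotonic() - start
    return (received * 8) / (elapsed * 1_000_000)

# Aggregation choice matters: a median damps transient congestion spikes,
# while a mean is pulled by outliers -- guidance would need to pick one.
samples = [measure_delivered_speed_mbps(fake_network_read) for _ in range(5)]
print(f"median: {statistics.median(samples):.1f} Mbps, "
      f"mean: {statistics.mean(samples):.1f} Mbps")
```

Two grantees that make different choices at any of these points would report speeds that are not comparable, which is the consistency risk noted above.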
To increase the data quality and subsequent results from the State Broadband Data and Development Grant Program, including a searchable nationwide inventory map of existing broadband service capability and availability in the United States, we recommend that the Secretary of Commerce examine the first round of data collection and determine whether to develop specific guidance for grantees to improve the consistency and accuracy of the data collected under the program.

We provided a draft of this report to the Department of Commerce and FCC for their review and comment. The Department of Commerce provided written comments, which are reprinted in appendix II. In its written comments, the Department of Commerce generally agreed with our recommendation and stated that it had already begun taking actions to address it. More specifically, the Department of Commerce stated that immediately following the awarding of grant funds, it will investigate opportunities for improved data collection methods, including qualitative and quantitative analyses of data collection and verification methods, as well as an assessment of which methods are cost-efficient and accurate. FCC responded that it did not have any comments on the draft report.

We are sending copies of this report to the Secretary of Commerce and the Chairman of the Federal Communications Commission. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and major contributors to this report are listed in appendix III.

To gather information related to both objectives, we reviewed related documentation and laws, including the Broadband Data Improvement Act, enacted in 2008; the legislative history of the act; the American Recovery and Reinvestment Act of 2009 and its legislative history; various Federal Communications Commission (FCC) proceedings; and reports from the Congressional Research Service (CRS). We also conducted a literature review to identify broadband performance measures, including international broadband comparisons. To identify the broadband performance measures available to consumers, industry, government, and other stakeholders, we interviewed officials and representatives from several stakeholder groups. On the basis of the requirements of the mandate, the literature review, the judgment of our staff with expertise in broadband and telecommunications issues, and suggestions from the initial interviews we held, we included the following stakeholder groups in our analysis to ensure a variety of perspectives and views on broadband performance measures: academicians and think tanks, broadband providers, consumer advocacy groups, federal and state agencies and public/private partnerships, international organizations, and trade and industry groups. We used the same process to identify potential stakeholders for interviews.
Table 4 contains a detailed list of the stakeholders included in our study. To evaluate the limitations, if any, of the measures, and how the measures could be supplemented or improved, we interviewed and reviewed related documentation from the stakeholders previously mentioned to obtain their opinions and analysis on the strengths and limitations of the measures and any potential options identified. We also asked the stakeholders to discuss the validity and reliability of the measures and any potential improvements. Although representatives from the think tanks and academicians we interviewed identified limitations with the data that are used to make international comparisons, stakeholders generally used the same sources, thought the data were adequate, and supported current efforts being made to improve the quality of the data.

We conducted this performance audit from February 2009 through October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, David Sausville (Assistant Director), Eli Albagli, Derrick Collins, Amy Rosewarne, Andrew Stavisky, Hai Tran, Amy Ward-Meier, and Mindi Weisenbloom made key contributions to this report.
The Broadband Data Improvement Act, enacted in 2008, established a variety of initiatives intended to improve the quality of state and federal data on broadband (i.e., high-speed Internet) services and promote the deployment (the building of infrastructure over which broadband services can be provided) of affordable broadband services to all parts of the nation. The act required GAO to conduct a study to consider and evaluate additional broadband metrics or standards. This mandated report addresses (1) the measures generally available to consumers, industry, government and others, and (2) the limitations, if any, of the measures and how they could be supplemented or improved. To identify and evaluate the measures, GAO conducted a review of literature and related laws and interviewed and reviewed related documentation from stakeholder groups. Multiple measures are generally available to consumers, industry, and government to assess broadband performance. Consumers can generally access measures of availability, price, advertised speed, and actual delivered speed from providers and third parties to compare services. Industry and government also have access to some measures that enable comparisons across segments of the United States to inform policy and guide investment. For example, the Federal Communications Commission's (FCC) data from its semiannual reporting requirement for providers are the primary source for comparing the availability of and subscribers to broadband. Through a literature review and interviews with stakeholders, GAO focused on 10 measures that can be used to make international comparisons of broadband service to inform policy. Eight were composite indexes that are generally used to account for factors such as demographic and economic differences among countries, which, according to stakeholders, can affect broadband deployment and penetration (the number or percentage of subscribers per capita or per household). Through available documentation and discussions with stakeholders, GAO found that current measures have limitations, views were mixed on potential alternatives, and ongoing efforts need improvement: (1) According to some stakeholders, the lack of comprehensive measures from the government to compare price, actual delivered speeds, and service reliability data from providers is a limitation for consumers. FCC has open proceedings on requiring providers to report such information, but there was no consensus among stakeholders on the need for additional reporting requirements and measures. (2) Stakeholders told GAO that FCC's semiannual data collection from providers does not include information on availability, price, or actual delivered speeds, which limits the ability to make comparisons across the country and inform policy or investment decisions. Stakeholders generally agreed that the Department of Commerce's effort to develop a national broadband inventory map through its State Broadband Data and Development Grant Program would address some gaps and provide detailed data on availability, subscribership, and actual delivered speeds, but the department did not provide guidance to grantees on calculating actual delivered speeds or specific standards to verify the data collected. This could result in inconsistent data and limit the effectiveness of the effort. GAO has previously reported that consistency and data verification are important for reducing the risk of producing inaccurate data. 
(3) Finally, the measures used for international broadband comparisons have limitations for a variety of reasons, including socioeconomic differences that make the comparisons difficult. Despite the concerns, stakeholders found the measures useful to help inform policy. Stakeholders generally supported FCC's efforts to develop international comparisons because the comparisons will be at a local level within each country, and could provide more relevant information.
The H-1B program enables companies in the United States to hire foreign workers for work in specialty occupations on a temporary basis. A specialty occupation is defined as one requiring theoretical and practical application of a body of highly specialized knowledge and the attainment of a bachelor's degree or higher (or its equivalent) in the field of specialty. The law originally capped the number of H-1B visas at 65,000 per year, but the cap has changed several times pursuant to legislation. The American Competitiveness and Workforce Improvement Act of 1998 increased the cap to 115,000 for fiscal year 1999 and fiscal year 2000. The American Competitiveness in the Twenty-First Century Act of 2000 (AC21) further increased the limit to 195,000 for fiscal year 2001 through fiscal year 2003. In fiscal year 2004, the cap reverted to its original level of 65,000. Over this period, statutory changes also allowed for certain categories of individuals and companies to be exempt from or to receive special treatment under the cap. In 2000, AC21 exempted from the cap all individuals being hired by institutions of higher education, as well as nonprofit and government research organizations. More recently, the H-1B Visa Reform Act of 2004 allowed for an additional 20,000 visas each year for foreign workers holding a master's degree or higher from an American institution of higher education to be exempted from the numerical cap limitation. In addition, in 2004, consistent with free trade agreements, amendments allowed for up to 6,800 of the 65,000 H-1B visas to be set aside for workers from Chile and Singapore. Figure 1 depicts the cap levels over the last 20 years and important changes to provisions related to their application. See appendix V for a list of selected H-1B program laws, with descriptions of key provisions.

While the H-1B visa is not considered a permanent visa, H-1B workers can apply for extensions and pursue permanent residence in the United States. Initial petitions are those filed for a foreign national's first-time employment as an H-1B worker and are valid for a period of up to 3 years. Generally, initial petitions are counted against the annual cap. Extensions—technically referred to as continuing employment petitions—may be filed to extend the initial petitions for up to an additional 3 years. These extensions may be filed for extended employment; sequential employment (when an H-1B worker changes employers within his or her 6-year time period); or concurrent employment (when an H-1B worker intends to work simultaneously for a second employer). Extensions do not count against the cap. While working under an H-1B visa, an H-1B worker may apply for legal permanent residence in the United States. After filing an application for permanent residence, H-1B workers are eligible to obtain additional 1-year visa extensions until their green card is issued. To obtain such extensions, the green card application must be employment-based (i.e., not a green card sponsored by a family member). Employment-based green cards can take a number of years to obtain due to limits on the number of green cards issued to individuals from different countries and in particular employment categories.

Labor, Homeland Security, and State each play a role in administering the application process for an H-1B visa. Labor's Employment and Training Administration (Employment and Training) receives and approves an initial application, known as the Labor Condition Application (LCA), from employers.
Homeland Security's U.S. Citizenship and Immigration Services (USCIS) reviews an additional employer application, known as the I-129 petition, and ultimately approves H-1B visa petitions. For prospective H-1B workers residing outside the United States, State interviews these approved applicants and compares information obtained during the interview against each individual's visa application and supporting documents, and ultimately issues the visa. For prospective H-1B workers already residing in the United States, USCIS updates the workers' visa status without involvement from State. Homeland Security's USCIS has the primary responsibility for administering the H-1B program, which includes responsibility for tracking the number of approved petitions against the established cap. Generally, Homeland Security accepts H-1B petitions in the order in which they are received. However, for those years in which USCIS anticipates that the number of I-129 petitions filed will exceed the cap, USCIS holds a "lottery" to determine which of the petitions will be accepted for review. For the lottery, USCIS uses a computer-generated random selection process to select the number of petitions necessary to reach the cap. USCIS runs two lotteries—one for cases subject to the 65,000 cap, and another for the 20,000 visas available to foreign workers holding a master's degree or higher from an American institution of higher education.

With regard to enforcement, Labor, Justice, and Homeland Security each have specific responsibilities. Labor's Wage and Hour Division (Wage and Hour) is responsible for enforcing program rules by investigating complaints made against employers by H-1B workers or their representatives and assessing penalties when employers are not in compliance with the requirements of the program. Justice is responsible for investigating complaints made by U.S. workers who allege that they have been displaced or otherwise harmed by the H-1B visa program. Finally, Homeland Security's Directorate of Fraud Detection and National Security (FDNS) collaborates with Homeland Security's Immigration and Customs Enforcement office to investigate fraud and abuse in the program.

The application and approval process for an employer to hire an H-1B worker requires submission of an LCA and the I-129 petition. Employers must first submit the LCA to Employment and Training for certification. The LCA may reflect requests for one or more workers. On this form, employers must provide their company name, address, Employer Identification Number, and the rate of pay and work location for the anticipated H-1B workers, among other information. Submission of the LCA to Employment and Training also involves employers making four attestations: (1) that they will pay H-1B workers the amount they pay other employees with similar experience and qualifications or the prevailing wage; (2) that the employment of H-1B workers will not adversely affect the working conditions of U.S. workers similarly employed; (3) that no strike or lockout exists in the occupational classification at the place of employment; and (4) that the employer has notified employees at the place of employment of the intent to employ H-1B workers. These attestations are designed to protect both the jobs of domestic workers and the rights and working conditions of foreign temporary workers.
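The computer-generated random selection described earlier in this section can be illustrated with a short sketch. This is a hypothetical illustration rather than USCIS's actual procedure: the assumption that the advanced-degree drawing runs first and that petitions not selected there join the general pool is ours, not a detail confirmed in this report, and the petition identifiers are invented.

```python
import random

def run_h1b_lottery(general_petitions, masters_petitions,
                    general_cap=65_000, masters_cap=20_000, seed=None):
    """Sketch of a two-pool random selection over petition IDs.
    Assumption (not confirmed by this report): the advanced-degree drawing
    runs first, and petitions not selected there fall into the general pool."""
    rng = random.Random(seed)

    selected_masters = rng.sample(masters_petitions,
                                  min(masters_cap, len(masters_petitions)))
    chosen = set(selected_masters)
    general_pool = general_petitions + [p for p in masters_petitions
                                        if p not in chosen]
    selected_general = rng.sample(general_pool,
                                  min(general_cap, len(general_pool)))
    return selected_masters, selected_general

# Hypothetical filing season in which both pools are oversubscribed.
masters = [f"M{i}" for i in range(25_000)]
general = [f"G{i}" for i in range(90_000)]
m_sel, g_sel = run_h1b_lottery(general, masters, seed=1)
print(len(m_sel), len(g_sel))  # 20000 65000 -- caps bind when demand exceeds them
```

The sketch also shows why, in an oversubscribed year, the number of submitted petitions understates demand only up to the point at which petitions stop being accepted.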
H-1B-dependent employers or employers found to have committed a willful failure or misrepresentation during the 5-year period preceding the filing of the LCA must make additional attestations on their LCA. They must attest (1) that they did not displace a U.S. worker within the period of 90 days before and 90 days after filing a petition for an H-1B worker; (2) that they took good-faith steps prior to filing the H-1B application to recruit U.S. workers and that they offered the job to a U.S. applicant who was equally or better qualified than an H-1B worker; and (3) that they will not place the H-1B worker with any other employer unless they have inquired and have no knowledge that, within the 90 days before and 90 days after the placement, the other employer has displaced or intends to displace a U.S. worker with the H-1B worker. Unlike some other temporary visa programs, the H-1B program does not require employers to provide evidence that they have first "tested" the U.S. labor market by trying to hire a U.S. worker. Under other temporary visa programs, such as the H-2A program for temporary agricultural workers and the H-2B program for temporary nonagricultural seasonal or intermittent workers, an employer must, for example, document that it has conducted detailed recruitment efforts, advertised the job as specified, listed the job with its State Workforce Agency, and, under certain circumstances, document why it did not hire applicants it rejected. In the H-1B program, only those employers that are designated as H-1B-dependent or willful violators are subject to any type of labor market test. However, these employers need only attest, rather than demonstrate, that they took good-faith steps to hire a U.S. worker.

Once Labor has approved the LCA, employers must submit the certified LCA to Homeland Security, along with the I-129, for additional review. The I-129, submitted by employers to Homeland Security for each prospective H-1B worker, must show the wage that will be paid, the location of the position, and the worker's qualifications, among other information. Figure 2 summarizes the steps required to obtain an H-1B visa.

Available data show that, while demand for H-1B workers by employers has fluctuated with the economy over the past decade, the demand for H-1B workers tended to exceed the cap, as measured by the numbers of initial petitions submitted by employers. In addition, although the vast majority of employers for which Homeland Security processed petitions were approved to hire just one worker, a small number of employers consistently garnered about 30 percent of all approved petitions. Although a precise measure of demand for H-1B workers does not exist, a key proxy—the number of initial petitions for new H-1B workers submitted to Homeland Security annually—indicates that demand for H-1B workers tended to exceed the cap over the last decade. As shown in figure 3, from 2000 to 2009, initial petitions for new H-1B workers submitted to Homeland Security by employers who are subject to the cap exceeded the cap in all but 3 fiscal years. If initial petitions submitted by employers exempt from the cap are also included in this measure, the demand for new H-1B workers is higher, since over 14 percent of all initial petitions across the decade were submitted by employers who are not subject to the cap. However, the number of initial petitions submitted annually is likely to be an underestimate of demand for two reasons.
First, employers subject to the cap may stop submitting initial petitions once they know the cap has been reached. Second, according to Homeland Security officials, Homeland Security stops accepting petitions that are subject to the cap once the cap is reached. Consequently, we cannot precisely determine the level of any unmet demand among those employers who are subject to the cap. When requests to extend H-1B workers' visas (i.e., extensions) are included in the total count of submitted petitions, we found that submitted petitions not subject to the cap generally increased as a proportion of overall petitions submitted from fiscal years 2000 to 2009, and greatly exceeded those that were subject to the cap in the last half of the decade. As shown in figure 4, during this time period, the proportion of all submitted petitions that were not subject to the cap increased from 48.9 percent in 2000 to 64.6 percent in 2009. However, as noted previously, submitted petitions for H-1B workers subject to the cap are likely to be underestimated. Additionally, Homeland Security's data system does not enable us to determine which petitions for H-1B workers were subject to the 20,000 master's cap.

Another proxy of demand for H-1B workers is the number of employers that submitted petitions for H-1B workers to Homeland Security each year. As shown in figure 5, the overall number of employers submitting petitions for H-1B workers—both initial petitions and requests for visa extensions—fluctuated from 44,675 in fiscal year 2000 to 58,956 in fiscal year 2009, with a high of 80,945 in fiscal year 2004, showing much less annual fluctuation than the overall number of H-1B workers they requested. This proxy is also likely to underestimate demand because any additional employers submitting petitions for H-1B workers subject to the cap were not counted after the cap was reached.

Most employers that submitted petitions to Homeland Security were approved, and most were approved for one H-1B worker, but a small percentage of employers garnered over one-quarter of all H-1B approvals between fiscal year 2000 and fiscal year 2009. Over the 10-year period, about 94 percent of all submitted petitions (initial and extensions) were approved, with a high of 97 percent in fiscal year 2006 and a low of 84 percent in fiscal year 2009. With respect to the number of approved workers per employer, 68 percent of employers were approved for 1 H-1B worker, and about 99 percent of all employers with approved petitions (627,922) were approved for 100 or fewer workers. However, over the decade, less than 1 percent of all employers with approved petitions were approved to hire almost 30 percent of all H-1B workers. Further, according to Labor's application data, between 3 and 5 percent of all employers were categorized as being either H-1B-dependent or willful violators between fiscal year 2002 and fiscal year 2008. However, Labor does not require employers to report (and therefore Labor's data do not indicate) the proportion of H-1B workers in each employer's workforce.

Among the top H-1B-hiring employers—those approved for large numbers of H-1B workers—are employers that function as "staffing companies" (i.e., employers that apply for H-1B workers but ultimately place these workers at the worksites of other employers as part of their business model, many of which also outsource work overseas). Some foreign-owned information technology (IT) services firms have publicly stated that their ability to provide IT services to U.S.
customers depends in part on access to significant numbers of H-1B and L-1 visa workers. Ultimately, the prevalence of these employers participating in the H-1B visa program is difficult to know because there are no disclosure requirements and Homeland Security does not track such information. However, using publicly available data on H-1B-hiring employers, we learned that at least 10 of the top 85 H-1B-hiring employers in fiscal year 2009 participate in staffing arrangements, of which at least 6 have headquarters or operations located in India. Together, in fiscal year 2009, these 10 employers garnered 11,456 approvals, or about 6 percent of all H-1B approvals. Further, 3 of these employers were among the top 5 H-1B-hiring companies, receiving 8,431 approvals among them.

To better understand the impact of the H-1B cap and program on H-1B employers, GAO interviewed 34 companies—including individual structured interviews with 31 companies and group discussions with 3 companies—about how the H-1B program affects their costs of doing business, their R&D activities, and their decisions about whether to locate work overseas. These companies reported that the H-1B cap created various costs, but those costs varied depending on the size and maturity of the company. While many companies said that access to skilled labor is a significant factor in locating their R&D labs, few said that the H-1B cap was an important factor in their decisions about locating activities (either R&D or other skilled work) abroad, with the exception of IT services firms.

Many of the 34 companies we spoke with cited a range of direct and indirect costs associated with the H-1B cap and program features, including staffing uncertainties, legal and administrative fees, and other costs. However, the nature and extent of some costs varied with the type of firm. According to firms we interviewed, uncertainty in staffing due to the cap has imposed varied, and for some significant, costs of doing business, although these costs are difficult to quantify. Twenty-one of the 31 firms we interviewed individually reported that they had H-1B petitions denied due to the cap in years when the cap was reached early in the filing season. In these years, the firms did not know which, if any, of their H-1B candidates would obtain a visa, and several (7) firms said that this situation created uncertainty that interfered with both project planning and candidate recruitment. Two firms also said that delays in processing their petitions, such as requests for additional evidence, sometimes resulted in their candidates accepting other positions in the United States or abroad instead of waiting for a resolution. In addition, two firms mentioned that in order to get the petition application in before the deadline, they sometimes made job offers to candidates who required H-1B visas before they were certain of the need to hire them.

Firms cited other costs associated with acquiring H-1B hires, such as legal and administrative costs and Homeland Security filing fees. For H-1B applications, the combined legal and filing fee costs among the 26 firms that reported this information to us ranged from an estimated $2,320 to $7,500 or more per petition. However, several firms mentioned that petitions that generated additional requests for evidence from Homeland Security could result in higher legal costs, as well as additional administrative costs resulting from the staff hours required to collect extensive evidence.
Several firms we spoke to also noted that Homeland Security filing fees have increased significantly in recent years—for example, Homeland Security fees for firms that are not exempt from the cap have risen from $110 in fiscal year 2000 to $2,320 in fiscal year 2009. With regard to firms that eventually file applications for permanent residency for their H-1B workers, some employers we spoke with noted that their total legal and administrative costs for the duration of the process are large. For example, one company official estimated the combined costs of the H-1B and green card process to be about $16,000 over the duration of the process. Two respondents noted that they have such long-term costs in mind when considering H-1B candidates. While only a few respondents brought up the cost of sponsoring an H-1B worker for permanent residency, nearly all (30 out of 31) of the firms we spoke with indicated that they had sponsored at least some of their H-1B visa holders for permanent residency, and 8 said that they typically sponsor all H-1B visa holders whose job performance was satisfactory. In years when firms did not receive approvals for all of their H-1B petitions, most of the large, multinational firms we spoke with reported that they were generally able to hire their preferred candidates because the firms were skilled at navigating the immigration system. Specifically, 12 of the 14 large, multinational firms we spoke with reported having found a way to hire a job candidate denied an H-1B visa due to the cap. They did so, for example, by sending the candidate to work in an overseas office and subsequently bringing him or her in on an L-1 visa, or by extending the practical training period allowed under their student visa for an additional year. Some firms noted, however, that these alternatives can be very costly. For example, after H-1B visas for preferred job candidates fell through, nine companies said they had sometimes placed their job candidates temporarily overseas, and three mentioned that this process required the company to pay an “expatriate package,” with allowances for housing and living expenses. One company executive said hiring an employee on an expatriate package is often three times more costly than hiring the same employee in the U.S.—a point with which others we spoke with concurred. Of the 13 smaller H-1B employers we spoke with, 8 indicated that they had incurred significant business costs resulting from petitions denied due to the cap, delays in processing H-1B petitions, and other costs associated with the H-1B program. Six of the smaller companies we spoke with had petitions denied due to the cap, and of these, four indicated they did not have the resources or the infrastructure in place to pursue alternatives such as placing a desired employee abroad for a year. In addition, executives from four of the six small firms we spoke with who had petitions denied due to the cap told us that they had to delay or cancel projects, or hire second-choice employees, because they were unable to hire all of the employees for whom they sought H-1B petitions. Several firms in technology-intensive fields such as IT product development—both large and small—stressed that the product development cycles in their industries are extremely compressed, and in order to be competitive, they frequently need to develop new products in a matter of months, not years. Some of these firms told us that any delay in hiring an essential employee can, therefore, result in significant losses. 
One founder of a technology company, who valued his 3-year-old firm at about $100 million, said a 3-month delay in product development could mean lost opportunities worth several million dollars. To gain the perspective of entities that support and work with emerging technology companies (high-tech "start-up" firms), we spoke with venture capital and law firm representatives, who reported that start-ups, in particular, often have less time and fewer resources for navigating the immigration system, and that the impact of employee immigration problems on them can be substantial. Some founders of start-ups and venture capital firms with whom we spoke reported that the skills required by small firms and emerging companies in high-tech sectors are often extremely specialized, and sometimes these firms cannot readily find a "second-choice" employee in the U.S. labor market. For example, one start-up founder stressed that competition for "the best people" is fierce in "a high-growth, venture-backed business" where building "complex software faster and better than companies that are orders of magnitude larger" is critical to survival. In addition, foreign nationals seeking to found new companies in the United States can face a unique set of difficulties. Two lawyers we spoke with whose firms work with many emerging technology companies in Silicon Valley described cases in which entrepreneurs attempting to establish very early-stage technology start-ups were unable to obtain H-1B or other work visas for themselves and either relocated the project abroad or had to abandon the start-up.

When asked how the H-1B cap affected their decisions on where to locate their R&D activities and other operations, 15 of the 28 companies that responded to these questions said the H-1B cap was not an important factor in their decisions on the location of these activities. The 20 firms we spoke with that conducted R&D were in a variety of industries—including semiconductor and electronics manufacturing, pharmaceuticals, software publishing, and financial services—and 7 of these 20 were in the manufacturing sector. Several firms that conducted R&D reported that their H-1B workers were essential to this work in the United States. Furthermore, access to skilled labor from around the world was very important to a number of the firms we interviewed; 15 of the companies we spoke with had R&D centers or labs overseas, and 8 of these firms told us that these centers or labs had been set up largely to access the skilled workforce in that country. However, only four said the H-1B cap was an important determinant in the creation of these overseas centers. Respondents from several of the multinational companies we spoke with—whether headquartered in the United States or not—regarded their firms as global entities, and five said that their decisions to expand overseas are primarily driven by the pursuit of new markets. In addition, firms said many other factors are involved in such decisions, including the cost of labor; access to a workforce in a variety of time zones; language and culture; proximity to universities; and tax law.

While the majority of company officials we spoke to said they had not moved work offshore due to the H-1B program or cap, several respondents from one group of companies—IT services firms—told us they have moved or would move work offshore as a result of the cap or changes in the administration of the H-1B program.
One large IT services firm that had both an onshore staffing component and an offshore outsourcing component noted that in years when the H-1B cap prevented hiring all the foreign workers sought, the company could locate a larger portion of the work project overseas. Two IT staffing firms we spoke with—firms that place H-1B workers at the worksites of client companies—said their U.S. business relies heavily on the H-1B program because H-1B visa holders are more willing to relocate around the country, and one noted that H-1B workers accept lower wages than U.S. workers. Several executives at IT staffing firms we interviewed noted that, since issuance of a January 2010 Homeland Security memo, Homeland Security is more aggressively enforcing a requirement that staffing firms be able to provide evidence of an employer-employee relationship with the H-1B workers they sponsor by, for example, having a contract with their clients in place. Executives from staffing firms told us they often cannot have a contract in place because they provide labor on short notice to their client firms. As a result of the increased enforcement of this provision, executives at one staffing firm told us that they no longer hired H-1B workers for their staffing business, and executives at several other staffing firms reported that they had ceased hiring new H-1B workers, hiring instead only foreign nationals already in the country with a current H-1B visa. Executives at some companies that already had an offshore location reported expanding the portion of their work conducted overseas, and others reported that they had either opened an offshore location to access labor from overseas or were considering doing so.

Some researchers have noted that some IT services firms that conduct offshore outsourcing and employ large numbers of H-1B workers offer engineering and R&D services. Although 3 of the 10 IT services firms we spoke with described themselves as conducting R&D, 2 of the 3 noted that this R&D involved on-the-job innovation. Some experts we spoke with also noted that learning and technological innovation are often attained on the job or through informal collaboration, as opposed to through formal R&D efforts. Thus, while the movement of IT services work offshore in response to the H-1B cap may not result in the direct transfer of formal R&D, it may nonetheless result in the movement of innovation offshore.

Companies we spoke with reported several concerns with the H-1B petition adjudication process, including the amount of paperwork required and the level of evidence requested during this process. Companies and experts we spoke with suggested several program modifications that could remedy some of these reported problems.

Increasingly burdensome adjudication process: Eighteen of the firms we spoke with maintained that the review and adjudication process had become increasingly burdensome in recent years, with many of these firms complaining about the amount of paperwork they needed to provide as part of the adjudication process. Further, eight firms—of all sizes and across a range of industries—complained that the number of requests for additional evidence from Homeland Security increased significantly in recent years. Relatedly, in prior work, we suggested that Congress consider streamlining the H-1B approval process by eliminating the separate requirement that employers first submit an LCA to Labor for review and certification, since another agency (USCIS) subsequently conducted a similar review of the LCA.
Three years after our recommendation, in 2003, USCIS was moved under the newly formed Homeland Security; however, Congress has not taken action to streamline the process.

Inconsistencies in the adjudication process: Executives at several companies we spoke with provided examples of what they viewed as inconsistencies in the adjudication process. For example, one company executive noted that the petitions it sends to one of Homeland Security's two processing centers are often processed more efficiently than the petitions it sends to the other processing center. Another executive noted that at times, "decisions on approving or denying the H-1B visa applications seem arbitrary." This executive provided an example of a USCIS adjudicator who decided that the project for which the company sought an H-1B worker did not require "specialty education," but the executive felt that if the adjudicator had contacted the client firm, they could have easily seen that a specialist was required. Other firms noted that some adjudicators ask for evidence that seems unnecessary. For example, an immigration lawyer at a multinational pharmaceutical company said that agency requests for evidence do not always appear to be "thoughtful," and cited a Request for Evidence that demanded a review of the qualifications of an applicant who had received a science degree from Oxford University.

Adjudication process not customized for different employers: Several companies we spoke with complained that the adjudication process is the same for all H-1B employers, irrespective of the employer's track record with the H-1B program. For example, the Immigration Policy Manager for a large, household-name Fortune 100 company recounted being asked to provide photographic evidence of its headquarters as part of the Request for Evidence in the petition review process. As another example, the Chief Executive Officer of a small software application developer who had been using the H-1B program for over 10 years recounted the frustration of interviewing 60 U.S. candidates before finding 3 candidates through international hiring, and then facing a vetting process that questioned his effort to hire a U.S. citizen. At the same time, Homeland Security staff we spoke with reported having to review large stacks of paperwork to adjudicate a single petition. Experts we spoke with suggested that Homeland Security consider creating a risk-based adjudication process whereby businesses are ranked on their experience with the program and past compliance issues. Such a process could give well-vetted businesses with a strong track record of H-1B regulatory compliance access to a streamlined petition approval process with reduced requests for evidence, lowering firms' burden of providing evidence and permitting Homeland Security investigators to focus their investigative efforts efficiently.

Rigidities in the lottery system: Several company executives, industry representatives, and academic researchers we spoke with cited examples of what they viewed as rigidities in the lottery system, especially in years when the H-1B cap is hit early. Several industry representatives told us that the lottery process does not allow their clients to rank their top choices; as a result, firms do not necessarily receive approval for the most desired H-1B candidates.
Several companies we spoke with also raised the issue that the annual allotment period does not allow firms to make their hiring decisions in response to business needs throughout the year, especially during years when the cap is hit early in the year. Some company executives and researchers we spoke with suggested two changes: permitting employers to rank their applications so that they are able to hire the best-qualified workers for the jobs in highest need, and distributing the allocation of H-1B visas throughout the year (such as quarterly) rather than annually to allow more flexible hiring of H-1B workers.

Visas for emerging technology companies: Entrepreneurs and venture capital firms we interviewed said that program rules can inhibit many emerging technology companies and other small firms from using the H-1B program to bring in the talent they need, constraining the ability of these companies to grow and innovate in the United States. For example, for the earliest-stage ventures, when the person who needs the H-1B visa is the entrepreneur, there is sometimes no "firm" in existence yet that can meet the legal criteria for employing H-1B workers. While it is not necessarily the role of the H-1B program to provide work visas for foreign entrepreneurs, several parties we spoke with discussed the risk of the United States losing its advantage in high-tech entrepreneurship if U.S. immigration policy undermines the ability and interest of new entrepreneurs to move to high-tech communities like Silicon Valley. Some venture capital firms and businesses we spoke with suggested that, in order to promote the ability of entrepreneurs to start businesses in the United States, Congress should consider creating a visa category for entrepreneurs, available to persons with U.S. venture backing.

Agency officials expressed reservations about the feasibility of GAO's past recommendation and the suggestions from experts and company executives on improving the application process. Homeland Security officials believed that Labor would be better suited to review the LCA because Labor has specialized knowledge about the computation of prevailing wages. Labor officials, however, conceded that their review of the LCA is limited by statute, as discussed above. In regard to the potential adoption of a customized adjudication process, Labor officials noted that a strong track record of compliance with program rules does not guarantee future compliance. Homeland Security officials also noted that establishing a system for employers to rank their submitted petitions in order of priority might increase the likelihood of fraud if it also increased incentives for employers to submit applications for hypothetical workers in order to capture a larger proportion of those selected for the lottery. State officials raised questions about the logistics required for allocating H-1B petitions throughout the year—for example, whether or how employers would be permitted to resubmit petitions after receiving a denial in one quarter, and whether such a system might result in more employers being denied access to H-1B workers during peak seasons. Homeland Security officials also noted two efforts currently under way to streamline the application process for prospective H-1B employers.
First, Homeland Security is developing a product that would allow it to use data from a private data vendor to automatically download certain data on employers and update those data over time, so that in the future employers may not have as heavy a burden in filing their petitions. This product is currently being tested. Second, Homeland Security is preparing a proposed rule, which is being reviewed and considered within the agency, to allow employers to submit requests for H-1B slots before submitting an LCA. This rule would spare employers that were not chosen in the lottery from having to file an LCA and could also reduce workloads for Labor. The officials did not know whether and when a proposed rule would be published for comment and finalized.

Data on the total number of H-1B workers in the United States are not available because of limitations in agency data. In addition, although Homeland Security is responsible for tracking the number of H-1B petitions approved under the cap or the number of H-1B visas issued, it cannot precisely do so. However, Homeland Security is currently taking steps to address these limitations. Data on the annual cohort of people approved to be H-1B workers (referred to as "approved H-1B workers" in this report) offer some information on the characteristics of likely H-1B workers, including their countries of birth, occupations, and education levels.

Although Homeland Security generally tracks the flow of likely H-1B workers into the United States on an annual basis, it cannot determine the size of the cumulative H-1B workforce because several agencies or departments manage data on this population over time, and the systems that capture the data are not easily linked. H-1B petition approvals are captured in Homeland Security's CLAIMS 3 data system, as are changes in visa status for approved H-1B workers who are already residing in the country at the time of approval. However, visas for H-1B workers living abroad at the time of approval are captured by a data system that is administered by State and not linked to CLAIMS 3. Further, information on visa holders who actually enter or exit the United States is tracked via Homeland Security's United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program, which is not systematically linked to CLAIMS 3. Because these data systems do not use a unique, person-centric identifier for H-1B workers, Homeland Security cannot determine, for example, how many approved H-1B workers living abroad actually received an H-1B visa and/or ultimately entered the country. Similarly, Homeland Security does not track H-1B workers after their visas expire and cannot readily determine if and when H-1B workers apply for or are granted legal permanent residency, leave the country, or remain in the country on an expired visa. The fact that electronic records from different systems are not linked also results in unnecessary duplication of effort. For example, according to State officials, while State has some capacity to query Homeland Security's CLAIMS 3 database, its consular posts cannot import data from CLAIMS 3 into their own data system, so State contractors re-enter information from CLAIMS 3 manually into State's data system. Further, although Homeland Security is responsible for tracking the number of H-1B petitions approved under the cap and the number of H-1B visas issued, it does not maintain precise counts of either.
To implement the statutory cap on H-1B visas, Homeland Security must take the necessary steps to maintain an accurate count of the number of aliens subject to the annual cap who are issued visas or otherwise provided nonimmigrant status under the Immigration and Nationality Act. However, according to Homeland Security officials, the department's current processes do not allow them to determine precisely when approvals reach the number set by the cap. Instead, they stop accepting initial petitions for new H-1B workers that are subject to the cap when they estimate that the number of approved petitions is approaching the mandated limit. In fiscal year 2005, Homeland Security's Office of Inspector General found that USCIS exceeded the 65,000 cap limit by about 7,000 approved petitions and recommended that the agency maintain more precise control over the number of H-1B visas issued. Although the recommendation was closed by the Office of Inspector General in 2006, Homeland Security officials concede they still cannot precisely count, in an ongoing manner, petitions accepted under the cap, despite several changes in how the agency accepts, monitors, controls, and forecasts receipts for submitted petitions subject to the cap. Further, officials noted that as long as the process of submitting and adjudicating H-1B petitions remains the same, they are unlikely to be able to provide a precise count of petitions accepted under the cap.

The capability to better track the cumulative H-1B workforce and petitions accepted under the annual cap may develop with the eventual completion of Homeland Security's program to modernize business processes and information systems, although challenges remain. Homeland Security's "Transformation Program" is a multiprogram, multiyear effort, ongoing since 2005, that includes a plan to implement an electronic I-129 petition with a unique identifier for each H-1B worker. Using this identifier, Homeland Security would likely be able to share data with State and other external partners. Homeland Security officials reported, for example, that they are currently working with agencies that include Justice and State to create a cross-reference table of agency identifiers for individuals applying for visas. Ultimately, the table would capture each record for the same person and employer from all partner agency programs, such that records for a specific individual can be merged under one unique person-centric identifier (a sketch of this approach appears below). When this occurs, it will be possible to identify who is in the United States at any one point in time under any and all visa programs. USCIS plans to develop internal guidance for the electronic I-129 petition over the next 2 years. However, according to previous GAO reports, Homeland Security's Office of Inspector General, and Homeland Security officials, the agency faces challenges finalizing and moving ahead with implementation of the program.

Between fiscal year 2000 and fiscal year 2009, the majority of approved H-1B workers (initial and extensions for both employers subject to the cap and cap-exempt employers) were born in Asia. Over the last decade, the top four countries of birth for approved H-1B workers were India, China, Canada, and the Philippines. Across all 10 years, about 64 percent of approved H-1B workers were born in these four countries, with the largest group from India (see fig. 9).
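As referenced above, the cross-reference-table approach can be illustrated with a minimal sketch: each system keeps its own local identifier, and a shared table maps those local identifiers to a single person-centric key so that records can be merged. All identifiers, field names, and events below are invented for illustration; they do not reflect the actual schemas of CLAIMS 3, State's system, or US-VISIT.

```python
# Sketch of the cross-reference-table idea: a shared table maps each system's
# local identifier to one person-centric key so records can be merged.
# All identifiers and fields are hypothetical, not the agencies' schemas.
from collections import defaultdict

cross_reference = {                        # (system, local_id) -> person key
    ("CLAIMS3", "WAC-555"): "P-0001",
    ("STATE",   "VISA-88214"): "P-0001",
    ("USVISIT", "ENTRY-90311"): "P-0001",
    ("CLAIMS3", "EAC-777"): "P-0002",
}

records = [
    ("CLAIMS3", "WAC-555",     {"event": "petition approved", "fy": 2008}),
    ("STATE",   "VISA-88214",  {"event": "visa issued",       "fy": 2008}),
    ("USVISIT", "ENTRY-90311", {"event": "entered U.S.",      "fy": 2009}),
    ("CLAIMS3", "EAC-777",     {"event": "petition approved", "fy": 2009}),
]

# Merge every record under its person-centric key; records with no mapping
# surface as gaps instead of silently becoming duplicate "people".
person_history = defaultdict(list)
for system, local_id, payload in records:
    key = cross_reference.get((system, local_id))
    person_history[key or f"UNMATCHED:{system}:{local_id}"].append(payload)

for person, events in person_history.items():
    print(person, events)
```

Without such a shared key, as the report notes, a petition approval in one system cannot be reliably tied to a visa issuance or entry record in another, which is why the cumulative H-1B workforce cannot currently be counted.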
Over the same period, more than 40 percent of approved H-1B workers (initial and extensions for both employers subject to the cap and cap-exempt employers) were approved to fill occupations in systems analysis and programming. The next-highest occupational category was college and university education, which represented about 7 percent of H-1B approvals, as shown in figure 10. As compared to fiscal year 2000, in fiscal year 2009, approved H-1B workers (initial and extensions for both employers subject to the cap and cap-exempt employers) were more likely to be living in the United States than living abroad at the time of their initial application, to have an advanced degree (master's, professional, or Ph.D.), and to have obtained their graduate degrees in the United States. From fiscal year 2000 to fiscal year 2009, the proportion of newly approved H-1B workers who were already living in the United States increased from 43 to 62 percent. Many of these workers are likely to have held student visas or another visa status. In 2000, 40 percent of approved H-1B workers (initial and extensions) possessed an advanced degree (master's, professional, or Ph.D.), a share that increased to 59 percent by fiscal year 2009 (see fig. 11). One reason for this increase may be the H-1B Visa Reform Act of 2004, which allowed for an additional 20,000 approvals each year for foreign workers holding a master's degree or higher from an American institution of higher education. Since then, the proportion of approved H-1B workers who graduated with a master's degree from an American institution of higher education increased from 29 to 36 percent of all approved workers—including initial petitions and visa extensions. These findings are consistent with previously discussed findings that there has been an increase in the number of approved H-1B workers receiving advanced degrees from U.S. universities, as well as those who are already residing in the United States at the time of H-1B visa approval. This suggests that, in general, the approved H-1B population may include more recent graduates, who are younger and more highly educated, as compared to their U.S. citizen counterparts in similar occupations; conversely, the U.S. citizen population in similar occupations may include older, more experienced workers.

Finally, data on a cohort of approved H-1B workers whose petitions were submitted between January 1, 2004, and September 30, 2007 (including initial petitions from both employers subject to the cap and cap-exempt employers), indicate that a substantial proportion subsequently applied for permanent residence in the United States. Specifically, from a cohort of 311,847 approved H-1B petitions, we were able to obtain unique matches for 169,349 petitions from Homeland Security's US-VISIT data. Of these, GAO found that 56,454 of the individuals listed on these H-1B petitions had submitted a petition for permanent residence by 2010. Thus, at least 18 percent of the total cohort had applied for permanent residence by 2010. Further, about half of those that applied had been approved for permanent residence by 2010, 45 percent were still pending, and just 3 percent had been denied.

In addition to the lack of data on the total H-1B workforce previously discussed, the potential impact that raising the H-1B cap would have on the wages and employment of U.S. workers is difficult to estimate because of complex economic relationships. On the one hand, if the H-1B program successfully provides needed skills for the U.S.
In addition to the lack of data on the total H-1B workforce previously discussed, the potential impact that raising the H-1B cap would have on the wages and employment of U.S. workers is difficult to estimate because of complex economic relationships. On the one hand, if the H-1B program successfully provides needed skills for the U.S. economy, economic theory suggests that the program should contribute to long-run economic growth, which is beneficial for all workers. For example, additional skilled labor could increase innovation and productivity, potentially leading to improved competitiveness of U.S. businesses, higher wages in aggregate, and lower prices on goods and services purchased by American consumers. On the other hand, certain groups of U.S. workers may experience lower wages and employment as a result of the inflow of H-1B workers. Furthermore, changes in the wages and employment of both U.S. workers and H-1B workers reflect both changes in the demand for labor and changes in the supply of labor, making it difficult to determine the effect that changes in the number of H-1B workers would have on outcomes for U.S. workers. Although demand for H-1B workers seemed to fluctuate in concert with broad economic indicators, causal relationships still cannot be inferred. As shown in figure 12, the number of submitted H-1B petitions has generally followed overall employment growth in the U.S. economy. This appears consistent with economic theory, which suggests that businesses require additional labor during periods of economic growth, so employers will likely submit more H-1B petitions during these periods. At the same time, wage rates and employment levels for U.S. workers generally rise during periods of economic growth. Therefore, the number of H-1B petitions tends to rise when wages and employment for U.S. workers are rising (although the number of approvals is limited by the H-1B cap) and to fall when wages and employment for U.S. workers are falling. However, this relationship does not reveal what the wage rates and employment rates of U.S. workers would have been in the absence of H-1B workers. Due to these complex economic relationships, coupled with the limitations in data on the total H-1B workforce discussed previously, we did not attempt to forecast the impacts of prospective changes in the H-1B cap on the U.S. labor force. While GAO did not attempt to forecast the impacts of prospective changes in the H-1B cap, we examined 10 years of retrospective data on the employment, unemployment, and earnings of U.S. workers in the three occupations that absorbed the largest proportion of H-1B workers relative to the stock of U.S. workers in those occupations. The three occupations with the highest number of H-1B approvals relative to the number of U.S. workers in the occupation were (1) systems analysts, programmers, and other computer-related workers; (2) electrical and electronics engineers; and (3) college and university educators. For example, among systems analysts, programmers, and other computer-related workers aged 18 to 50, the number of approved H-1B petitions (initial and extensions) was 10 percent of the total stock of U.S. citizen workers in private sector jobs in this occupation in calendar year 2008. Our analysis of these three occupations generally revealed a mixed earnings and employment picture for U.S. workers in professions absorbing H-1Bs. (See appendix III for additional details.) With respect to median earnings, we found that U.S. workers in all three occupations, in every year, had significantly higher median earnings than all professional U.S. workers.
With respect to real earnings growth, systems analysts, programmers, and other computer-related workers had significantly higher real earnings growth than all professional workers; in contrast, for electrical and electronics engineers, real earnings growth was not significantly different from that of professional workers, and for college and university educators, real earnings growth was relatively flat over the decade. Unemployment rates for both (1) systems analysts, programmers, and other computer-related workers and (2) electrical and electronics engineers were relatively cyclical; in contrast, the unemployment rate for college and university educators was somewhat less sensitive to business cycle fluctuations over the decade. Employment levels (i.e., the number of workers employed) for electrical and electronics engineers declined significantly over the decade; employment levels for systems analysts, programmers, and other computer-related workers were essentially unchanged; and employment levels for college and university educators grew significantly over the decade. To examine more closely whether H-1B workers are being paid salaries comparable to those of U.S. workers, we examined data on salaries for the three occupations that absorbed the largest proportion of H-1B workers relative to the stock of U.S. workers in 2008 and compared these data to the salaries listed by employers on H-1B petitions. A comparison of median annual salaries reveals that for systems analysts, programmers, and other computer-related workers—the largest of the three occupational categories we examined—H-1B workers tended to earn less than U.S. workers; however, some of the salary gap appears to be explained by differences in age, which may reflect differences in the extent of work experience. As shown in table 1, which summarizes the median reported earnings of H-1B and U.S. workers by age and occupation, among systems analysts, programmers, and other computer-related workers, differences in median reported earnings between H-1B workers aged 20 to 29 and U.S. workers of the same age were not statistically significant, and the same was true for workers aged 30 to 39; however, H-1B workers aged 40 to 50 had median reported earnings that were significantly lower than the median earnings of U.S. workers in this occupation. Among electrical and electronics engineers, we did not find significant differences in the median earnings of approved H-1B workers and U.S. workers, overall or within the age groups we examined. Among college and university educators, differences in reported earnings between H-1B workers and U.S. workers were not statistically significant except among younger age groups, in which the H-1B workers had higher reported earnings than U.S. workers in the same age category; however, we could not account for all factors that might affect salary levels. (See the discussion “Limitations of Wage Comparisons” in appendix I for more information.) For all groups, differences in other factors, such as skill level, might explain some of the remaining salary differences; however, a lack of data on these factors precludes our analysis of them. In addition, differences in factors such as geographic location, size of firm, and industry, as well as level of education, which may also affect salary differences, are not controlled for here due to data limitations. For example, if certain groups of workers are more heavily concentrated in high-cost parts of the country, this will be reflected in the median wage. (For additional analyses comparing U.S. workers with approved H-1B workers, see appendix II.)
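The report’s significance tests relied on survey standard-error methods described in appendix I. Purely as an illustration of the kind of median comparison described above, the sketch below bootstraps a confidence interval for the difference in group medians; the salary values are made up, and this is not the survey-based procedure actually used.

```python
# Illustrative only: bootstrap CI for the difference in median salaries
# between U.S. and H-1B workers in one age band. Data are hypothetical;
# this is not the CPS-based procedure the report used.
import random
import statistics

def median_diff_ci(a, b, reps=5000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for median(a) - median(b). If the
    interval straddles zero, the difference is not statistically
    significant at the (1 - alpha) level."""
    rng = random.Random(seed)
    diffs = sorted(
        statistics.median(rng.choices(a, k=len(a)))
        - statistics.median(rng.choices(b, k=len(b)))
        for _ in range(reps)
    )
    return diffs[int(reps * alpha / 2)], diffs[int(reps * (1 - alpha / 2)) - 1]

us_40_to_50 = [88_000, 95_000, 102_000, 110_000, 97_000, 105_000, 93_000]
h1b_40_to_50 = [78_000, 85_000, 90_000, 82_000, 88_000, 80_000, 86_000]
low, high = median_diff_ci(us_40_to_50, h1b_40_to_50)
print(f"95% CI for the median gap: ${low:,.0f} to ${high:,.0f}")
```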
In an attempt to better understand these results, we interviewed academic researchers and labor advocates who have studied the impact of H-1B workers on particular segments of the workforce. These experts and advocates provided examples of several specific segments of the workforce for which they believe the H-1B program has had negative impacts. Because H-1B workers tend to be younger (with less potential work experience) than their U.S. counterparts, who tend to be older (with more potential work experience), some labor advocates we spoke with argued that the H-1B program detrimentally affects older IT professionals. Several researchers and labor advocates have stated that technology companies seek to replace older American IT workers with cheaper, younger workers freshly supplied through the H-1B program in order to lower costs, and that IT companies have no incentive to retain older workers and retrain them in the latest skills, since the H-1B program provides ready access to young workers with cutting-edge training. While companies could use any young, skilled workers to lower their labor costs in this manner, advocates argue that the H-1B program facilitates the practice of displacing older IT workers because it provides an inflow of new workers in IT fields that is much larger than would otherwise be available to U.S. employers. The analysis presented here does not provide a test of this theory because it does not identify what the wages of older U.S. IT professionals would have been in the absence of the H-1B program, nor does it account for the myriad factors affecting wages, for which we lack data. Three researchers we spoke with expressed concern about the disincentives that U.S. students face in entering science, technology, engineering, and mathematics (STEM) fields. For example, one disincentive is the duration of postdoctoral positions. Data show that since the 1960s, postdoctoral positions—which are generally exempt from the H-1B cap—have increased in length, with the largest increase in the biological sciences. One researcher posited that the increasing length of postdoctoral positions, especially in the biomedical fields, is due in part to the presence of large numbers of foreign nationals who are willing to work in these low-paid positions for many years. For foreign nationals, these postdoctoral positions may offer an entrée into the U.S. labor market, and the salaries may also compare well to the opportunities available in their home countries. Testimonial evidence also suggests that U.S. software programmers, particularly those seeking IT consulting jobs, may have been detrimentally affected by the significant presence of H-1B workers and, in particular, by the presence of certain staffing companies. One labor advocate we spoke with stated that staffing companies that farm out H-1B IT workers to other companies are abundant in the Northeast region of the country, and their presence has dramatically reduced the availability of jobs for U.S. software programmers in that region. Labor investigators we spoke with also noted the concentration of H-1Bs in the IT consulting industry, particularly in the Northeast region.
These investigators noted that the bulk of the complaints they receive in this region pertain to staffing companies. In addition, Labor investigators told us that some staffing companies at times will pass an open position among themselves, rather than making a job opening known to the workforce at large. For example, if one staffing company is contacted with a request for a software programmer and does not have a worker with the appropriate skill set, this firm may—unbeknownst to the firm that is seeking the worker—“subcontract” the job out to a second staffing company that does have a worker with the appropriate skill set, and the first staffing company might take a cut of the wages received. Responsibility for the protection of workers with regard to the H-1B visa program is shared by four departments and their respective divisions. By virtue of their specific and often compartmentalized responsibilities, however, there is only nominal sharing of the kind of information that would allow for better employer screening or more active and targeted pursuit of program abuses. Once a visa holder is employed, divisions within Labor, Homeland Security, and Justice may pursue enforcement of the H-1B program requirements in accordance with their broader responsibilities for enforcing labor or immigration laws. However, their work is largely complaint-driven, and information sharing among them, or with offices that must screen H-1B applications, is also limited. Table 2 summarizes agency oversight responsibilities and limitations, which are further elaborated in the following pages. Labor’s Employment and Training Administration. While Employment and Training reviews the LCA form submitted by a petitioning employer, this review is limited by law to looking for missing information and obvious inaccuracies. For example, an employer may have failed to check all the boxes attesting to his or her willingness to comply with program requirements. While Labor’s review catches some administrative errors made by applicants, it does not check the validity of the information on the LCA. Consequently, the review is not intended to identify potential employer violations, such as work sites that do not exist or lack of compliance with the attestations made on the LCA. This review is primarily conducted electronically, with officials reviewing the information flagged by the electronic system as problematic. Any greater scrutiny by Employment and Training is limited by law. Homeland Security’s U.S. Citizenship and Immigration Services. Adjudicators with USCIS conduct a review of both the employer’s application and the foreign worker as a job candidate. Specifically, Homeland Security reviews two documents for consistency: the LCA submitted originally to Labor for review, and the I-129 petition, which is submitted by businesses to Homeland Security and generally contains correlating information. In addition to reviewing for consistency, USCIS adjudicators explained that they take steps to verify the facts provided for both the employer and the prospective worker—for example, by requesting additional information from the employer. However, USCIS adjudicators do not receive information regarding suspicious or problematic employers that Labor analysts may have become aware of during their review of the LCA, because Labor does not have a formal mechanism for sharing such information with Homeland Security.
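The consistency review described above can be thought of as a field-by-field comparison of the two filings. The sketch below is purely illustrative: the field names and matching rules are assumptions, since USCIS’s actual adjudication criteria and record layouts are not described in this report.

```python
# Hypothetical sketch of an LCA/I-129 consistency check; real field
# names and matching rules are assumptions, not USCIS's actual logic.
FIELDS_TO_MATCH = ("employer_name", "job_title", "worksite_city", "wage_offered")

def consistency_flags(lca: dict, i129: dict) -> list[str]:
    """Return the fields on which the two filings disagree."""
    return [f for f in FIELDS_TO_MATCH if lca.get(f) != i129.get(f)]

lca = {"employer_name": "Acme Corp", "job_title": "Software Engineer",
       "worksite_city": "Boston", "wage_offered": 85_000}
i129 = {"employer_name": "Acme Corp", "job_title": "Software Engineer",
        "worksite_city": "Newark", "wage_offered": 85_000}

if flags := consistency_flags(lca, i129):
    print("Request additional evidence; mismatched fields:", flags)
```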
Department of State. State plays a role in the H-1B program by interviewing and potentially issuing visas to H-1B candidates living abroad whose petitions were approved by Homeland Security. State conducts its own review of the H-1B petitioner and documentation pertaining to his or her employer by comparing information gleaned from interviews against basic information in the LCA and I-129 petition, such as the name of the petitioner and the foreign worker. However, official department guidance instructs consular officials not to question the petition approvals made by Homeland Security when making their decision on the visa application unless they have obtained new evidence. State guidance stipulates that a petition can be sent back only when a clear error was committed in adjudicating the I-129 petition or new evidence is submitted that contradicts Homeland Security adjudicators’ decisions. Officials noted that there is a high threshold for the identification of a clear error and that this rationale is almost never used. State did, however, recommend 1,744 revocations in fiscal year 2009 based on new evidence. As a general rule, State consular officers treat information provided to and reviewed by Homeland Security on business establishments, relationships, and individual qualifications as bona fide. Labor’s Wage and Hour Division. Labor’s Wage and Hour Division investigates H-1B complaints primarily related to improper wage payments and failures to notify workers that a company intends to hire an H-1B worker. However, its ability to enforce worker protections with regard to the H-1B program is limited. Although the Secretary of Labor has authority to initiate investigations, Wage and Hour reported that it had never initiated an investigation under this authority. Officials explained that they rarely proactively investigate companies for H-1B violations and that they may generally act only on formal complaints. Moreover, by law, investigations can be initiated only from information obtained from an aggrieved or credible party outside of Labor. Further, Labor officials told us they have interpreted this restriction to include information from Homeland Security as well. As a result, Labor’s Wage and Hour Division could not initiate a complaint based on any information it might receive from Homeland Security, such as information on potential abuses that Homeland Security might glean from its review of the I-129 petition. In a prior report, GAO suggested that Congress remove these legal restrictions, but Congress has yet to take action. While the majority of complaints received by Labor have been reported by H-1B workers, very few complaints are filed. In 2009, only 664 of the 51,980 companies approved to hire new or extending H-1B workers had complaints against them. According to agency officials, H-1B workers are likely to be reluctant to file complaints against employers for fear that the company might be debarred, which in turn could result in the complainant and fellow H-1B workers at the company losing their jobs and potentially having to leave the United States. Further, investigators told us that even after an H-1B worker files a complaint, the worker may not cooperate in the investigation for fear of similar repercussions. In these instances, investigators are sometimes unable to complete the investigation.
The relatively small number of H-1B-related complaints in 2009 nevertheless resulted in Labor requiring companies to pay over $10 million in unpaid wages to 1,202 workers and $739,929 in civil monetary penalties (see table 3). Labor’s ability to enforce worker protections under the program is also hampered by obstacles cited by officials at both headquarters and in Wage and Hour’s Northeast Regional Office, which receives the greatest number of H-1B complaints. First, with the introduction in June 2009 of the automated “iCERT” system maintained by Employment and Training, Wage and Hour officials stated that investigators can no longer access the database of LCAs. Prior access had allowed investigators to quickly assess the accuracy of the attestations made by an employer. Without this access, officials stated that they must request the LCA from the employer, which can increase the time and resources required to conduct an investigation. Employment and Training reported that improved access to the iCERT system is under development and planned for implementation in April 2011. Second, Wage and Hour has limited ability to persuade employers to cooperate with investigations. The fine it can levy against employers for not cooperating is far less than the potential penalty for a finding of noncompliance with the terms of the program. Investigators noted that when employers do not cooperate, it can take months to obtain the requested paperwork, which essentially stalls the time-sensitive investigation. Third, Wage and Hour lacks subpoena authority to obtain such records directly from the employer. In contrast, Wage and Hour, as well as Employment and Training, has subpoena power for other labor protection programs it administers, such as under the Fair Labor Standards Act and the Migrant and Seasonal Agricultural Worker Protection Act. According to Wage and Hour officials, subpoena power increases cooperation from companies and is the most effective way to speed up investigations, since companies could face harsh penalties, such as debarment, for not cooperating. The Department of Justice also has subpoena power for its investigations related to the H-1B program. Justice officials we spoke with noted that while they rarely have to invoke subpoena power in their investigations, employers are generally aware of it and are therefore more likely to comply with Justice’s requests for records. Homeland Security’s Directorate of Fraud Detection and National Security. In its capacity to investigate immigration fraud, FDNS has recently introduced some proactive enforcement for the H-1B program through several random investigations into temporary visa programs. Through a Benefit Fraud and Compliance Assessment (BFCA), Homeland Security examined 246 H-1B petitions for possible violations. The BFCA found that 21 percent of the H-1B petitions involved fraud or technical violations. Examples of fraud include cases in which businesses listed on the LCA and I-129 did not exist, educational degrees were found to be fraudulent, signatures were forged on supporting documents, and H-1B workers were performing duties or receiving payment significantly different from those described in the applications. As a result of the high rate of fraud identified in the BFCA, Homeland Security launched what it calls its Administrative Site Visit and Verification Program—an ongoing initiative to visit work sites of H-1B-hiring companies considered to be at a higher risk for abusing the program, according to officials.
During fiscal year 2010, USCIS oversaw 14,433 H-1B site inspections, which resulted in 1,176 adverse actions. Such actions can include the revocation or denial of benefits and may involve referral of a case for criminal investigation. FDNS is continuing to evaluate this initiative and refine the indicators it uses to identify groups of high-risk companies. Department of Justice’s Office of Special Counsel. Justice’s Office of Special Counsel also conducts investigations, but its enforcement abilities are likewise limited. Justice’s jurisdiction limits it to pursuing charges related to unfair immigration-related employment practices, such as discriminatory hiring or firing. For example, such charges generally allege that an H-1B worker was hired in place of a U.S. worker or that a company is using discriminatory hiring practices that put U.S. workers at a disadvantage, such as explicitly advertising for an H-1B worker. Justice receives and investigates few charges related to the H-1B program (at most 70 per year over the last 5 years, with the number decreasing) and reported that its ability to enforce the law depends on the willingness and ability of U.S. workers to complain. Justice officials explained that the low number of charges they receive is likely because U.S. workers are often unaware that an employer intends to or did hire an H-1B worker. For example, although employers are supposed to post a public statement declaring their intention to hire an H-1B worker, the statement might be posted in a lunchroom, where it may or may not be seen by affected employees. Further, Labor investigators reported that many of the companies they investigate do not comply with requirements to post notice. In contrast, Labor requires applicants for other temporary visa programs, such as the H-2A program, to display such postings on a centralized Web site managed by Labor. Justice officials noted that the lack of a centralized Web site makes it difficult for U.S. workers to learn that U.S. employers are hiring H-1B workers and for Justice to monitor the compliance of companies with antidiscrimination law, especially those operating offshore. Justice informally shares information on a periodic basis with Labor and Homeland Security when it receives information about potential abuse that does not fall under its jurisdiction. However, there is no formal mechanism in place to exchange information with these other agencies, although officials explained that some attempts to arrange information-sharing agreements between Justice and these agencies have been made in the past. When Justice has referred cases that fell within Labor’s jurisdiction, Justice officials told us they were not generally made aware of the outcomes of these referrals. Although Justice accepts referrals from other agencies, officials reported receiving only one referral from Labor related to the H-1B program. Department of State. If or when State officials learn of an employer potentially violating program requirements, State, unlike the other agencies, may act as an aggrieved party on behalf of an H-1B worker and file a formal complaint with Wage and Hour regarding the business. For example, agency officials noted that during consular interviews with spouses of H-1B workers attempting to enter the United States, the consular official may uncover potential abuses by the H-1B worker’s employer and will then file a complaint with Wage and Hour.
However, such incidents are limited in number, averaging 160 per year since 2005. The laws governing the H-1B program do not include explicit provisions to hold employers that obtained an H-1B worker through a staffing company accountable to the program requirements that apply to the employer that petitioned for H-1B visas on behalf of foreign workers. As previously noted, some staffing companies complete and submit to Labor an LCA as the employing company but then contract the H-1B worker out to another employer. At times, that employer may contract the H-1B worker out again, creating multiple middlemen, according to officials (see fig. 13). Regardless of where the H-1B worker is ultimately employed, Wage and Hour officials told us that only the staffing company, as the employer that petitioned for the visa and made the attestations to comply, is technically accountable and ultimately liable for complying with program requirements. They explained that the contractual relationship itself does not transfer the contractor’s worker protection obligations to any subsequent employers. Especially in instances in which multiple middlemen are involved, it is difficult to hold the staffing companies themselves accountable for the actions of an employer up to three or four employers removed. Wage and Hour investigators reported that a large number of the complaints they receive relate to the activities of staffing companies. In fact, investigators from the Northeast region—the region that receives the highest number of H-1B complaints (see fig. 14)—said that nearly all of the complaints they receive involve staffing companies and that the number of complaints is growing. However, the precise number of complaints related to staffing companies is not known because Labor does not track this information in its complaint data. The most frequent type of violation resulting from a complaint is that the employer failed to pay the required wage rate. Other frequent violations identified as a result of complaints include the failure of the employer to post notice of the intent to hire an H-1B worker and the failure to comply with the attestations made in the LCA (see table 4). Complaints received by Wage and Hour pertaining to staffing companies generally relate to the payment of H-1B workers, according to investigators. Officials told us that the most common complaint associated with staffing companies pertained to unpaid “benching”—when a staffing company does not have a job placement for an H-1B worker and does not pay the worker. In these instances, a staffing company sometimes asks the H-1B worker to conduct his or her own job search or to take unpaid leave until the company identifies a client. For example, one investigator described how one employer maintained a house for its unemployed H-1B workers and instructed them to conduct their own Internet searches for a job placement. In another case, Wage and Hour found that a staffing company forced employees to go on leave when it did not have jobs for them and boarded them in a guesthouse while they were unemployed. At times, employees are unaware of their right to receive payment during these “benched” periods, which, according to one complainant, lasted as long as 13 months. Investigators said that the problem of unpaid benching has become more severe with the economic downturn, as staffing companies have fewer jobs in which to place H-1B workers.
Instead, they may “stockpile” the workers in anticipation of an economic recovery. In investigating complaints related to staffing companies, investigators often identify additional violations of the attestations on the LCA. For example, Labor officials noted that in 90 percent of their investigations related to staffing companies, the hiring company did not post notice of the filing of the LCA indicating the intention to hire an H-1B worker. In some instances, according to these officials, the subsequent employer may not even know that the contracted worker is an H-1B worker, much less be aware of any requirements associated with the visa—such as the requirement for employers to post notice of their filing of an LCA. In addition, in some instances workers procured by staffing companies were either not working for the employer listed or not performing the duties described on the LCA. Some attempts have been made to control the use of staffing companies in other visa programs and in the H-1B program. For example, the L-1 Visa (Intracompany Transferee) Reform Act of 2004 essentially barred staffing companies—whose main revenue source is providing labor for hire—from receiving L-1 visas. However, according to experts we interviewed, some staffing companies avoided this legal restriction by differentiating themselves from staffing companies, describing themselves instead as “IT solutions” companies. In addition to providing labor for hire, such companies sell the development of a product and therefore are not barred from the use of L-1 visas. Additionally, in January 2010, Homeland Security issued a memo on determining when there is a valid employer-employee relationship between a staffing company and an H-1B worker for whom it has obtained an H-1B visa. Whether there is such a relationship depends largely on the right of the staffing company to control the manner and means by which the H-1B nonimmigrant works. However, officials indicated that it is too early to know whether the memo has improved compliance with program requirements. Changes to the H-1B program over time have weakened U.S. worker protections related to (1) the temporary nature of the program, (2) the pool of workers eligible for H-1B status, and (3) the cap. Since the 1990s, the law has allowed H-1B workers to pursue permanent residency in the United States and to remain in the country for an unlimited period of time while their permanent residency application is pending. The Immigration Act of 1990 removed the requirement that H-1B visa applicants have a residence in a foreign country that they had no intention of abandoning. In addition, H-1B workers became able to apply for permanent status and eventually to obtain an unlimited number of annual extensions, as long as an application for permanent residency (such as a labor certification) was filed at least 1 year prior to submitting the final application for an extension of the H-1B visa. As a result of these legislative changes, the number of H-1B workers in the workforce has likely increased. As noted elsewhere in this report, many H-1B workers apply for green cards. In fact, among a cohort GAO reviewed, at least 18 percent applied for green cards within 6 years of the start date of their H-1B visas. Although employment-based permanent residency applications take a number of years to decide, the amount of time varies by home country, with approvals of employment-based permanent visas for skilled-worker categories taking the longest for citizens of China, India, and Mexico.
An H-1B worker from one of these countries could remain in the United States for over a decade before obtaining a green card. Legislative changes have broadened the skill requirements for H-1B workers. The original H-visa program, established under the Immigration and Nationality Act in 1952, authorized visas for aliens with a residence in a foreign country that the alien had no intention of abandoning, who were of distinguished merit and ability, and who were coming to the United States to perform temporary service of an exceptional nature requiring such merit and ability. In 1990, however, besides removing the foreign residence requirement, Congress replaced the original language with language authorizing H-1B visas for aliens coming temporarily to the United States to perform services in a “specialty occupation.” A specialty occupation was defined as one that requires a theoretical and practical application of a body of highly specialized knowledge and, at a minimum, a bachelor’s or higher degree in the specific specialty (see app. V). This increased the pool of eligible workers to include a wider range of skill levels. Labor’s application data show that H-1B workers are often not paid wages associated with the highest skills in their fields. Specifically, these data show that over half (54 percent) of the workers with approved LCAs from June 2009 through July 2010 were categorized as filling entry-level positions and were paid at the lowest pay grade allowed under the prevailing wage levels (see table 5). This pay grade is designated for jobs requiring a basic understanding of duties and the ability to perform routine tasks that call for limited judgment. In comparison, 6 percent of approved applicants whose wages were reported on the LCA were paid within the top pay grade, which is designated for workers with sufficient experience and a high level of independent judgment. However, such data do not, by themselves, indicate whether H-1B workers are generally less skilled than their U.S. counterparts or whether they are younger or more likely to accept lower wages.
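A tabulation of the kind underlying table 5 reduces to counting certified applications by prevailing wage level. The sketch below is illustrative only; the record layout and level labels are assumptions, not Labor’s actual iCERT schema.

```python
# Hypothetical sketch: share of certified LCAs at each prevailing wage
# level. Field names are assumptions about the disclosure-file layout.
from collections import Counter

lca_records = [
    {"case": "I-200-001", "wage_level": "Level I (entry)"},
    {"case": "I-200-002", "wage_level": "Level I (entry)"},
    {"case": "I-200-003", "wage_level": "Level II (qualified)"},
    {"case": "I-200-004", "wage_level": "Level IV (fully competent)"},
]

counts = Counter(rec["wage_level"] for rec in lca_records)
total = sum(counts.values())
for level, n in counts.most_common():
    print(f"{level}: {n / total:.0%} of applications")
```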
In contrast to the H-1B visa program, temporary visa programs in other countries take steps to identify foreign workers with skills that are in short supply. For example, Australia has a system in which applicants receive points for certain types of qualifications that are in short supply in the Australian economy; those with the highest number of points are granted visas to enter. The United Kingdom uses a committee of five independent economists to identify shortages in particular occupations. Canada has a pilot temporary program under way that also attempts to identify specific jobs where shortages exist and skills are needed. The U.S. permanent visa program also does more to ensure that the skills of the foreign worker are not readily available domestically by, in most cases, requiring the employer who sponsors the green card applicant to attest that it was not able to find a comparably skilled U.S. applicant. While providing employers greater access to foreign labor, exemptions to the H-1B cap and the existence of other visa programs for temporary workers have increased the number of temporary foreign workers far beyond the cap. With universities, nonprofit organizations that conduct research, and governmental research organizations able to hire an unlimited number of H-1B workers through exemptions, many more H-1B workers have entered the United States each year than the annual numeric limit of 65,000 imposed by the cap. For example, 87,519 workers (initial and extensions) in 2009 were approved for visas to work for 6,034 cap-exempt companies. In addition, company executives reported that as an alternative to the H-1B visa, companies use the L-1 visa, which allows foreign workers to relocate to a company’s U.S. office after having worked abroad for the company for at least 1 year. As previously noted, between 2000 and 2008, the number of foreign workers issued L-1 visas—which are not subject to a cap—increased by more than 50 percent. As noted earlier, Homeland Security currently does not have the capability to determine the cumulative H-1B workforce, such that the effect on U.S. workers can be assessed. Whether Homeland Security’s Transformation Program will address this problem remains to be seen. In creating the H-1B visa program, Congress sought to strike a difficult balance between satisfying the needs of a wide variety of businesses for high-skilled foreign labor and protecting access to jobs and appropriate compensation for U.S. workers. The initial temporary nature of the program and the annual cap were key tools to protect U.S. workers. Over the program’s history, however, Congress has made numerous changes, including broadening the eligibility requirements, allowing for exemptions to the cap, and allowing H-1B workers to pursue long-term residency. The result is that, today, the number of H-1B workers approved to enter the United States each year greatly exceeds the numeric limit established by the cap, and the majority of applicants are categorized as entry-level. Moreover, a substantial proportion appears to remain in the country beyond the 6-year visa period in pursuit of permanent residency. Homeland Security, faced with challenges in administering a program managed in part by four different federal agencies, has difficulty tracking the cap and cannot readily determine how many H-1B workers are currently in the United States or how many stay after their visas expire. Lack of information on the total H-1B workforce makes it impossible to understand the long-term impact of the program and leaves the program vulnerable to fraud and abuse—a known issue in this program. Restrictions on agencies’ abilities to enforce program requirements and coordinate with one another widen the risk of fraud and abuse and undermine efforts to enforce worker protections. Restrictions on sharing and leveraging information between and within federal agencies likely inhibit the pursuit of worker allegations of abuse and allow some labor abuses to go undetected. The involvement of staffing companies, whose share of H-1B workers is not precisely known but is likely not trivial, further weakens enforcement efforts because the end user of the H-1B worker is not liable for complying with labor protection requirements. At the same time, many members of the business community we interviewed cited their own frustrations with the ability of this program to serve their needs for high-skilled labor. The one-size-fits-all application process wastes business and government resources in compiling and reviewing paperwork on well-vetted companies with years of experience in the program. The lottery system does not permit companies to prioritize their candidates, and as a result, coveted H-1B slots may not be allocated to companies’ top candidates.
The annual application cycle hinders flexibility in hiring, prompting some companies to petition prematurely for candidates instead of holding out for better ones in years when the cap is hit early. Moreover, start-up companies, which some argue are the backbone of innovation in the United States, cannot use the H-1B visa for their employees until the company is fully established. In an era when companies are competing in a global market for cutting-edge skills, the H-1B program plays an important role. As currently structured, however, the program may not be used to its full potential and may be detrimental in some cases. Some improvements can be made through executive actions by the agencies overseeing the program. However, balancing the needs of the economy for high-skilled foreign labor against protecting the employment and wages of current U.S. workers is a policy matter for Congress. Certainly there are no easy solutions, but the data we present suggest that the program may continue to fall short of these goals and raise difficult policy questions. Such questions include the appropriateness of the current qualifications for H-1B workers, the use of H-1B visas as a bridge to permanent residence, the involvement of staffing companies in the H-1B program, and exemptions from the cap. As Congress considers immigration reform in consultation with diverse stakeholders and experts, and as Homeland Security moves forward with its modernization efforts, this is an opportune time for Congress to review the goals and purpose of the H-1B program and reexamine its key provisions. To ensure that the H-1B program continues to meet the needs of businesses in a global economy while maintaining a balance of protections for U.S. workers, Congress may wish to consider reviewing the merits and shortcomings of key program provisions and making appropriate changes as needed. Such a review may include, but would not necessarily be limited to, the qualifications required for workers eligible under the H-1B program, exemptions from the cap, the appropriateness of H-1B hiring by staffing companies, the level of the cap, and the role the program should play in the U.S. immigration system in relation to permanent residency. To reduce duplication and fragmentation in the administration and oversight of the H-1B application process, consistent with past GAO matters for congressional consideration, Congress may wish to consider eliminating the requirement that employers first submit a Labor Condition Application (LCA) to the Department of Labor for certification and instead requiring that employers submit this application along with the I-129 petition to the Department of Homeland Security’s U.S. Citizenship and Immigration Services for review. To improve the Department of Labor’s ability to investigate and enforce employer compliance with H-1B program requirements, Congress may wish to consider granting the department subpoena power to obtain employer records during investigations under the H-1B program. To help ensure the full protection of H-1B workers employed through staffing companies, Congress may wish to consider holding the employer where an H-1B visa holder performs work accountable for meeting program requirements to the same extent as the employer that submitted the LCA form. Based on our review, we are making four recommendations.
We are making the following two recommendations to the Secretary of Homeland Security: To help ensure that the number of new H-1B workers who are subject to the cap—both those entering the United States and those changing to H-1B status within the United States—does not exceed the cap each year, U.S. Citizenship and Immigration Services should take steps to improve its tracking of the number of approved H-1B applications and the number of issued visas under the cap by fully leveraging the transformation effort currently under way, which involves the adoption of an electronic petition processing system that will be linked to the Department of State’s tracking system. Such steps should ensure that linkages to the Department of State’s tracking system will provide Homeland Security with timely access to data on visa issuances and that mechanisms for tracking petitions and visas against the cap are incorporated into U.S. Citizenship and Immigration Services’ business rules to be developed for the new electronic petition system. To address business concerns without undermining program integrity, U.S. Citizenship and Immigration Services should, to the extent permitted by its existing statutory authority, explore options for increasing the flexibility of the application process for H-1B employers, such as allowing employers to rank their applications for visa candidates so that they can hire the best-qualified workers for the jobs in highest need; distributing the applications granted under the annual cap in allotments throughout the year (e.g., quarterly); and establishing a system whereby businesses with a strong track record of compliance with H-1B regulations may use a streamlined application process. We are making the following two recommendations to the Secretary of Labor: To improve the transparency and oversight of the posting requirement on the Labor Condition Application (LCA), as part of its current oversight role, the Employment and Training Administration should develop and maintain a centralized Web site, accessible to the public, where businesses must post notice of their intent to hire H-1B workers. Such notices should continue to specify the job category and worksite location noted on the LCA, as required by statute for current noncentralized postings. To improve the efficiency and effectiveness of its investigations of employer compliance with H-1B requirements, the Employment and Training Administration should provide Labor’s Wage and Hour Division searchable access to the LCA database. The Departments of Homeland Security, Justice, Labor, and State were provided a draft of this report for review and comment. The Departments of Homeland Security and Justice provided written responses to one or more of our recommendations, which appear in appendixes VI and VII of this report. Labor and State did not provide written responses to our recommendations. In addition, Homeland Security, Justice, and Labor provided technical comments, which have been incorporated into the report where appropriate. In brief, the Department of Justice expressed support for our recommendation that Labor develop and maintain a Web site where businesses post notice of their intent to hire H-1B workers. In addition, Justice offered two further recommendations that build on our findings regarding the lack of a labor market test for most H-1B employers and the limited use of its complaint process by U.S. workers. However, Homeland Security did not agree with the two recommendations we made pertaining to Homeland Security’s U.S.
Citizenship and Immigration Services, nor did it agree with one matter for congressional consideration. The recommendation in our draft report on improving H-1B cap management emphasized that Homeland Security should leverage its transformation effort by reaching an agreement with State to ensure that, by linking data systems, it would have real-time information on the number of visas approved under the cap. In response, Homeland Security cited as evidence of its intentions the work already under way to develop an electronic exchange of visa and immigration data with State. However, in our review of the department’s memorandum of agreement and letter of intent with State that discuss such exchanges, we did not find specific references to improving cap management with State’s visa data. Further, Homeland Security added that the data to be exchanged with State may only slightly improve cap management because State’s data (1) do not include individuals already in the United States who are seeking to change their visa status and (2) will be too old to assist Homeland Security with cap management, since State typically issues visas to individuals residing outside of the United States months after Homeland Security approves their petitions. We understand that, for individuals already residing in the United States, Homeland Security does not depend on State but has its own data on changes in visa status for approved H-1B workers. We also acknowledge that, for individuals residing outside the United States, there is some lapse in time between Homeland Security’s approval of an H-1B petition and State’s issuance of (or decision not to issue) a visa. Nevertheless, we maintain that possessing timely and accurate information on petitions and visas that count against the cap for all individuals—both within and outside the United States—could provide a more reliable basis for ongoing monitoring with respect to the annual visa cap. Improved tracking would in turn provide Homeland Security the information it needs to reduce the potential for exceeding the visa cap and ensure that, in high-demand years, only 65,000 visas are issued. In response to Homeland Security’s comments, we clarified our recommendation with respect to the steps it should take, including the importance of incorporating better tracking mechanisms in the business rules to be developed for the new electronic petition system. With regard to our recommendation that U.S. Citizenship and Immigration Services explore options for increasing the flexibility of the application process for H-1B employers within its statutory authority, Homeland Security raised several concerns about the feasibility of our suggested options and noted one initiative under way that may expedite the application process for employers. We continue to believe that additional efforts are warranted to more fully explore the potential benefits and costs of these options. Homeland Security said it believes that current law does not allow the department to exempt petitioners with track records of H-1B compliance from evidentiary requirements. However, we believe there may be additional opportunities to streamline the application process for businesses by not requiring them to resubmit evidence that they have already provided, without exempting petitioners from evidentiary requirements.
Homeland Security noted that its own initiative—the Validation Instrument for Business Enterprises (VIBE) system—is intended to reduce the need for petitioners to submit certain documentation by providing the department with the means to verify petitioners’ information through an independent source, but it acknowledged in its technical comments that the system will not necessarily reduce the burden of providing supporting documentation for petitioners in the immediate future. Homeland Security also noted that implementing a beneficiary ranking process would be extremely complicated and resource intensive and would decrease flexibility for employers. We continue to believe that such obstacles could be surmounted through technology. For example, petitioner-level electronic accounts—as planned in the electronic petition system slated for 2012—could allow employers to manage, and possibly change, their rankings without necessarily decreasing their flexibility. Homeland Security also stated that implementing a quarterly cap allocation is neither warranted nor feasible. Again, we believe that distributing the annual cap in allotments throughout the year might be feasible with an electronic petition system. For example, quarterly allocations of visas could be administered by creating an electronic queue whereby petitions that were not selected in one lottery round would have priority in the next. An automated cap management system that combines electronic tracking and queuing might reduce, for federal managers themselves, the level of complexity involved in managing the program.
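The report does not prescribe a particular design, but the queuing idea sketched above can be made concrete. The following is a minimal simulation under assumed parameters: an even four-way split of the 65,000 cap, random selection among each quarter's new filings, and strict carryover priority for prior rounds' losers.

```python
# Minimal sketch of a quarterly cap allocation with a carryover queue:
# petitions not selected in one round keep priority in the next.
# Parameters are illustrative, not proposed policy values.
import random

def allocate_year(new_petitions_by_quarter, annual_cap=65_000, seed=1):
    rng = random.Random(seed)
    per_quarter = annual_cap // 4
    queue = []                       # prior-round losers, in priority order
    selected = []
    for quarter, new in enumerate(new_petitions_by_quarter, start=1):
        slots = per_quarter if quarter < 4 else annual_cap - len(selected)
        entrants = list(new)
        rng.shuffle(entrants)        # lottery among this quarter's filings
        queue.extend(entrants)       # carryovers stay ahead of new entrants
        selected.extend(queue[:slots])
        queue = queue[slots:]        # unselected petitions wait, with priority
    return selected, queue

selected, waitlisted = allocate_year([
    [f"Q1-{i}" for i in range(30_000)],
    [f"Q2-{i}" for i in range(20_000)],
    [f"Q3-{i}" for i in range(10_000)],
    [f"Q4-{i}" for i in range(10_000)],
])
print(len(selected), "selected;", len(waitlisted), "carried past year end")
```

One property of this sketch is that a petition filed earlier in the year is never disadvantaged relative to the same petition filed later, so employers would gain nothing by withdrawing and refiling in a later quarter.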
Homeland Security also expressed concern that a ranking system might encourage petitioners to over-submit petitions in an attempt to increase their chances of obtaining H-1B workers. The department is, nevertheless, considering a similar option that would allow petitioners to request visa slots prior to submitting an LCA. Whether petitioners would over-submit in response to either option is a matter that we believe should be further studied or tested. In summary, we believe that Homeland Security’s ongoing Transformation Program—currently in its development phase—affords the opportunity to explore creative and thoughtful solutions to the challenges of administering the H-1B program. Such an examination could weigh the potential costs and risks associated with the options we outlined against their potential benefits in savings for taxpayers and petitioners—with the ultimate goal of supporting legitimate business needs while not compromising worker protections. Homeland Security disagreed with our asking Congress to consider transferring the review of the LCA from Labor to U.S. Citizenship and Immigration Services, citing its internal lack of expertise in wage and labor determinations and Labor’s role in enforcing labor violations. While we recognize Labor’s expertise, Labor officials told us that, unless its legal authority is expanded to allow for verification of employer attestations, the Employment and Training Administration’s review can only ensure that employers have completed the form’s questions and check-box questionnaire and that there are no obvious inaccuracies. We maintain that such a limited review could be readily subsumed in Homeland Security’s petition adjudication process because it, too, reviews the LCA. We are not recommending that Labor’s enforcement role, carried out by its Wage and Hour Division, be transferred to Homeland Security. Further, we do not believe that Labor’s enforcement efforts would be compromised by transferring the LCA approval process to Homeland Security, especially in light of the challenges Wage and Hour faces in gaining access to LCA information from the Employment and Training Administration, as identified in this report. We are sending copies of this report to the Secretaries of Homeland Security, Labor, and State, the Attorney General, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. We conducted our work in response to a House report that accompanied the Consolidated Security, Disaster Assistance, and Continuing Appropriations Act, 2009. The House report directed GAO to examine the impact of the H-1B visa cap on the ability of domestic companies to develop modern technology and perform innovative scientific research and development (R&D), while ensuring that U.S. workers are not unfairly displaced or otherwise disadvantaged by H-1B visa holders. To do this, in agreement with cognizant congressional staff, GAO addressed five objectives. Specifically, with respect to H-1B employers, we examined what is known about (1) their demand for H-1B workers and (2) how the H-1B cap affects their costs, R&D, and offshoring decisions. With respect to H-1B and U.S. workers, we examined what is known about (3) H-1B worker characteristics, (4) how raising the H-1B cap might affect the employment and wages of U.S. workers, and (5) how well H-1B program requirements ensure that U.S. workers are not displaced or disadvantaged by the program. This appendix provides a detailed account of the data sources used to answer these questions, the analyses we conducted, and any limitations we encountered. The appendix is organized into four sections. Section 1 describes the key information sources we used for the report. Section 2 describes our methods for comparing the characteristics and wages of U.S. workers with those of approved H-1B workers (which are presented in the report and in appendix II). Section 3 describes our methods for analyzing employment levels, unemployment rates, and wages of U.S. workers in those occupations with the highest concentration of approved H-1B workers. Section 4 describes our methods for analyzing the long-term immigration outcomes of a cohort of approved H-1B workers. Our information sources included electronic data from datasets administered by the Departments of Labor, Homeland Security, Justice, and State, and by private vendors. Details on the scope and purpose of these data are described below. For each of these datasets, we conducted a data reliability assessment of selected variables by conducting electronic data tests for completeness and accuracy, reviewing documentation on the dataset, interviewing knowledgeable officials about how the data are collected and maintained and their appropriate uses, or completing all of these steps. For the purposes of our analysis, we found the variables that we reported on from these datasets to be sufficiently reliable.
In addition to electronic data, our information sources included interviews with a nonprobability sample of H-1B employers, site visits, and reviews of agency documentation and pertinent literature. Details on the scope and purpose of these information sources are also described below. To obtain information on the characteristics of employers requesting H-1B workers and the positions they sought to fill over the past decade, we analyzed two administrative datasets containing information from the Labor Condition Application (LCA) filed by prospective H-1B employers with the Department of Labor (Labor). First, we analyzed the Efile H-1B Disclosure Data managed by Labor’s Employment and Training Administration (Employment and Training). These data included all the applications filed electronically from 2002 through 2009. We analyzed the data from a total of 2,451,785 applications to determine (1) the number of unique companies that submitted applications each year; (2) the total number of H-1B workers these companies requested each year; (3) the number of applications that were certified or denied; and (4) the number of companies that were either H-1B dependent (i.e., those with 15 percent or more of their workforce composed of H-1B workers) or willful violators. Second, we obtained and analyzed more recent data on LCAs that were filed from June 2009 through July 2010, which were processed through Labor’s new iCERT system. Unlike Labor’s data from previous years, the iCERT data contained detailed information on the prospective H-1B worker’s skill level, which is specified by the employer on the LCA. We received data including the skill level listed on 258,847 LCAs that were filed between June 2009 and July 2010. The iCERT data also contain a variable indicating whether the petitioning employer was H-1B dependent. We obtained and tabulated this variable for the top 150 H-1B hiring companies (which we defined as those requesting the highest number of H-1Bs in 2009).
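The four tabulations listed above reduce to a single pass of aggregation over the application records. A schematic version follows; all field names are assumptions about the disclosure-file layout, not Labor’s actual schema.

```python
# Schematic aggregation over LCA application records; all field names
# are assumptions about the disclosure-file layout, not Labor's schema.
from collections import defaultdict

def summarize_lcas(records):
    """Per-fiscal-year tallies matching items (1)-(4) above."""
    years = defaultdict(lambda: {"employers": set(), "flagged": set(),
                                 "workers": 0, "certified": 0, "denied": 0})
    for rec in records:
        y = years[rec["fiscal_year"]]
        y["employers"].add(rec["employer_name"])          # item (1)
        y["workers"] += rec["workers_requested"]          # item (2)
        if rec["status"] == "CERTIFIED":                  # item (3)
            y["certified"] += 1
        elif rec["status"] == "DENIED":
            y["denied"] += 1
        if rec.get("h1b_dependent") or rec.get("willful_violator"):
            y["flagged"].add(rec["employer_name"])        # item (4)
    return {
        fy: {"unique_employers": len(y["employers"]),
             "workers_requested": y["workers"],
             "certified": y["certified"],
             "denied": y["denied"],
             "dependent_or_violator_employers": len(y["flagged"])}
        for fy, y in years.items()
    }
```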
The CPS Basic Monthly Survey—a survey of about 50,000 households that is conducted by the Bureau of Labor Statistics (BLS)—provides a comprehensive body of information on the employment and unemployment experience of the nation’s population. The March Annual Social and Economic CPS supplement is one source of detailed information on income and work experience in the United States. We used both the basic monthly CPS survey data and published estimates based on these surveys over the past decade to produce annual estimates for the 10-year period. We used the March 2009 Annual Social and Economic supplement to produce some additional estimates in this report. A more complete description of the surveys, including sample design, estimation, and other methodology, can be found in the CPS documentation prepared by Census and BLS. We used the March 2009 supplement data to produce estimates for U.S. citizens’ longest held job in the previous year, highest degree attained, age, and wages. For this analysis, we restricted the population to those U.S. citizens who were full-time wage and salary workers (excluding the self-employed) aged 18 to 50 and working for private employers (excluding government). We estimated median salaries for this population by age and education level for three occupations of interest: (1) systems analysis, programming, and other computer-related occupations; (2) electrical/electronic engineering; and (3) college and university educators. The occupation and salary information used was for the longest held job in 2008. We compared these median estimates to median salaries reported on 2008 H-1B worker petitions for similar occupations and age groups. We used CPS’s basic monthly survey data to examine how the proportion of H-1B to U.S. citizen workers changed over the last decade for five occupations of interest—the three listed above plus accounting and auditing occupations and physicians and surgeons. Specifically, for 2000 to 2009, we computed yearly averages from the 12 monthly CPS surveys from each of the years. For these estimates, we restricted the population to U.S. citizen full-time adult workers. Although the occupational categories are the same as those used for the March 2009 supplement analysis, the occupation was for the job held by the U.S. worker the prior week. Additional details of this analysis are presented in Sections 2 and 3 of this appendix. Because the CPS is a probability sample based on random selections, the sample is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, confidence in the precision of the particular sample’s results is expressed as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. The 95 percent confidence intervals provided in this report were developed from standard error estimates that were either provided by BLS for the underlying estimate (for median weekly wages) or computed using formulas and methods described in CPS documentation. Consistent with the CPS documentation guidelines, we do not produce annual estimates from the basic monthly CPS data files for populations of less than 35,000, or estimates based on the March supplement data for populations of less than 75,000. To help analyze trends in demand for and characteristics of H-1B workers and employers over the last decade, we used administrative data collected by the Department of Homeland Security’s (Homeland Security) U.S.
Citizenship and Immigration Services (USCIS), reflecting information supplied by prospective H-1B employers on the I-129 form, the form that is used to petition for an H-1B worker. These data, known as the Computer Linked Application Information Management System, Version 3.0 (CLAIMS 3), provide detailed information on the characteristics of prospective H-1B employers and workers. We used two versions of these data—CLAIMS 3 Mainframe and CLAIMS 3 Local Area Network (LAN)—because they contained different variables. Using the CLAIMS 3 Mainframe database from fiscal years 2000 through 2009, for each fiscal year we determined the number of H-1B initial petitions and extensions submitted by all employers, employers subject to the cap, and cap-exempt employers; the number of H-1B petitions approved or denied by Homeland Security, by initial petitions and extensions; the total number of companies and the total number of cap-exempt companies that submitted and had approved H-1B petitions, by initial petitions and extensions; the characteristics of companies that were approved by Homeland Security to hire H-1B workers, including their industry codes and the number of workers they requested; and the characteristics of workers whom Homeland Security approved as H-1Bs, including whether or not the workers were residing in the United States at the time of application; their countries of birth, education level, age, rate of pay, occupation, industry, and the location of their prospective place of employment. Because the CLAIMS 3 Mainframe database does not distinguish or contain data on petitions subject to the master’s cap, we also obtained and analyzed data from the CLAIMS 3 LAN database from fiscal years 2004 through 2009 to determine the number of approved H-1B petitions for workers who graduated with a master’s degree or higher from an American institution of higher education. To understand how the number and demographic characteristics of approved H-1B workers compared to those of U.S. citizen workers over the last decade, we used USCIS’s CLAIMS 3 Mainframe H-1B approval data for 2008 for five key occupations: (1) systems analysis, programming, and other computer-related occupations; (2) electrical/electronic engineering; (3) college and university educators; (4) accountants, auditors, and related occupations; and (5) physicians and surgeons. These analyses are described in detail in Section 2. Finally, we used CLAIMS 3 Mainframe data for fiscal year 2009 to identify the 150 employers with the highest number of approved H-1B petitions, and we collected additional data on these employers as described below in the "Data from Private Vendors" section. While the CLAIMS 3 data provided a variety of information on approved H-1B workers, these data had several limitations with respect to understanding demand for and characteristics of H-1B workers. Most importantly, they did not provide information on how many H-1B workers whose petitions were approved were actually working in the United States in any particular year. Therefore, although the CLAIMS 3 data are informative about approved H-1B petitions and about some characteristics of the workers listed on those petitions, these characteristics may not be indicative of the characteristics of all H-1B workers in a given year. For example: Of the H-1B petitions submitted in fiscal year 2008 and approved, we do not know the proportion that began work in 2008. Some may not have started work until 2009; others may not have started work at all.
An individual H-1B worker could be represented in multiple petitions filed by different employers in the same year. An individual H-1B worker could be represented in multiple petitions filed by the same employer in the same year prior to March 2008. USCIS’s CLAIMS 3 data can only provide information on the flow of new H-1B workers into the U.S. workforce, not about the stock of all H-1B workers in those occupations. In other words, they can provide information on the number of H-1B workers whose petitions were submitted and approved for fiscal year 2008, but not on the number of H-1B workers who were actually employed in the United States in 2008. Because of these uncertainties, we do not know how well the characteristics of approved H-1B workers whose petitions were submitted in any year would approximate the characteristics of the population of H-1B workers actually employed in that year. Further, because of these limitations we do not know the number of new H-1B workers actually entering the U.S. workforce in any given year. To examine the long-term immigration outcomes for H-1B workers, we obtained data from Homeland Security’s US-VISIT Arrival Departure Information System (ADIS) database, which were matched against H-1B petition data. US-VISIT data are collected on noncitizens at the point of entry into the United States and contain other immigration information, including the dates of entry into and exit from the country; the date a petition to convert to permanent residency was submitted (if one was submitted) and the status of that petition (approved, denied, or pending); the person’s country of citizenship; and the country from which their entry visa was issued. For a summary of the methods used and any limitations encountered in conducting data matches and related analysis, see Section 4 of this appendix. To understand trends in the number of complaints (known as charges) that are filed with the Department of Justice (Justice) regarding the H-1B program over time, we obtained data on the number of H-1B-related charges Justice received from fiscal year 2006 through March 2010. Specifically, we analyzed information on all inactive cases, including whether the matter was Justice-initiated or complaint-driven, the number of charges per fiscal year, initial investigation and completion dates, the alleged violation committed by the company cited in the charge, and the outcome of each charge. For cases that were resolved from fiscal year 2006 to March 2010, we analyzed summaries provided by Justice describing the nature and resolution of all cases on which Justice took action. To determine the number of H-1B and L-1 visas issued from 2000 to 2009, we reviewed and compiled data on visa issuances published by the Department of State (State). To learn more about the top 150 H-1B hiring companies in fiscal year 2009 (beyond the information available in agency administrative databases), we gathered additional information on these companies through Mergent Online and LexisNexis’s Dossier databases. We used these databases to obtain information on country of incorporation, country of operations (location), primary North American Industry Classification System (NAICS) code, number of employees, net income, operating income, total assets, and business description for each company. For several companies for which neither database had available information or we wanted additional information (i.e., a more detailed business description), we downloaded and saved information from company Web sites.
For each of the datasets described above, we conducted a data reliability assessment of selected variables by conducting electronic data tests for completeness and accuracy, reviewing documentation on the dataset, or interviewing knowledgeable officials about how the data are collected and maintained and their appropriate uses. For the purposes of our analysis, we found the variables that we reported on from these datasets to be sufficiently reliable. In several instances, we identified inconsistencies in the reporting of particular data fields. In these instances, we took steps to address the inconsistencies by using criteria to create decision rules. For example, a given H-1B employer might have reported different industry codes on its H-1B petition applications in a given year. When this occurred, we assigned the employer the industry code it listed most frequently on its petitions; if two or more industry codes were listed the same number of times, the company was counted once under each tied code to reflect equally relevant industries (illustrated in the sketch below). In other instances, when it was not possible to apply a reasonable decision rule, we did not include the data field in our analysis. To determine how the H-1B cap and program affect the costs, R&D, and offshoring decisions of firms doing business in the United States, we spoke with a nongeneralizable sample of 34 companies that employed H-1B workers in fiscal year 2008. For 31 of these companies, we conducted structured interviews with representatives of the company. For the remaining 3 companies, we spoke with company representatives in two separate focus groups. Of the 31 firms with which we conducted structured individual interviews, 22 were selected randomly from a stratified sample of all H-1B hiring firms in fiscal year 2008. The universe of H-1B hiring firms (excluding nonprofits and universities) was stratified into three groups according to the number of approved H-1B petitions. Anticipating a high refusal rate from companies we asked to participate in the structured interview, we oversampled from each of these groups. Ultimately, of the 150 companies we contacted, 22 agreed to speak with us. The following table summarizes the population of companies and the number of companies contacted. The remaining firms with which we conducted additional structured interviews were selected by GAO based on referrals from industry contacts. Some of these firms were chosen because they were known leaders in key sectors of the economy, while others were chosen because they represented firms from sectors that were difficult to contact and that we expected would not be well represented by the random sample (including one start-up company and one small H-1B-dependent staffing firm). Ultimately, we conducted structured individual interviews with 9 additional firms selected based on referrals from industry contacts, for a total of 31 individual interviews, and we conducted focus group interviews with 3 additional firms selected based on referrals from industry contacts. This selection of a total of 34 firms constitutes a nongeneralizable sample and cannot be used to make inferences beyond the specific firms selected. The firms we spoke with were located throughout the country and reflected six industrial sectors and a range of sizes (from a few workers based in one location to thousands of workers positioned around the globe).
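The industry-code decision rule described above can be expressed compactly in code. The following is a minimal sketch for illustration only; it is not the code used in our analysis, and the record layout, field names, and sample values are hypothetical.

```python
from collections import Counter

def assign_industries(petitions):
    """Assign each employer the industry code it listed most often.

    petitions: iterable of (employer, industry_code) pairs drawn from one
    year's approved petitions. When two or more codes tie for most
    frequent, the employer is counted once under each tied code, mirroring
    the double-counting rule described above.
    """
    counts_by_employer = {}
    for employer, code in petitions:
        counts_by_employer.setdefault(employer, Counter())[code] += 1

    assigned = {}
    for employer, counts in counts_by_employer.items():
        top = max(counts.values())
        assigned[employer] = sorted(c for c, n in counts.items() if n == top)
    return assigned

# Hypothetical example: Acme lists two codes twice each, so it is counted
# under both industries; Beta Corp gets its single most frequent code.
sample = [("Acme", "541511"), ("Acme", "541512"),
          ("Acme", "541511"), ("Acme", "541512"),
          ("Beta Corp", "334111"), ("Beta Corp", "334111"),
          ("Beta Corp", "541511")]
print(assign_industries(sample))
# {'Acme': ['541511', '541512'], 'Beta Corp': ['334111']}
```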
Through these interviews, we spoke with key executives in a variety of technology-intensive industries, including information technology (IT); semiconductor manufacturing and other manufacturing and engineering firms; and pharmaceuticals and biotechnology. Among these industries, we spoke with representatives of several large multinational companies, including four of the top U.S.-based H-1B employers (i.e., U.S.-based companies that were among the top 50 companies with the highest number of approved H-1B petitions), as well as several small and emerging technology companies that use H-1B workers for highly specialized positions. We also spoke with 10 IT services firms, including three large (meaning they employed at least 100 H-1B workers) foreign-owned H-1B staffing and outsourcing companies and several smaller IT staffing and consulting firms. In addition, we spoke with companies in several other sectors, including two financial organizations, a health care provider, and a consumer retail firm. To develop our structured interview questionnaire, we took several steps. First, we conducted two focus groups with H-1B employers and representatives of major industry organizations. In these focus groups, we tested preliminary versions of our interview questions and used the discussion to revise the questions. We also conducted six tests of the structured interview with individual companies. The responses from the three companies that participated in our focus groups were not included in the tabulated results based on interviews with individual firms, because the structure of our individual firm interviews differed significantly from the structure of our focus groups. However, the responses from all six of our test interviews were included in our tabulated results of company interviews. To analyze the data we collected from the company interviews, we conducted a content analysis of company responses. This analysis involved coding the interview responses and conducting frequency analyses of the topics and themes that were raised by the company representatives. Two analysts coded the responses, and any discrepancies in coding were discussed and resolved before the resulting data set was finalized (a simplified sketch of this coding and tallying appears below). During our interviews, employers and experts offered a number of suggestions for how the program could be improved. Although it was not possible to publish all of the suggestions, those that are mentioned in the report were chosen on the basis of the following factors: (1) frequency of suggestion, (2) feasibility, (3) potential for economic efficiency, and (4) corroboration by other information sources. To understand (1) the H-1B certification, adjudication, and enforcement processes; (2) the responsibilities of each agency involved; (3) the effectiveness of the H-1B program’s protections for U.S. workers; and (4) the reliability of the datasets we used, we conducted three site visits and numerous interviews with agency officials, labor advocates, and academics. To understand the processing of applications, we visited Labor’s LCA processing center in Illinois and Homeland Security’s I-129 processing center in California. We also conducted interviews with State’s Kentucky Service Center, where I-129 petitions that have been approved by Homeland Security are entered into a State database that is accessible to consular offices around the world.
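The coding and tallying steps of the content analysis described above can be sketched as follows. This is a simplified illustration under assumed data structures; the theme labels and firm identifiers are hypothetical, and the actual analysis was performed by trained analysts rather than by code.

```python
from collections import Counter

# Hypothetical coded responses: each record carries the theme codes
# assigned independently by two analysts.
responses = [
    {"firm": "A", "coder1": {"cap-related cost"}, "coder2": {"cap-related cost"}},
    {"firm": "B", "coder1": {"hiring delay"},
     "coder2": {"hiring delay", "offshoring"}},
    {"firm": "C", "coder1": {"offshoring"}, "coder2": {"offshoring"}},
]

# Step 1: flag coding discrepancies so they can be discussed and resolved.
to_resolve = [r["firm"] for r in responses if r["coder1"] != r["coder2"]]
print("Resolve before tabulating:", to_resolve)  # ['B']

# Step 2: after resolution (firm B reconciled to both themes, purely for
# illustration), tally how often each theme was raised across firms.
resolved = {"A": {"cap-related cost"},
            "B": {"hiring delay", "offshoring"},
            "C": {"offshoring"}}
theme_counts = Counter()
for codes in resolved.values():
    theme_counts.update(sorted(codes))
print(theme_counts.most_common())
# [('offshoring', 2), ('cap-related cost', 1), ('hiring delay', 1)]
```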
To understand investigations related to approved H-1B visa holders, we visited Labor’s Wage and Hour Division’s Northeast Regional Office in Philadelphia, the regional office that had received the highest number of H-1B-related complaints in the country. In addition, we conducted interviews with officials from Labor, Homeland Security’s USCIS and US-VISIT offices, State, and Justice with regard to their roles in all phases of the H-1B program. We also reviewed agency documentation and the laws and regulations related to the H-1B program. To deepen our understanding of the role of the H-1B program for businesses in specific segments of the economy, and the impact of the H-1B program on U.S. workers, we interviewed a number of academics and business advocates. Specifically, we interviewed leading academics in the areas of business, economics, demography, international relations, and labor relations. To better understand the specific issues facing start-up companies and high-tech organizations, we conducted interviews with venture capital companies and immigration law firms that work with start-up companies. To better understand the specific issues facing firms in the IT staffing and services industry, we interviewed industry advocacy organizations. We also conducted an extensive review of the academic literature, which included articles and studies on the impact of migration on U.S. workers, trends in international business, and trends in the education of foreign students in science and technology fields. Finally, to address all objectives, we reviewed relevant federal laws and regulations, news media articles, and the temporary immigration programs of several other countries that we selected based on our literature review and our discussions with experts. Table 7 provides a summary of how the information sources described were used to answer each of the reporting objectives. As part of our examination of the impact of the H-1B program on domestic employment, we used data from the 2009 March supplement of the CPS to estimate the number of U.S. citizen workers in 2008, their age distribution, and their education levels for five occupational categories that received the most H-1B approvals in fiscal year 2009. Ideally, we would have compared U.S. workers to actual H-1B workers; however, data on actual H-1B workers do not exist. The data we analyzed (CLAIMS 3), as explained above, pertain to prospective H-1B workers (those whose petitions were submitted in a given year and approved by Homeland Security). To help ensure that we were comparing workers in the same occupational categories, we had to combine some occupational categories in the CPS to better match those in the CLAIMS 3 data, as shown in table 8. In addition, we compared salaries of U.S. workers with those of H-1B workers, although this comparison had limitations. Specifically, we compared the CPS median salary estimates from the 2009 March supplement to median salary figures reported in CLAIMS 3 salary data for the approved H-1B workers whose petitions were submitted in 2008, for three of the occupations of interest, overall and by age group. Although several of the comparisons we were able to make did show a statistically significant difference between the CLAIMS 3 H-1B workers’ median salary and the “comparable” CPS estimate, these analyses have several limitations: Within each occupational group, there can be variation in the types of jobs and work performed. Our data do not account for these subtleties.
Therefore, it is possible that H-1B workers may have been working in relatively more or less sophisticated jobs than U.S. workers within the same occupational group. For example, H-1B workers and U.S. workers in the occupation “college and university education” may have different fields of education and work in different types of institutions. The measures of median annual salaries for U.S. citizens could include bonuses, but the median annual salaries reported in the CLAIMS 3 database most likely do not. Neither median salary includes noncash benefits such as health insurance or pensions. The CPS salary reported in the 2009 March supplement was for the longest held position actually worked in 2008, as reported by workers themselves (or knowledgeable members of their households). In contrast, the salaries reported in the CLAIMS 3 database are reported by prospective H-1B employers and reflect what the employer intends to pay the H-1B worker in fiscal year 2008 or fiscal year 2009, a time period covering October 1, 2007, through September 30, 2009. We identified patterns in the H-1B worker salary data that raise concerns about the validity of those data. Specifically, the frequency distributions we ran on the salaries of H-1B workers in the five key occupations showed that employers reported a number of very low and very high salaries for the “annual rate of pay” on the petition application. We had no basis for determining whether the high and low salaries were data entry errors, estimated payments for an employment period of more or less than a year, or were very high or low for some other reason. To minimize the influence of these outliers, we used the median salary rather than the mean. In light of these limitations, caution should be used in interpreting differences found in comparing estimated 2008 median U.S. citizen worker salaries and the median salaries for H-1B worker petitions submitted in 2008. To determine how raising the H-1B cap might affect the employment and wages of U.S. workers, we examined labor market indicators (employment levels, unemployment rates, and usual weekly earnings) of U.S. workers in the three occupations with the largest proportion of approved H-1B petitions relative to the total U.S. workforce in those occupations: (1) systems analysis, programming, and other computer-related occupations; (2) electrical and electronics engineering; and (3) college and university education. For this analysis, we relied on three sets of published CPS estimates of annual averages based on data collected through CPS basic monthly surveys: (1) median weekly earnings (at last week’s primary job) of full-time wage and salary workers by detailed occupation and sex, 2000 to 2009 annual averages; (2) employed persons by detailed occupation and sex, annual averages 2000 to 2009; and (3) unemployment levels and rates by detailed occupation, 2000 to 2009 annual averages. These data were provided to us by staff at BLS. In addition to presenting estimates of the employment levels, unemployment rates, and median usual weekly earnings for each occupational group, we also calculated and presented estimates of the change over the decade in the unemployment rate and in the median usual weekly wage for each occupational group, and of the growth in the employment level for each occupational group relative to year 2000.
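The trend calculations just described amount to simple arithmetic on the published annual averages. The following sketch uses made-up values, not the published CPS estimates.

```python
# Illustrative annual averages for one occupational group (made-up
# values, not the published CPS estimates).
employment = {2000: 1_500_000, 2005: 1_480_000, 2009: 1_510_000}
unemployment_rate = {2000: 2.1, 2009: 5.3}           # percent
median_weekly_wage = {2000: 1050.0, 2009: 1176.0}    # constant dollars

# Change over the decade in the unemployment rate (percentage points)
# and in the real median weekly wage (percent).
rate_change = unemployment_rate[2009] - unemployment_rate[2000]
wage_growth = 100 * (median_weekly_wage[2009] / median_weekly_wage[2000] - 1)

# Employment growth expressed relative to the year-2000 level.
growth_vs_2000 = {year: round(100 * (level / employment[2000] - 1), 1)
                  for year, level in employment.items()}

print(f"Unemployment rate change: {rate_change:+.1f} percentage points")
print(f"Real median wage growth: {wage_growth:+.1f} percent")
print(growth_vs_2000)  # {2000: 0.0, 2005: -1.3, 2009: 0.7}
```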
To better understand trends identified in our analysis and the specific issues facing workers in segments of the economy that may not be apparent from national labor force statistics, we also spoke with several labor advocates who work with and advocate for computer scientists and computer programmers, as well as academic researchers who study the U.S. science, engineering, and technology workforce. To examine the long-term immigration outcomes of H-1B workers, GAO obtained data from Homeland Security’s US-VISIT ADIS database. The ADIS data provided were based on matching ADIS data with 302,550 records from Homeland Security’s CLAIMS 3 database submitted by GAO to US-VISIT for this purpose. The CLAIMS 3 records used for matching consisted of approved initial H-1B petitions that were valid to start work in H-1B status between January 1, 2004, and September 30, 2007. US-VISIT matched the submitted records by first name, last name, and date of birth to the ADIS data system. US-VISIT’s matching returned a total of 5,091,369 event records, containing information about 375,641 persons in the ADIS system. These event records, which US-VISIT provided GAO, contained data on the following events for foreign nationals: entry into the country; exit from the country; petition to convert to permanent residence status (I-485); and status of petition to convert to permanent residence (approved, denied, or pending). Each event record has an associated event date. US-VISIT also provided the following identifying information: ADIS person identifier, first name, last name, date of birth, country of citizenship, country of issuance, I-94 number, and CLAIMS 3 receipt number, where available. GAO took a number of steps to identify reliable matches between CLAIMS 3 and ADIS data. We determined that a match was reliable if at least one of the following three conditions held: (1) the ADIS person had an H-1B visa status at some point during their history; (2) the CLAIMS 3 receipt number in the CLAIMS 3 data matched at least one of the CLAIMS 3 receipt numbers recorded in the ADIS system; or (3) the I-94 number recorded with the petitioner’s I-129 form matched at least one of the I-94 numbers on file in the ADIS system. In the case of one-to-many and many-to-many matches, we selected the match that met criterion 2 or 3 over criterion 1. We determined that 169,349 records met these criteria. There are various reasons why a visa might not be used or a beneficiary might not be in the ADIS system for an approved H-1B petition. For example, an H-1B visa might not be used for an approved petition when the employer decides not to offer the beneficiary the job; the beneficiary decides not to accept the job; or the beneficiary obtains a different U.S. visa status, such as through marriage, student visas, or other work visas. Even when an H-1B visa was used, the beneficiary might not be in the ADIS system, for several reasons. First, the ADIS system became fully operational in January 2004; those who were already in the United States at the time they submitted their H-1B petition (such as students enrolled in U.S. universities) and did not enter or exit the country after January 2004 may not have been entered in the ADIS system. In addition, some beneficiaries who entered the United States by land may have entered the country without going through an official border station where they would submit an I-94 form.
Finally, a beneficiary’s record may not be in the ADIS system due to data quality problems. For example, if an H-1B beneficiary changed their name through marriage and subsequently had an I-485 submitted on their behalf, US-VISIT may at times be unable to link the I-485 submission to the H-1B beneficiary due to the name change. We conducted this performance audit from May 2009 through January 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides additional analyses of the characteristics of U.S. and approved H-1B workers in five occupational groups between 2000 and 2009. For five occupational groups with large numbers of approved H-1B workers over the last decade, we compared newly approved H-1B workers with the stock of U.S. citizen workers in those occupations and found that the relative number and proportion of newly approved H-1B workers varied over the decade for all five occupations but decreased overall. The five occupations we examined—those with the highest concentration of newly approved H-1B workers—were (1) systems analysts, programmers, and other computer-related workers; (2) electrical and electronics engineers; (3) accountants and auditors; (4) college and university educators; and (5) physicians and surgeons. Despite some fluctuations over time, the overall number of newly approved petitions for H-1B workers across these five occupational groups declined from 137,371 in 2000 to 44,946 in 2009, and the proportion relative to estimates of U.S. citizen workers fell from about 2.5 percent to less than 1 percent between 2000 and 2009. For specific occupations, as shown in figure 15, the highest proportion of newly approved H-1B workers compared with U.S. citizen workers was in the systems analysis, programming, and other computer-related occupations, averaging about 3 percent across the decade, while the lowest proportion was in the accounting occupations and the physicians and surgeons occupations, averaging less than 1 percent. Further, for electrical and electronics engineering and systems analysis, programming, and other computer-related occupations, the declines in the proportion of newly approved H-1B workers compared with U.S. citizen workers appear to coincide with the economic downturn of 2002. We did not examine potential reasons behind this relative decline in the proportion of H-1B workers over time; however, fluctuations in the economy and the H-1B cap were likely contributing factors. In 2008, approved H-1B workers (initial and extensions) were generally younger and more educated than their U.S. citizen counterparts in similar occupations, although this varied by the particular occupation (see fig. 16). In these five occupations, we generally found that a higher percentage of approved H-1B workers had earned an advanced graduate degree (a master’s, Ph.D., or professional degree) than U.S. citizen workers, as shown in figure 17. Across the five occupations, 56 percent of approved H-1B workers had graduate degrees, compared with an estimated 29 percent of the total stock of U.S. citizen workers. For this comparison, all U.S.
citizen estimates are for the population of U.S. citizens aged 18 to 50 years old in private, full-time employment, excluding those in government employment and the self-employed. Appendix III: Labor Force Trends for U.S. Workers. This appendix presents analyses of median earnings growth, unemployment rates, and employment levels for the three occupations with the highest proportion of approved petitions for H-1B workers over the past decade. To shed light on the U.S. workforce most likely to have been affected by the H-1B program over the past decade, we reviewed 10 years of data on the employment, unemployment, and earnings of U.S. workers in the three occupations with the largest proportion of approved H-1B petitions relative to the stock of U.S. workers over the past decade. We found that U.S. workers in all three occupations, in every year, had significantly higher median earnings levels than U.S. workers in all professional occupations. We also found that one of the three occupations—systems analysts and computer programmers—had significantly higher earnings growth than all professional U.S. workers. However, unemployment rates were cyclical for two groups, and employment levels varied among the three groups (i.e., declining for electrical and electronics engineers, growing for college and university educators, and remaining essentially unchanged among systems analysts and computer programmers). Real earnings growth among systems analysts, programmers, and other computer-related U.S. workers was relatively strong over the decade—12 percent—and was significantly larger than real earnings growth among all professional workers over the decade, which was about 4 percent. Among electrical and electronics engineers, real earnings growth was about 8 percent over the decade; however, the difference between this increase and that of all professional workers was not statistically significant. Among college educators, real earnings did not grow significantly over the decade. As can be seen in figure 18, in every year over the past decade all three occupations had median weekly earnings levels that were significantly higher than the median earnings among all professional workers. However, real earnings growth among college and university educators was essentially flat over the decade. Unemployment rates among (1) electrical and electronics engineers and (2) systems analysts, programmers, and other computer-related workers showed greater cyclical variation than did the unemployment rate for all U.S. workers in professional occupations. In contrast, the unemployment rate among college and university educators was somewhat less sensitive to business cycle fluctuations and in most years was close to or lower than the unemployment rate for all professional occupations (see fig. 19). Employment among electrical and electronics engineers declined by 29 percent over the decade, and there was no significant change in the level of employment among systems analysts, programmers, and other computer-related workers. In contrast, employment grew by 15 percent among all professional occupations over the past decade. Employment among college and university educators grew by 26 percent over the decade (see fig. 20). This appendix provides additional analyses of H-1B employers in fiscal year 2000 through fiscal year 2009 and more detailed analyses of the top 150 H-1B hiring companies in fiscal year 2009.
Over a third of employers approved to hire H-1B workers between fiscal year 2000 and fiscal year 2009 were employers that provided professional, scientific, or technical services (see fig. 21). For example, in fiscal year 2009, at least 38 percent of employers approved to hire one or more H-1B workers indicated that they were in one industry—the professional, scientific, and technical services industry. Services within this industry include legal services; accounting, bookkeeping, and payroll services; architectural, engineering, and specialized design services; computer services; consulting services; and research services. Also in fiscal year 2009, the manufacturing, health care and social assistance, educational services, and finance and insurance sectors received the next-highest share of one or more H-1B approvals—that is, 11, 10, 7, and 6 percent of companies approved to hire H-1B workers, respectively. While H-1B hiring employers were located throughout the continental United States in fiscal year 2009, they tended to be concentrated in several high-technology pockets of the country such as Silicon Valley, Southern California, and the Tri-State area of New York, New Jersey, and Connecticut (see fig. 22). GAO also reviewed data from Labor’s LCAs on the top 150 employers of H-1Bs in fiscal year 2009, including whether the employer was H-1B dependent; whether the employer was a willful violator; and the number of petitions requested at each of the four possible skill levels. As indicated in table 9, among the 150 companies for which Labor provided data, 24 were H-1B dependent, 9 of which were also deemed “willful violators.” The remaining 126 firms were neither H-1B dependent nor willful violators. In addition, on average, these firms indicated that they would pay workers at the skill-level one prevailing wage 52 percent of the time; the skill-level two prevailing wage 30 percent of the time; the skill-level three prevailing wage 12 percent of the time; and the skill-level four prevailing wage 6 percent of the time. Regarding industry, we found that, similar to the universe of all H-1B hiring employers, most of the top H-1B employers were in the professional, scientific, and technical services industry, although many were also in the manufacturing industry or the educational services industry. While the top 150 H-1B-hiring employers spanned a range of industries, these employers were distinctly concentrated in a few more specific industry groups, including electronic computer manufacturing; software publishing; custom computer programming services (firms that write, modify, and test software for clients); computer systems design services; and colleges, universities, and professional schools. In terms of the type of employer, a relatively large number (44 of the 150 employers) were universities, compared with 6 percent of all H-1B hiring employers in fiscal year 2009. We also found that at least 33 employers could be categorized as information technology (IT) services firms—those that either provide staff or full project teams to other companies for IT projects (see table 10). This appendix provides a chronological list of major laws and descriptions of certain key provisions related to the H-1B program. Laws identified may contain additional provisions related to the H-1B program not described here, and there may be additional laws not included here that have made various changes in the H-1B program.
This is not intended to be an exhaustive summary of all laws and provisions related to the H-1B program. Immigration and Nationality Act, ch. 447, §§ 101(a)(15)(H) and 214(c), 66 Stat. 163, 168, and 189-90 (1952). Authorized H-1B visas for aliens with a residence in a foreign country that the alien had no intention of abandoning, who were of distinguished merit and ability, and were coming to the United States to perform temporary service of an exceptional nature requiring such merit and ability. Immigration Reform and Control Act of 1986, Pub. L. No. 99-603, § 102, 100 Stat. 3359, 3374-80. Makes it an unfair immigration-related employment practice for most employers to discriminate against any individual (other than an unauthorized alien) with respect to hiring, recruitment, firing, or referral for a fee because of such individual’s national origin or citizenship status. States that it is not an unfair immigration-related employment practice to hire a U.S. citizen or national over an equally qualified alien. Requires that complaints of violations be filed with the Special Counsel for Immigration-Related Unfair Employment Practices (established by the act) within the Department of Justice. Authorizes the Special Counsel to (1) investigate complaints and determine (within 120 days) whether to bring such complaints before a specially trained administrative law judge and (2) initiate investigations and complaints. Permits private actions if the Special Counsel does not file a complaint within such 120-day period. Immigration Act of 1990, Pub. L. No. 101-649, § 205, 104 Stat. 4978, 5019-22. Removed the requirement that an alien have a residence in a foreign country and no intention of abandoning it, and revised the statute to authorize H-1B visas for aliens coming temporarily to the U.S. to perform services in a “specialty occupation,” which was defined as one that requires, at a minimum, theoretical and practical application of a body of highly specialized knowledge and the attainment of a bachelor’s or higher degree in the specific specialty (or its equivalent). Established the LCA process, to be administered by Labor, which requires employers to make certain attestations. Limited the number of H-1B visas that could be issued during a fiscal year to 65,000 beginning in fiscal year 1992. Limited the period of authorized admission as an H-1B nonimmigrant to 6 years. Established the “dual intent” provision, under which H-1B visa holders could also pursue permanent residency. Assigned responsibility to Labor to enforce program rules by investigating complaints made by H-1B workers or their representatives against employers, by making referrals to Justice, and by imposing civil monetary penalties where it finds a failure by the employer to meet certain required conditions or the misrepresentation of material fact. Immigration Technical Corrections Act of 1991, Pub. L. No. 102-232, tit. III, § 303(a)(7)(B)(iii), 105 Stat. 1742, 1747. Restricted Labor to reviewing LCAs only for completeness and obvious inaccuracies. American Competitiveness and Workforce Improvement Act of 1998, Pub. L. No. 105-277, div. C, tit. IV, §§ 411-418, 112 Stat. 2681-641, 2681-642 – 2681-657. Temporarily raised the cap on H-1B visas for fiscal years 1999 to 2001 to a high of 115,000; returned the cap to 65,000 for the following years. Defined “H-1B-dependent employer” as an employer that has 25 or fewer full-time equivalent employees in the U.S. and employs more than seven H-1B nonimmigrants; 26 to 50 full-time equivalent employees in the U.S.
and employs more than 12 H-1B nonimmigrants; or at least 51 full-time equivalent employees in the U.S., of whom at least 15 percent are H-1B nonimmigrants. Required H-1B-dependent employers and those that committed a willful failure or misrepresentation during the 5 years preceding the filing of an LCA to include additional attestations. Provided that H-1B-dependent employers and such willful violators are not required to make these additional attestations with respect to H-1B nonimmigrants receiving annual wages of at least $60,000 or those with a master’s or higher degree (or its equivalent) in a specialty related to the job. Required that H-1B workers waiting for final adjudication of their requests for permanent residence status be given 1-year extensions of their H-1B visas until their requests have been adjudicated. Provided Labor increased authority to investigate and enforce program compliance and to assess civil monetary penalties against employers found to be in violation of certain program requirements. Required that steps be taken to maintain an accurate count of the number of aliens issued H-1B or other nonimmigrant visas. American Competitiveness in the Twenty-first Century Act of 2000, Pub. L. No. 106-313, §§ 102-106, 114 Stat. 1251, 1251-55. Temporarily raised the cap on H-1B visas for fiscal years 2001 to 2003 to 195,000; the cap returned to 65,000 for the following years. Exempts an alien from the H-1B cap if he or she is employed (or has received an offer of employment) at an institution of higher education or its related or affiliated nonprofit entity, a nonprofit research organization, or a governmental research organization. Increased the portability of H-1B visas by authorizing H-1B workers to accept new employment upon the filing by the prospective employer of a new petition on their behalf. The H-1B worker’s employment authorization may be extended until the petition is adjudicated. United States-Chile Free Trade Agreement Implementation Act, Pub. L. No. 108-77, § 402(b)(2)(B), 117 Stat. 909, 940 (2003), and the United States-Singapore Free Trade Agreement Implementation Act, Pub. L. No. 108-78, § 402(1), 117 Stat. 948, 970-71. Created a new nonimmigrant classification, known as H-1B1, available each fiscal year to up to 1,400 professionals from Chile and 5,400 professionals from Singapore. These H-1B1 visas count against the H-1B cap. H-1B Visa Reform Act of 2004, Pub. L. No. 108-447, div. J, tit. IV, subtit. B, §§ 422, 424, and 425(a), 118 Stat. 3353, 3353-56. Provided Labor increased authority to initiate investigations in cases where the Secretary personally certifies that there is reasonable cause and approves the investigation. Information providing the basis for the investigation must originate outside Labor unless it was lawfully obtained in the course of another Labor investigation. In addition, receipt of information submitted to Justice or Labor to secure the employment of an H-1B worker cannot provide the basis for such an investigation. Exempted from the cap the first 20,000 petitions received for individuals who have earned a master’s degree or higher from a U.S. institution of higher education. Raised the fee imposed on most employers when filing an H-1B visa petition to $750 or $1,500 and imposed an additional fraud prevention and detection fee of $500. Pub. L. No. 111-230 (2010). Increased the fees by $2,000 for petitions filed between August 13, 2010, and October 1, 2014, if the petitioner has 50 or more employees in the U.S. and more than 50 percent of those U.S.
employees are in H-1B or L nonimmigrant status. The LCA attestations required of all employers were as follows: (1) the employer will pay H-1B workers the employer’s actual wage for the position or the prevailing wage in the area, whichever is higher; (2) the employer will provide working conditions for H-1B employees that will not adversely affect the working conditions of workers similarly employed; (3) no strike or lockout exists in the course of a labor dispute in the occupational classification at the place of employment; and (4) the employer has provided notice that it is filing an LCA to the bargaining representative (if any) of its employees in the occupational classification and area for which aliens are sought, or, if there is no bargaining representative, by posting notice of the filing in conspicuous locations at the place of employment. The additional attestations required of H-1B-dependent employers and willful violators were as follows: (1) the employer will not displace a U.S. worker within the period from 90 days before to 90 days after filing an H-1B petition; (2) before placing an H-1B worker with another employer, the employer has inquired whether the other employer has displaced or intends to displace one of its U.S. workers within 90 days before or 90 days after the placement; and (3) the employer has taken good faith steps, prior to filing the LCA, to recruit U.S. workers for the job in the United States, using procedures that meet industrywide standards and offering compensation at least as great as that required to be offered to H-1B nonimmigrants, and has offered the job to any U.S. worker who applies and is equally or better qualified for it. Michele Grgich (Assistant Director) and Erin Godtland (Economist-in-Charge) managed this engagement. Core team members included Nisha Hazra, Melissa Jaynes, and Jennifer McDonald (Education, Workforce, and Income Security) and Hiwotte Amare and Rhiannon Patterson (Applied Research and Methods). In addition, the following people made significant contributions to this work: James Bennett and Susan Bernstein (Education, Workforce, and Income Security); Susan Baker, Melinda Cordero, Namita Bhatia-Sabharwal, Mark Ramage, and Shana Wallace (Applied Research and Methods); and Ashley McCall (Library and Information Services). Stakeholders included Barbara Bovjberg (Education, Workforce, and Income Security); Tom McCool (Applied Research and Methods); Ronald Fecso (Chief Statistician); Sheila McCoy and Craig Winslow (General Counsel); Richard Stana and Mike Dino (Homeland Security and Justice); Loren Yager and Jess Ford (International Affairs and Trade); and Muriel Forester (Strategic Planning and External Liaison). Referencers included Jamie Whitcomb (lead), Alison Grantham, and Karen Brown (Education, Workforce, and Income Security) and DuEwa Kamara and Courtney LaFountain (Applied Research and Methods).
Congress created the H-1B program in 1990 to enable U.S. employers to hire temporary, foreign workers in specialty occupations. The law capped the number of H-1B visas issued per fiscal year at 65,000. Since then, the cap has fluctuated with legislative changes. Congress asked GAO to assess the impact of the cap on the ability of domestic companies to innovate, while ensuring that U.S. workers are not disadvantaged. In response, GAO examined what is known about (1) employer demand for H-1B workers; (2) how the cap affects employer costs and decisions to move operations overseas; (3) H-1B worker characteristics and the potential impact of raising the cap; and (4) how well requirements of the H-1B program protect U.S. workers. GAO analyzed data from four federal agencies; interviewed agency officials, experts, and H-1B employers; and reviewed agency documents and literature. In most years, demand for new H-1B workers exceeded the cap: From 2000 to 2009, demand for new H-1B workers tended to exceed the cap, as measured by the number of initial petitions submitted by employers who are subject to the cap. There is no way to precisely determine the level of any unmet demand among employers, since they tend to stop submitting (and the Department of Homeland Security stops tracking) petitions once the cap is reached each year. When we consider all initial petitions, including those from universities and research institutions that are not subject to the cap, we find that demand for new H-1B workers is largely driven by a small number of employers. Over the decade, over 14 percent of all initial petitions were submitted by cap-exempt employers, and only a few employers (fewer than 1 percent) garnered over one-quarter of all H-1B approvals. Most interviewed companies said the H-1B cap and program created costs but did not factor into their decisions to move R&D overseas: The 34 H-1B employers GAO interviewed reported that the cap has created some additional costs, though the cap’s impact depended on the size and maturity of the company. For example, in years when visas were denied by the cap, most large firms reported finding other (sometimes more costly) ways to hire their preferred job candidates. On the other hand, small firms were more likely to fill their positions with different candidates, which they said resulted in delays and sometimes economic losses, particularly for firms in rapidly changing technology fields. Limitations in agency data and systems hinder tracking the cap and H-1B workers over time: The total number of H-1B workers in the U.S. at any one time—and information about the length of their stay—is unknown, because (1) data systems among the various agencies that process such individuals are not linked, so individuals cannot be readily tracked, and (2) H-1B workers are not assigned a unique identifier that would allow for tracking them over time—particularly if and when their visa status changes. Restricted agency oversight and statutory changes weaken protections for U.S. workers: Elements of the H-1B program that could serve as worker protections—such as the requirement to pay prevailing wages, the visa’s temporary status, and the cap itself—are weakened by several factors. First, program oversight is fragmented and restricted. Second, the H-1B program lacks a legal provision for holding employers accountable to program requirements when they obtain H-1B workers through a staffing company.
Third, statutory changes made to the H-1B program have, in combination and in effect, increased the pool of H-1B workers beyond the cap and lowered the bar for eligibility. Taken together, the multifaceted challenges identified in this report show that the H-1B program, as currently structured, may not be used to its full potential and may be detrimental in some cases. This report offers several matters for congressional consideration, including that Congress re-examine key H-1B program provisions and make appropriate changes as needed. GAO also recommends that the Departments of Homeland Security and Labor take steps to improve efficiency, flexibility, and monitoring of the H-1B program. Homeland Security disagreed with two recommendations and one matter, citing logistical and other challenges; however, we believe such challenges can be overcome. Labor did not respond to our recommendations.
CMS (formerly HCFA), an agency within the Department of Health and Human Services (HHS), is responsible for administering much of the federal government’s multibillion dollar investment in health care—including the Medicare program. Medicare is a health insurance program for people aged 65 years and older, some disabled people under 65 years of age, and people with end-stage renal disease—permanent kidney failure treated with dialysis or a transplant. Medicare covers a variety of services. Part A services include inpatient hospital, skilled nursing facility (SNF), certain home health, and hospice care, while Part B services include physician and outpatient hospital services, diagnostic tests, mental health services, outpatient physical and occupational therapy (including speech-language therapy), and ambulance and other medical services and supplies. Each year, Medicare serves about 40 million elderly and disabled Americans and processes about 900 million claims submitted by nearly 1 million hospitals, physicians, and other health care providers. In fiscal year 2000, the program spent over $200 billion—about 11 percent of the federal budget. The Medicare program has two components—the traditional fee-for-service program and Medicare+Choice, its managed care option. Most Medicare beneficiaries participate in the traditional program and receive their health care on a fee-for-service basis, in which providers are reimbursed for each covered service they deliver. CMS contracts with about 50 insurance companies to process and pay these claims. The other principal component—Medicare+Choice—covers about 14 percent of beneficiaries, who have enrolled in about 180 prepaid health plans that contract with the government to receive monthly payments in exchange for providing needed Medicare services for enrollees. As the agency that administers Medicare, CMS performs a wide array of management activities. Principal among these are setting prices for services and health plans based on legislatively prescribed guidelines, ensuring prompt and accurate payment to providers and health plans, educating beneficiaries and providers about the Medicare program, ensuring the quality of fee-for-service and managed care services paid for by the program, and operating the Medicare+Choice program. See table 1 for examples of these activities. Tasked with administering a highly complex program, HCFA has earned mixed reviews from us and others on its performance in managing Medicare. On one hand, the agency presides over a program that is unparalleled in its popularity with beneficiaries and the general public. HCFA has implemented a variety of payment methods that have helped constrain the growth of program costs. It has also succeeded in ensuring that Medicare claims are paid quickly and at little administrative cost. On the other hand, HCFA has had difficulty making needed refinements to its payment methods. The agency has also fallen short in its efforts to oversee its Medicare claims administration contractors and to ensure that claims are paid accurately and beneficiaries receive quality services. While in the early 1990s HCFA came under increasing criticism for not adequately protecting program payments, some providers have complained recently that its safeguard efforts are unduly burdensome. The size and nature of the Medicare program make it inherently challenging to develop payment methods that prudently reimburse providers while protecting beneficiary access to services.
As Medicare’s steward, CMS cannot passively accept what providers want to charge the program. However, because of its size, Medicare profoundly influences health care markets. The agency is often the dominant payer for services or products, and in such cases, it cannot rely on market prices to determine appropriate payment amounts because its share of payments distorts the market. In addition, HCFA has had difficulty relying on competition to determine prices, because finding ways of encouraging competition without excluding some providers has been problematic. This means that HCFA has had to administratively set payment amounts for thousands of services in ways that encourage efficient delivery of, and ensure beneficiary access to, needed health care services and equipment. Adding to the complexity of setting payment amounts is Medicare’s status as a highly visible public program with certain obligations that may not be consistent with efficient business practices. For example, the agency is constrained from acting swiftly to reprice services and supplies even when prevailing market rates suggest that payments should be modified. As Medicare is a public program, its enabling legislation provides that any changes require public input. This minimizes the potential for policymaking to have unintended consequences. However, seeking and responding to public interests, including those of various provider and supplier groups, can be a time-consuming process that can sometimes thwart efficient program management. Recent changes in provider payment methods, as mandated by the Congress, have constrained rates paid to some providers and slowed the growth of payments to others. This has raised provider concerns about payment adequacy. As Medicare’s payments have become less generous in the aggregate, payment adjustments for the cost differences of providers and services have become more important. HCFA’s successes in more closely aligning payments to these differences have sometimes been obscured by the concerns of affected providers, who are adapting to a new payment environment. Despite these challenges, over the last two decades HCFA has had broad experience, and significant success, in developing payment methods that seek to control spending by rewarding provider efficiency and discouraging excessive service use. HCFA’s experience began in 1983 when the Congress passed legislation requiring the development of a hospital inpatient prospective payment system (PPS), a method that pays providers fixed, predetermined amounts that vary according to patient need, regardless of the providers’ costs. This approach, designed to reward hospitals that could deliver care at lower cost than the predetermined payment, succeeded in slowing the growth of Medicare’s inpatient hospital expenditures. Growth in Medicare inpatient hospital expenditures averaged over 15 percent per year prior to 1983, but was generally under 10 percent in subsequent years. HCFA’s next major effort to break the link between providers’ charges and Medicare payments was implementing a fee schedule for physicians, which was phased in during the 1990s. This schedule was not designed to reduce the overall expenditure level, but to redistribute payments for services based on the relative resources used by physicians to provide different types of care.
Its development and implementation were complex because HCFA had to calculate payment amounts for over 7,000 procedures, accounting for the three categories of resources used to perform each procedure—physician work, practice expenses, and malpractice insurance expenses. While beneficiary access to physician care was generally not affected, the fee schedule, as intended, led to a shift in payments from surgical and nonsurgical services to primary care and other evaluation and management services. HCFA's next challenge was to expand the use of prospective payment methods to postacute care services, such as those provided by SNFs and home health agencies. In 1997, the Balanced Budget Act (BBA) mandated that HCFA develop and implement four new PPSs from fiscal year 1998 through fiscal year 2001—a heavy workload for the agency. For each new PPS, HCFA had to (1) design the payment system—based on data-intensive studies—including factors that adjust payments for the health status of beneficiaries receiving care, (2) develop and issue regulations that incorporated public comment, and (3) plan and program computer system changes. Adding to its challenge, HCFA and its contractors needed to make significant systems changes to implement the new payment methods at the same time that they were renovating information technology (IT) systems for Year 2000 (Y2K) date changes. As a result of the priority HCFA had to give to Y2K systems changes, HCFA moved more slowly than the law required to phase in its new PPS methodologies for home health and hospital outpatient services. Each of these payment methods was an improvement over cost- and charge-based methods, which often rewarded inefficient delivery and excessive provision of unnecessarily costly services. PPS methods reward providers for keeping their costs down, which in turn has helped constrain the overall growth of Medicare payments. However, slower payment growth requires further adjustments to better account for differences in patient needs and the special circumstances of particular providers or facilities, to ensure that the program is paying appropriately and adequately. HCFA has had mixed success in refining some of its payment methods. For example, HCFA partially addressed problems with its initial methodology for introducing a resource-based practice expense component into the physicians' fee schedule when it issued a new methodology in 1998. Overall, we considered HCFA's new methodology to be acceptable. It better defined practice expenses by specialty and used a more straightforward, simple-to-understand approach. Although HCFA developed the new methodology using the best available data, the agency had limited data on resource use by some specialties, and it made a series of assumptions and adjustments without confirming their reasonableness. As a result, questions remain about whether payment is appropriate for certain procedures. To address these issues, we recommended that HCFA refine its relative value payments by identifying and then focusing on the areas where the data and methodology weaknesses have the greatest effect, but HCFA has done little to target its refinement efforts. Similarly, we have pointed out design flaws in the new payment methodologies for SNFs and home health agencies that could allow providers to increase payments by "gaming" these payment methods. HCFA has begun to address some, but not all, of these weaknesses.
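To illustrate the arithmetic of the resource-based fee schedule described above, the sketch below computes a payment from the three resource components. This is a simplified illustration only; the relative value units and conversion factor are hypothetical, not actual Medicare figures, and refinements such as geographic adjustments are omitted.

```python
# Simplified sketch of a resource-based fee schedule payment. The three
# components mirror the resource categories described above (physician
# work, practice expense, malpractice insurance); all numbers are
# hypothetical, not actual Medicare values.

def fee_schedule_payment(work_rvu: float,
                         practice_expense_rvu: float,
                         malpractice_rvu: float,
                         conversion_factor: float) -> float:
    """Payment = sum of relative value units (RVU) x dollar conversion factor."""
    total_rvu = work_rvu + practice_expense_rvu + malpractice_rvu
    return total_rvu * conversion_factor

# Example: a hypothetical midlevel office visit.
print(f"${fee_schedule_payment(0.93, 0.74, 0.05, 36.20):.2f}")  # $62.26
```

Under this structure, repricing a procedure means revising its component relative values rather than its dollar amount directly, which is why weaknesses in the underlying practice expense data propagate into the payments themselves.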
HCFA has been successful in performing one of its principal missions—ensuring that claims are generally paid quickly and at little administrative cost to the taxpayer. Medicare contractors process over 90 percent of Medicare claims electronically and pay "clean" claims, on average, within 17 days of receipt. In contrast, commercial insurers generally take longer to pay provider claims. Costs for processing Medicare claims are roughly $1 to $2 per claim—much less than the $6 to $10 or more per claim for private insurers or the $7.50 per claim paid by TRICARE—the Department of Defense's managed health care program. Nevertheless, some Medicare contractors' performance has been less than exemplary, and HCFA's lax and uneven oversight allowed performance problems to continue undetected. In the 1990s, several contractors were found to have defrauded the government or settled allegations of fraud for hundreds of millions of dollars. The alleged problems included deleting or destroying claims, failing to conduct proper audits, falsifying documentation needed to prove claims were for medically necessary services, and switching off the toll-free beneficiary inquiry lines when staff members were unavailable to answer calls within the prescribed amount of time. Many of these problems were discovered not through HCFA's routine oversight efforts, but through whistleblowers whose information sparked federal investigations that led to criminal and civil settlements. HCFA's oversight of its contractors' activities had several failings. The agency relied on unverified performance information provided by contractors and did only limited checking of each contractor's internal management controls. Furthermore, the agency's reviews of its contractors' performance and treatment of identified performance problems were inconsistent. To address these and other weaknesses, we made a number of recommendations to improve the rigor and consistency of HCFA's oversight. HCFA has taken steps to improve its management and oversight of contractors. It has adopted a more consistent and strategic approach for overseeing contractor performance, which is directed by a management board composed of senior executives. In addition, the agency has clarified accountability for contractor oversight, assigned additional staff to monitor and oversee contractors, and separated responsibility for contractor management from contractor evaluation. However, some of our recommendations for improvement have not been fully implemented, including those to establish a policy for systematic validation of essential contractor-reported data and to strengthen controls over accountability and financial management, including improving debt collection activities. While HCFA has focused on specific contractor functions that it believes need improvement, others may also need attention. For example, Medicare contractors handle nearly 15 million telephone inquiries from beneficiaries annually, but HCFA has not been able to adequately oversee contractor performance in this area because it has lacked performance data on beneficiaries' access to telephone customer service, the accuracy of responses to inquiries, and caller satisfaction. To better measure performance, the agency has begun to develop measures for telephone service, set standards, and monitor contractor performance. In addition to sharing information with beneficiaries, contractors also play a major role in communicating with providers.
How well they do this has become a greater concern, understandably, given that providers have had to adjust to numerous program changes and that increased attention is focused on potential improper payments. We have begun reviewing how CMS and other parts of HHS communicate with physicians to assess how Medicare program instructions are conveyed and whether communication efforts could be improved. Medicare is one of the federal government programs that we consider at high risk of improper payment because of its size and complex administrative structure. Safeguarding Medicare program payments has become an increased focus of HCFA's activities in the last few years. Although HCFA and its contractors have taken a number of steps to address improper payment, program vulnerabilities remain. Recent concerns have focused on three program integrity issues—improperly paid claims, the integrity of HCFA's new payment methods, and difficulties that providers face in understanding and complying with payment rules. Since 1996, the Office of Inspector General (OIG) in HHS has repeatedly estimated that Medicare contractors inappropriately paid claims worth billions of dollars annually. These claims successfully passed through Medicare's highly automated claims processing systems because the claims appeared valid on their face. Claims were disputed only after the OIG obtained the underlying patient medical records from providers and reviewed them in detail. The OIG and contractor staff could then determine that some services were not properly documented to support the claims, not medically necessary, coded improperly, or not covered. Such labor-intensive and detailed review of even a significant fraction of the millions of fee-for-service claims is not practical or efficient. It would involve significant administrative cost and impose a considerable burden on providers required to submit patient medical records. Because more than 90 percent of the improper payments the OIG identified were for claims that contained no visible errors, and because individual fee-for-service claims typically involve small amounts of money, the returns from an investment in such a review may not be cost-effective. Nevertheless, these large improper payment estimates reinforce the importance of having the agency and its contractors develop and implement effective strategies to prevent or detect such payments. The Congress aided HCFA in this effort by creating the Medicare Integrity Program (MIP) and giving HCFA a stable source of funding for program safeguard activities as part of the Health Insurance Portability and Accountability Act of 1996 (HIPAA). In fiscal year 2000, HCFA used its $630 million in MIP funding to support a wide range of efforts. These included conducting antifraud activities, auditing providers and managed care organizations, performing targeted medical review of claims, and awarding a competitive contract to a coordination of benefits contractor, which will help safeguard Medicare dollars by identifying when other companies, rather than Medicare, should pay claims as the primary insurer. Concentrating audit efforts on providers and reimbursement areas in which program dollars are most at risk has been a cost-effective approach to identifying overpayments. Based on HCFA's estimates, in fiscal year 2000, MIP saved the Medicare program more than $16 for each dollar spent.
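The improper payment estimates discussed above rest on projecting the results of detailed reviews of a random sample of claims to the universe of paid claims. A minimal sketch of that projection follows; the sample counts and dollar total are hypothetical, chosen only to show the arithmetic, and do not reproduce the OIG's actual sampling design.

```python
# Minimal sketch of estimating an improper payment rate from a random
# sample of paid claims, in the spirit of the OIG reviews described
# above. All figures are hypothetical.
import math

sampled_claims = 600        # hypothetical claims pulled for medical record review
claims_in_error = 42        # hypothetical claims found improper on review

error_rate = claims_in_error / sampled_claims
# Standard error of a sample proportion (simple random sampling assumed).
std_err = math.sqrt(error_rate * (1 - error_rate) / sampled_claims)

total_paid = 170e9          # hypothetical universe of fee-for-service dollars paid
print(f"estimated error rate: {error_rate:.1%} +/- {1.96 * std_err:.1%} (95% CI)")
print(f"projected improper payments: ${error_rate * total_paid / 1e9:.1f} billion")
```

Because each reviewed claim requires obtaining and reading the underlying medical records, the sample must stay small relative to the hundreds of millions of claims paid, which is why reviewing more than a tiny fraction of claims in this fashion is impractical.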
In addition to activities funded through MIP, HCFA has been conducting a range of other stewardship activities, such as revising its process for enrolling providers in Medicare to ensure that only legitimate providers are billing the program. The agency now has additional options for conducting safeguard activities because HIPAA gave it new authority to contract with entities other than the Medicare claims administration contractors to perform specific payment safeguard functions. Through a competitive bidding process, HCFA selected 12 entities to act as its program safeguard contractors (PSC) and has assigned them a variety of tasks. These have ranged from specific, focused assignments that supplement the work of the claims administration contractors to conducting most of the program safeguard activities for a contractor. PSCs are also conducting nationwide safeguard activities. This incremental approach to assigning work to PSCs is a prudent first step that will allow the agency to test how best to integrate these specialized contractors into Medicare program integrity efforts. The agency has faced difficulties, however, in determining where its safeguard activities could be improved, because it has lacked detailed information on payment accuracy by claims administration contractor and by type of provider or service. To develop a more refined understanding of how and why payment errors occur, the agency has an initiative to measure the error rate for each claims administration contractor. A PSC "validation" contractor has begun to randomly sample claims paid by contractors and to recheck the processing and payment decisions made. From the results, CMS will be able to target contractors whose best practices should be emulated by others and those that need improvement. Moving a larger share of program payments to methods that pay a global fee for a set of services creates new integrity challenges. Under global payment methods, providers face the risk of financial loss if their costs exceed their payments, while those who can furnish care for less than the global fee retain the difference. This gives providers incentives to skimp on services, which may compromise patients' quality of care. For example, managed care organizations participating in Medicare+Choice have incentives to inappropriately maximize the gains from their global payment by skimping on the delivery of services. Similarly, home health agencies are now paid a global payment for services provided during a 60-day episode of care, rather than being paid for each individual service. Thus, home health agencies can increase profits by reducing the number of visits provided during the payment period. In addition, no standards exist for the right amount of home health care for specific types of patients—particularly home health aide care, which accounts for a major share of home health visits. To reduce the system's vulnerability to exploitation, we have recommended that HCFA adopt a risk-sharing provision, whereby the government shares in a home health agency's excessive losses but protects the program from an agency's excessive gains. However, HCFA was concerned that any additional change to payment policy would be too confusing for home health agencies and has not agreed to implement the recommendation. Depending on their design, these global payment methods are also not immune to being gamed through increases in the services provided.
This is because the link between the amount of service provided—as determined by a provider—and payment has not been entirely broken. For example, payments to SNFs for serving beneficiaries are adjusted by a number of factors, including the amount of therapy services provided. This gives facilities incentives to raise their payment rates by providing more therapy services to beneficiaries than they otherwise would. Similarly, home health agencies have incentives to inappropriately increase the number of episodes of care provided, which could escalate, rather than constrain, Medicare spending. To protect program dollars, CMS needs information to monitor provider responses to payment changes and their effect on beneficiaries. Monitoring global payment methods is particularly important to ensure that providers do not skimp on services in ways that could negatively affect beneficiaries' health. However, HCFA's efforts to systematically gather and evaluate program data to monitor the impact of its SNF and home health payment reforms on providers and beneficiaries have not been sufficient to identify desirable or undesirable consequences. Furthermore, in Medicare+Choice, rather than developing proactive methods to monitor beneficiaries' access to services, HCFA has sometimes relied on complaints as the main indicator that enrolled beneficiaries may be experiencing problems in getting access to needed care. This is a weak mechanism because beneficiaries do not always understand the benefits that plans are expected to provide. We have made several recommendations that HCFA improve plan marketing and appeals literature so that beneficiaries can understand their benefits and appeal rights. The agency has implemented some of our recommendations and has established work groups to consider others. While we and the OIG have continued to encourage the agency to close programmatic loopholes that can lead to improper payment, CMS' safeguard efforts are viewed differently by some provider groups. Providers whose claims are in dispute have complained about the burden of medical review audits and about the fairness of some specific steps the contractors follow. CMS faces a difficult task in finding an appropriate balance between ensuring that Medicare pays only for services allowed by law and making it as simple as possible for providers to treat Medicare beneficiaries and bill the program. While an extensive claims review is undoubtedly vexing for the provider involved, relatively few providers actually undergo one. In fiscal year 2000, HCFA's contractors conducted medical claims review audits of only three-tenths of 1 percent of physicians—1,891 out of the more than 600,000 physicians who billed Medicare that year. We are beginning work to review several aspects of the agency's auditing and review procedures for physician claims. Providers' concerns about fairness may also emanate from the actions of others who oversee federal health care—such as the HHS OIG and the Department of Justice (DOJ)—which, in the last several years, have become more aggressive in pursuing possible health care fraud and abuse. In the mid-1990s, the OIG initiated a series of audits that targeted the billing practices of physicians at teaching hospitals. As we reported, the OIG intended to audit the major teaching hospital or faculty practice plan affiliated with each of the nation's 125 medical schools.
The OIG chose these institutions because, of the nation's 1,200 teaching hospitals, they had the largest numbers of residents and had received the most Medicare revenue—not because the OIG had reason to suspect that their billing activities were inappropriate. The medical community considered the audits costly and burdensome. We suggested to the OIG that a risk-based approach focused on the most problem-prone institutions would be a more effective use of federal resources and less burdensome to compliant institutions. The OIG agreed, but said that it could not do so in its ongoing work because it did not have techniques for narrowing the selection to the most problem-prone institutions. Providers have also charged that DOJ was overzealous in its use of the False Claims Act—a powerful enforcement tool with substantial damages and penalties. DOJ's efforts included a series of nationwide investigations of hospitals known as national initiatives. These initiatives—particularly the Laboratory Unbundling initiative, which began in 1994—have provoked considerable controversy. For example, the hospital community alleged that DOJ subjected many of the nation's hospitals to unwarranted investigations, resulting in large penalties for unintentional errors. Concerns with the Laboratory Unbundling initiative centered on the basis for selecting hospitals for audit, the reliability of the data used by the U.S. Attorneys' Offices, and the manner in which hospitals were treated. Ultimately, several of these offices acknowledged that the data they had relied on contained errors that could not be corrected. As a result, these offices withdrew from the initiative, and all the hospitals in these areas that had entered into settlement agreements had their settlement amounts returned. In June 1998, DOJ issued guidance to all its attorneys, including those in its U.S. Attorneys' Offices, that emphasizes fair and responsible use of the act in all civil health care matters. It instructs DOJ attorneys to determine—before they allege violations of the act—that the facts and the law sufficiently establish that a claimant knowingly submitted false claims. As we reported in August 1999, implementation of the guidance initially varied among U.S. Attorneys' Offices, and some offices had taken steps in their investigations, prior to the issuance of the guidance in June 1998, that were inconsistent with it to varying degrees. However, U.S. Attorneys' Offices had largely addressed their shortcomings in implementing the guidance by 2000. In our more recent March 2001 report, we found that DOJ's two newer initiatives were being conducted consistent with the guidance and that DOJ had improved its oversight of its U.S. Attorneys' Offices. A major responsibility of CMS is to oversee federal quality standards for the services delivered to Medicare beneficiaries. Because many of these quality checks are actually carried out by the states, a key CMS mission is working with the states to oversee the care provided by nursing homes, home health agencies, end-stage renal dialysis centers, and psychiatric and certain other Medicare-certified hospitals. We and the OIG have been studying HCFA's oversight of nursing home quality for several years and have found significant weaknesses in the federal and state survey and oversight activities designed to detect and correct quality problems in nursing homes.
For example, in 1999, we reported that about 1 in 4 of the nation's 17,000 nursing homes—an unacceptably high number—had care problems that caused actual harm to residents or placed them at risk of death or serious injury. Complaints by residents, family members, or staff alleging harm to residents remained uninvestigated for weeks or months. State surveys understated the extent of serious care problems, both because of procedural weaknesses in the surveys and because of their predictability. Federal mechanisms for overseeing state monitoring of nursing home quality were limited in their scope and effectiveness. In addition, when serious deficiencies were identified, federal and state enforcement policies did not ensure that they were corrected and remained corrected. We have made a number of recommendations to address these problems. HCFA generally concurred with our recommendations, and, in response, in 1998 the Administration introduced a series of initiatives focused on federal and state efforts to improve nursing home care quality. Certain initiatives seek to strengthen the rigor with which states conduct their required annual surveys of nursing homes. Others focus on the timeliness and reporting of complaint investigations and the use of management information to guide federal and state oversight efforts. To realize the potential of these nursing home quality initiatives, sustained efforts by CMS and the states are essential. Because the agency is phasing in the initiatives and states began their efforts from different starting points, much unfinished work remains. In September 2000, we reported that—following state efforts to use new survey methods to better spot serious deficiencies—the proportion of nursing homes nationwide with such deficiencies increased slightly. This could be due to better identification of problems by surveyors, but it could also be due to facility staff shortages during that period. Better detection and classification of serious deficiencies through the standard survey process will require further refinement of survey methods and more unpredictability in survey dates, which would limit the opportunities nursing homes have to prepare for them. The states whose nursing home inspection activities we most recently reviewed had improved their investigation of, and follow-up on, complaints, but were still not meeting HCFA's standard of investigating certain serious complaints within 10 days. These states also differed in how far they had progressed in establishing procedures to make it easier to file complaints and in developing tracking systems to improve their oversight of investigations by local district offices. As for the application of strengthened federal enforcement policies, more time must elapse before progress in this area can be assessed, although referrals of problem homes to the agency are on the rise. While recent attention has focused on quality of care in nursing homes, they generally receive more scrutiny than other providers do. Nursing homes are generally surveyed at least yearly; other facilities are surveyed much less frequently. For example, home health agencies were once reviewed annually but are now reviewed every 3 years. The OIG has also documented gaps in the surveillance of psychiatric hospitals and kidney dialysis facilities.
In addition, our work has shown that the number of HCFA-funded inspections of dialysis facilities has declined significantly. These unannounced inspections, which are the agency's primary tool for ensuring that facilities meet standards protecting health and safety, were conducted at only 11 percent of the dialysis facilities eligible for Medicare recertification in 1999, compared with 52 percent in 1993. When such surveys were conducted, they showed that noncompliance was a problem. To illustrate, in 1999, 15 percent of the facilities surveyed had deficiencies severe enough, if uncorrected, to warrant terminating their participation in Medicare. No examination of HCFA's record of Medicare management successes and shortcomings would be complete without recognizing the importance of the agency's having the necessary tools to carry out its mission. Critical to the agency's success are an organizational focus on results and accountability, coupled with adequate resources and the flexibility to deploy them effectively. CMS has not yet developed an effective performance-based culture—a shortcoming that limits ongoing efforts to manage effectively. Managing for results is fundamental to an agency's ability to set meaningful goals for performance, measure performance against those goals, and hold managers accountable for their results. It is part of the direction set for federal agencies by the Congress through the Government Performance and Results Act of 1993. In May 2001, we reported on the results of our survey of federal managers at 28 departments and agencies on strategic management issues. Overall, HCFA fared poorly on this survey. For example, HCFA was the second lowest among the agencies we surveyed in the percentage of managers who reported that they were held accountable for results to at least a great extent. In addition, the percentage of the agency's managers who reported having performance measures for the programs they were involved with was significantly below that of other government managers. The agency ranked lowest in the percentage of managers who reported having four key performance measures—output, efficiency, quality, and outcome measures—and second lowest in having a customer service measure. Measuring a program's performance in achieving its goals is essential to fostering a performance-based culture and managing for results. For example, such measures can be used to demonstrate whether intended results are being achieved and to gauge whether programs are operating efficiently. In addition to an organizational focus on managing for results, sufficient resources—in terms of both dollars and human capital—are vital to fulfilling the agency's multiple management responsibilities. These responsibilities include key oversight and stewardship activities and modernization of the agency's IT systems. However, CMS faces many competing priorities when trying to fund and staff Medicare-related activities. Over the years, HCFA's administrative dollars have been stretched thinner as the agency's mission has grown. For many years, budget pressures forced the Congress to make difficult decisions to limit discretionary spending. Like many other federal agencies, the agency has been operating with a discretionary administrative budget that has increased slowly. But during the last decade, mandatory spending on Medicare benefit payments has doubled.
Further, this was a period when the agency's workload increased appreciably as it sought to fulfill BBA Medicare mandates and to take on new non-Medicare programmatic responsibilities, such as implementing the State Children's Health Insurance Program (SCHIP). We and others have contended that too great a mismatch between the agency's administrative capacity and its designated mandate has affected HCFA's responsiveness and will leave the agency unprepared to handle Medicare reforms and future enrollment growth. In fiscal year 2000, Medicare's operating costs represented less than 2 percent of the program's benefit outlays. Although private insurers seek to earn a profit and incur other costs, such as those for advertising, they would not attempt to manage such a large and complex program with so comparatively small an administrative budget. Examples from the recent past show that sufficient resources are particularly important to support key oversight activities, such as ensuring proper payment of claims. In recent years, we have found that, because of resource limits, claims administration contractors checked a smaller percentage of claims, audited a smaller percentage of cost reports from institutional providers, and were unable to identify and collect some overpayments promptly. To ensure that program safeguards were strengthened, the Congress created MIP, which provided—among other things—stable funding for safeguard activities. Although MIP began in fiscal year 1997, funding for safeguard activities did not increase until fiscal year 1998, when the MIP budget increased from $440 million to $550 million. Total program safeguard appropriations are slated to increase annually until fiscal year 2003, when the appropriation will total $720 million. Resource issues have affected other oversight activities. In the area of nursing home quality, HCFA has made negligible use of its most effective technique for assessing state agencies' abilities to identify serious deficiencies in nursing homes—an independent survey performed by HCFA employees following completion of a state's survey. Conducting a sufficient number of these comparisons is important because of concerns that some state agencies may miss significant problems, but HCFA has lacked sufficient staff and resources to perform these checks. In addition, limited resources have affected HCFA's ability to oversee Medicare contractors. In fiscal year 2001, the agency requested and received funding for 100 additional positions to focus on key activities such as overseeing claims processing activities, monitoring payments to providers and suppliers, and using computer-based auditing techniques. Resource issues have also affected HCFA's ability to make capital investments in its information systems for managing Medicare. For example, partly because resources were funneled to Y2K and other high-priority activities, HCFA has had to postpone much-needed IT enhancements that could help the agency and its contractors conduct Medicare program monitoring and policy development activities more efficiently. Resource limitations have also delayed HCFA's development of a database using modern technology that could help the agency monitor health care quality and the appropriateness of provider payments. Some of Medicare's vital information systems are decades old and operate on software that is no longer commonly used.
The agency has recently begun to focus on developing systems that are easier to maintain and that can increase the agency's ability to translate its data into useful management information. The agency's current and planned IT projects include developing a set of databases using more modern technology, consolidating Medicare's claims processing systems, and improving the systems that maintain the program's managed care enrollment and payment data. However, the immediate pressing priorities of maintaining systems, keeping the program operating, and responding to congressional mandates leave less to spare for IT investments that could help the agency better manage Medicare. CMS' capacity for managing Medicare is also closely tied to the quality and strength of the agency's human capital. CMS has a reservoir of staff who are highly skilled in many aspects of health care and its financing. However, our prior and current work suggests that the agency lacks sufficient staff with expertise in some key areas, such as managed care arrangements, financial management, data analysis, rate-setting methodology, and IT. These shortages have affected the agency's ability to take on new and challenging tasks. For example, although we have identified information security as a governmentwide risk that has been recognized as a particular problem for CMS, the agency's Chief Information Officer told us that some IT security projects have been delayed primarily because of a lack of staff with the requisite skills. Furthermore, the agency has faced the challenge of dealing with increased responsibilities with fewer people. The BBA had 335 provisions requiring HCFA to make substantial changes to the Medicare program, and during 1998—a key implementation year—the agency was doing this work with about 1,000 fewer employees than it had in 1980. Compounding human capital concerns, CMS has a total of 49 senior executives to manage program activities accounting for billions of dollars in annual spending. In fiscal year 2002, federal benefit outlays for Medicare, Medicaid, and SCHIP are expected to reach approximately $400 billion. In fact, CMS' corps of senior executives is smaller than that of most other civilian agencies with significantly smaller annual expenditures. CMS' senior-level executives play a vital role in focusing staff on current mission priorities and guiding the agency on a strategic path to its future. They manage about 4,600 agency employees and also oversee the efforts of Medicare claims administration contractors, which together have about 22,000 employees. However, despite Medicare's size and importance, there is no official whose sole responsibility is to run the program. In addition to Medicare, top-level managers have oversight, enforcement, and credentialing responsibilities for other major health-related programs and initiatives, such as the Medicaid and SCHIP programs, and for all of the nation's clinical laboratories. These other programmatic responsibilities naturally require time and attention that would otherwise be spent meeting the demands of the Medicare program. Adding to concerns about current staffing, CMS is facing a potential loss of human capital with managerial and technical expertise through an impending wave of retirements. The agency has estimated that about 35 percent of its current workforce will be eligible to retire over the next 5 years.
Upcoming retirements heighten concerns we raised in both 1998 and 1999 about HCFA's loss of technical and managerial expertise due to its aging workforce. For example, in the 5 years prior to 1998, almost 40 percent of HCFA's employees had left the agency. To its credit, CMS is responding to this human capital challenge with a human resources planning effort to support the agency in making strategic staffing, development, and recruitment decisions. Part of CMS' challenge in planning its future workforce is to determine the right balance between work performed by CMS employees and work contracted out. In addition to its resource challenges, CMS faces statutory constraints that inhibit the agency from modernizing its management of fee-for-service claims administration—the bulk of its Medicare business. At Medicare's inception in the mid-1960s, the Congress authorized the government to use existing health insurers to process and pay claims. It also permitted professional associations of hospitals and certain other institutional providers to "nominate" their claims administration contractors on behalf of their members. When the program began, the American Hospital Association nominated the national Blue Cross Association to serve as its fiscal intermediary. Currently, the association is one of Medicare's three intermediaries and serves as a prime contractor for 26 local member plan subcontractors that process about 86 percent of all benefits paid by fiscal intermediaries. Under the prime contract, when one of the local Blue plans declines to renew its Medicare contract, the association—rather than CMS—nominates the replacement contractor. This process effectively limits CMS' flexibility to choose the contractors it considers most effective. The agency has also considered itself constrained from contracting with nonhealth insurers for the various functions involved in claims administration. The Congress gave HCFA specific authority to contract separately for payment safeguard activities and for claims administration for home health and durable medical equipment. Nevertheless, for a number of years the agency has sought more general authority for functional contracting and other Medicare contracting reforms. We recently testified that Medicare could benefit from the Congress' removal of limitations on CMS' contracting authority and from the use of full and open competition in the selection of claims administration contractors. We have also suggested that, should the Congress modify the Medicare claims administration contracting authorities, it consider requiring that HCFA report on its progress in implementing these new authorities. Further, we recommended that HCFA develop a strategic plan for managing claims administration contractors in this new contracting environment. In June 2001, the Administration proposed legislation to modify the Medicare claims administration contracting authority that, among other things, would permit—but not require—full and open competition. The proposal would allow CMS to select any entity it chooses, award separate contracts to perform specific claims administration functions, and use other than cost contracts. However, under the proposal, CMS would not have to use competitive procedures to select initial claims administration contractors or to renew contracts. We are concerned that if CMS is not required to use such competition, it may not identify and contract with the best entities to perform claims administration services.
Certain innovative approaches to contracting for services could be difficult to implement in a public program such as Medicare. Medicare was designed so that beneficiaries would have the freedom to choose among providers and so that any qualified provider willing to serve Medicare's beneficiaries could do so. Even though approaches such as developing a network of providers chosen for their quality and willingness to accept discounted fees could be advantageous for beneficiaries and taxpayers, CMS would face obstacles in implementing them. In a 1998 study, an expert panel concluded that the agency could benefit from a more focused effort to test and adapt such innovations in the program. However, broadly implementing the experimental innovations that prove successful may require new statutory authority. Considering Medicare's complexity, size, and statutory constraints, some contend that HCFA's management of Medicare has—on balance—been satisfactory, while others argue that it has not been acceptable. There is evidence that HCFA's success has been mixed and that the agency's challenges are growing. Effective governance of Medicare depends on finding a balance between flexibility and accountability—that is, granting the agency adequate flexibility to act prudently while ensuring that it can be held accountable for its decisions and actions. Moreover, because Medicare will play such a significant role in the nation's fiscal future, we believe it prudent to make an adequate investment to ensure that Medicare is professionally and efficiently managed. Achieving such a goal will require that the day-to-day operations of Medicare's traditional program be modernized and maintained, and that achieving program efficiency and effectiveness remain paramount. In written comments on a draft of this report, CMS said it was pleased that we had recognized the agency's progress in a number of key areas, including developing and implementing payment systems and strengthening oversight of Medicare contractors. However, CMS disagreed with our contention that—despite Medicare's size and importance—there is no official whose sole responsibility is to run the program. The agency noted that the Administrator of CMS has that responsibility. However, as we have pointed out, the Administrator also has many far-reaching responsibilities for oversight, enforcement, and credentialing for other major programs and initiatives. CMS has reorganized to centralize the management of the Medicare fee-for-service and managed care programs in two centers. Nevertheless, in the reorganization discussed in its comments, CMS did not indicate that it planned to designate one senior official whose sole responsibility would be the management of the Medicare program. In its comments, CMS agreed that more could be done to strengthen management of the Medicare program. CMS also discussed its plans for placing increased emphasis on responding to beneficiaries and providers and on improving the quality of care for Medicare and Medicaid beneficiaries, as well as how restructuring the agency around its major lines of business could help it achieve its mission. In addition, CMS provided technical comments, which we incorporated as appropriate. CMS' written comments are reprinted in appendix I. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date.
At that time, we will send copies of this report to the Secretary of the Department of Health and Human Services, the Administrator of the Centers for Medicare and Medicaid Services, appropriate congressional committees, and others who are interested. We will also make copies available to others on request. If you or your staffs have any questions, please call me at (312) 220-7600 or Sheila Avruch at (202) 512-7277. Other key contributors to this report were Hannah Fein and Sandra Gove.

Medicare Contracting Reform: Opportunities and Challenges in Contracting for Claims Administration Services (GAO-01-918T, June 28, 2001).
Medicare Management: Current and Future Challenges (GAO-01-878T, June 19, 2001).
Medicare: Opportunities and Challenges in Contracting for Program Safeguards (GAO-01-616, May 18, 2001).
Medicare Fraud and Abuse: DOJ Has Improved Oversight of False Claims Act Guidance (GAO-01-506, Mar. 30, 2001).
Medicare: Higher Expected Spending and Call for New Benefit Underscore Need for Meaningful Reform (GAO-01-539T, Mar. 22, 2001).
Major Management Challenges and Program Risks: Department of Health and Human Services (GAO-01-247, Jan. 2001).
High-Risk Series: An Update (GAO-01-263, Jan. 2001).
Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives (GAO/HEHS-00-197, Sept. 28, 2000).
Medicare Home Health Care: Prospective Payment System Could Reverse Recent Declines in Spending (GAO/HEHS-00-176, Sept. 8, 2000).
Medicare+Choice: Plan Withdrawals Indicate Difficulty of Providing Choice While Achieving Savings (GAO/HEHS-00-183, Sept. 7, 2000).
Medicare: Refinements Should Continue to Improve Appropriateness of Provider Payments (GAO/T-HEHS-00-160, July 19, 2000).
Medicare Payments: Use of Revised "Inherent Reasonableness" Process Generally Appropriate (GAO/HEHS-00-79, July 5, 2000).
Medicare: 21st Century Challenges Prompt Fresh Thinking About Program's Administrative Structure (GAO/T-HEHS-00-108, May 4, 2000).
Medicare Contractors: Further Improvement Needed in Headquarters and Regional Office Oversight (GAO/HEHS-00-46, Mar. 23, 2000).
Medicare: HCFA Faces Challenges to Control Improper Payments (GAO/T-HEHS-00-74, Mar. 9, 2000).
Medicare: Lessons Learned From HCFA's Implementation of Changes to Benefits (GAO/HEHS-00-31, Jan. 25, 2000).
Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality (GAO/HEHS-00-6, Nov. 4, 1999).
Medicare Post-Acute Care: Better Information Needed Before Modifying BBA Reforms (GAO/T-HEHS-99-192, Sept. 15, 1999).
Medicare: Program Safeguard Activities Expand, but Results Difficult to Measure (GAO/HEHS-99-165, Aug. 4, 1999).
Medicare Contractors: Despite Its Efforts, HCFA Cannot Ensure Their Effectiveness or Integrity (GAO/HEHS-99-115, July 14, 1999).
Balanced Budget Act: Any Proposed Fee-for-Service Payment Modifications Need Thorough Evaluation (GAO/T-HEHS-99-139, June 10, 1999).
Medicare+Choice: New Standards Could Improve Accuracy and Usefulness of Plan Literature (GAO/HEHS-99-92, Apr. 12, 1999).
Medicare Managed Care: Greater Oversight Needed to Protect Beneficiary Rights (GAO/HEHS-99-68, Apr. 12, 1999).
Medicare Physician Payments: Need to Refine Practice Expense Values During Transition and Long Term (GAO/HEHS-99-30, Feb. 24, 1999).
HCFA Management: Agency Faces Multiple Challenges in Managing Its Transition to the 21st Century (GAO/T-HEHS-99-58, Feb. 11, 1999).
Medicare: HCFA's Use of Anti-Fraud-and-Abuse Funding and Authorities (GAO/HEHS-98-160, June 1, 1998).
Medicare: HCFA Faces Multiple Challenges to Prepare for the 21st Century (GAO/T-HEHS-98-85, Jan. 29, 1998).
This section of the report describes the paper- and electronic-based check collection processes, presents statistics on the use of electronic and nonelectronic payments and types of check processing, and describes the Federal Reserve's role in check collection. Interbank checks are cleared and settled through an elaborate check collection process that includes presentment and final settlement. Check presentment occurs when checks are delivered, or check images are transmitted, to the paying banks for payment, and the paying banks must then decide whether to honor or return the checks (see fig. 1). Settlement of checks occurs when the collecting banks are credited and the paying banks are debited, usually through accounts held at either the Federal Reserve or correspondent banks. In the paper-based check collection process, banks of first deposit generally sort deposited checks by destination and dispatch them for collection. A bank of first deposit can physically collect a paper check through several methods:
- direct presentment of the paper check to the paying bank;
- exchange of the paper check at a clearing house in which the bank of first deposit and the paying bank are members;
- collection of the paper check through an intermediary, such as a correspondent bank or a Federal Reserve Bank; or
- some combination of the above methods.
When a paying bank decides not to pay a check, the bank typically returns the dishonored check to the bank of first deposit. Under the Uniform Commercial Code, the paying bank generally has until midnight of the day following presentment (the "midnight deadline") to return dishonored checks or send notices of dishonor. The paying bank may return a dishonored check, commonly referred to as a return item, directly to the bank of first deposit, through a clearing house association, if applicable, or through a returning bank (a bank handling a returned check), such as the Federal Reserve. Regulation CC was promulgated by the Federal Reserve Board in 1988 to implement the Expedited Funds Availability Act of 1987 (EFAA), which establishes the maximum periods of time that banks can hold funds deposited into accounts before those funds must be made available for withdrawal. Among other things, the EFAA and its implementing Regulation CC generally require banks to make funds from local checks available by the second business day after the day of deposit; funds from nonlocal checks must be available by the fifth business day after the day of deposit. At each step of the paper-based process, the check must be physically processed and then shipped to its destination by air or ground transportation. Some have suggested that truncating paper checks, or stopping them before they reach the paying bank, could lower check processing costs and benefit both the banking industry and the public. Under Regulation CC, to "truncate" means to remove an original check from the collection or return process and to send in its place either a substitute check or, by agreement, information relating to the original check (including data taken from the check's magnetic ink character recognition line or an electronic image of the check), with or without subsequent delivery of the original check (see fig. 2). Essentially, check imaging is a process through which a paper check is scanned and a digital image is taken of the front and back of the paper check.
The paper check may then, at some point, be destroyed, and the images may be stored in an archive maintained by the bank for retrieval if needed. When a paper check is imaged depends on the structure of a bank's back-office operations. Some banks have the capability to image a paper check at their branches, while others transport the paper to centralized locations where it is imaged. Once the images are taken, an image cash letter (ICL) is assembled and sent to the paying bank directly or to an intermediary (such as the Federal Reserve, a correspondent bank, or an image exchange processor) for ultimate presentment to the paying bank (see fig. 3). Since Check 21 was enacted, imaging technology has been further refined so that it is possible for a bank to image a paper check at its branches or automated teller machines (ATM)—commonly referred to as branch or ATM capture. In addition, some banks are beginning to offer a service to their customers called remote deposit capture, through which merchants can scan the paper checks they receive and electronically deposit the images at the bank. As discussed in the introduction to this report, electronic check processing was hampered by certain legal impediments that Check 21 addressed. Moreover, as we reported in 1998, perceptions about consumer preferences for receiving canceled checks also deterred electronic check processing. Because, under Check 21, checks drawn on any particular bank can be truncated by any bank across the country, banks cannot return the original canceled paper checks to their customers once the checks are imaged. At the time of our 1998 report, Federal Reserve officials and bank officials with whom we spoke expressed a belief that many consumers wanted their canceled checks returned. The popularity of the paper check as a retail payment instrument in the United States is waning. The Federal Reserve has estimated that the number of checks used in the United States peaked during the mid-1990s at around 50 billion checks per year. In its 2007 study, the Federal Reserve highlighted the decline in check usage as a retail payment instrument. It reported that both the number of checks written and the number of checks paid declined from 2003 through 2006. In 2006, 33.1 billion checks were written, compared with 37.6 billion in 2003, and the number of checks paid decreased from 37.3 billion to 30.6 billion over the same period. The number of checks written differs from the number of checks paid because paper checks that have been converted into automated clearing house (ACH) payments are included in the figure for checks written. Additionally, the Federal Reserve concluded that the share of retail payments made electronically was growing, while the share of check payments in total noncash payments was declining. Electronic payments—including debit and credit cards, ACH payments (including check conversions), and electronic benefit transfers (EBT)—amounted to two-thirds of the total number of noncash payments, which in 2006 totaled 93.3 billion. The share of check payments declined from 46 percent in 2003 to 33 percent in 2006 (see fig. 4). While check use has declined, check processing has increasingly become electronic. As shown in figure 5, from June 2006 through June 2008, the number of imaged checks deposited by collecting banks and received by paying banks grew steadily. In June 2006, banks deposited 206 million checks as images; in June 2008, they deposited 1.1 billion.
Similarly, the number of checks received as images by the paying banks has grown. In June 2006, paying banks received 89 million items; by June 2008, they received almost 852 million items. However, the number of substitute checks has not declined; it increased from 117 million in June 2006 to 283 million in June 2008. These checks represent paper that must be physically presented to paying banks through the collection system. The Federal Reserve operates a comprehensive, nationwide system for clearing and settling checks drawn on banks located throughout the United States. Its check offices accept paper check deposits and transport the paper checks to the paying banks. Since the effective date of Check 21, the Federal Reserve also sends and receives check images between banks. The Federal Reserve offers imaged check products—commonly referred to as the Check 21 products (FedForward, FedReceipt, and FedReturn)—for a fee to banks that use its check collection services. According to the Federal Reserve Board's 2007 Annual Report, of the approximately 10 billion checks (about one-third of the total 30.6 billion paid checks) processed through the Federal Reserve in 2007, 42.2 percent were deposited as images and 24.6 percent were received using Check 21 products. Further, in the month of July 2008, the proportions of checks deposited and presented as images using the Federal Reserve's Check 21 products increased to 77.8 percent and 54.4 percent, respectively. As a result of the declining check volumes, the Federal Reserve developed a long-term plan for restructuring its check processing operations. In 2003, the Federal Reserve had 45 check offices. Since then, the Federal Reserve has closed a number of offices or gradually eliminated their check processing operations. In June 2007, the Federal Reserve announced that its check services system would be consolidated into four regional check processing sites. As of September 30, 2008, the Federal Reserve had 15 check offices and was working toward the objective of maintaining four offices—at Atlanta, Cleveland, Dallas, and Philadelphia—by the end of the first quarter of 2010. Given the significant declines in paper check deposit volumes, the Federal Reserve's Retail Payments Office believes that the Federal Reserve likely will accelerate the consolidation schedule even further, reducing its check processing operations to perhaps one office by mid-2010. Check truncation has not yet resulted in overall gains in economic efficiency for the Federal Reserve or for the banks we surveyed, but Federal Reserve and bank officials expect efficiencies in the future. The expectation for electronic check processing was that it would lead to gains in economic efficiency—that is, that removing paper from the payment stream would lead to lower costs. Our analysis of Federal Reserve cost accounting data suggests that its costs may have increased since the passage of Check 21, which may reflect the concurrent maintenance of its paper processing infrastructure, investments in equipment and software for electronic check processing, and costs incurred in closing check processing sites. Estimates varied on whether the check truncation that Check 21 facilitated lowered costs for private banks, reflecting differences in the ways banks handle checks and payments and differences among their cost accounting systems.
For example, several of the 10 largest banks noted that maintaining a dual paper-electronic infrastructure had so far prevented them from achieving overall lower costs, although they had seen reduced transportation and labor costs. Check imaging and the use of substitute checks appear to have had a neutral impact on banks' fraud losses. We found—and the Federal Reserve's budget documents report—that check truncation has not decreased Federal Reserve costs, although it has contributed to decreased labor hours and transportation costs in Federal Reserve check services. To distinguish the effects of check truncation from other factors influencing the Federal Reserve's total costs for check clearing services, we modified econometric cost functions that Federal Reserve economists have used to assess the effects of check volumes on total costs. In particular, we sought to distinguish the effect of the increased use of check truncation following passage of Check 21 on total costs from the concurrent effects of the decrease in the number of checks written in the United States, changes in the volume of checks processed by the Federal Reserve, the Federal Reserve's consolidation of its check services, and the costs of labor, software, and other expenses associated with check processing services. With this consolidation of check offices, the Federal Reserve incurred an estimated $115 million in costs from 2003 through 2007, including severance and other payments, which would increase total check services costs. However, the Federal Reserve did recover all costs for its check services from 2005 through 2007. Consistent with our results, the Federal Reserve's annual budget reports from 2006 through 2008 reported that the Federal Reserve's budget for check services experienced cost overruns. Most recently, the 2008 annual budget review reported that the expense overrun was due mainly to greater systemwide costs in preparation for additional restructuring of check services (costs included $34.0 million for accrual of severance, equipment impairments, and other expenses). The 2007 annual budget review noted that total expenses for check services were to increase by $11.0 million, reflecting higher costs for Check 21-related supplies and equipment, as well as additional resources necessary to facilitate further consolidation into five regional check adjustment sites. The 2006 annual budget review stated: "Total check service expenses were budgeted to increase by $5.7 million, or 0.9 percent from the 2005 estimate. The increase reflects one-time costs to prepare further consolidations of check operations, as well as other initiatives underway to improve the efficiency of check operations, including investments in Check 21 technology to accommodate increased volumes." The Planning and Control System (PACS) is the Federal Reserve's cost accounting system for recording expenses, including the costs of its check operations. We analyzed PACS data on check processing to determine whether electronic check processing had an effect on total processing costs. Our analysis builds on previous research by economists at the Federal Reserve. The analysis includes estimation of econometric cost functions using quarterly data from the first quarter of 1994 through the fourth quarter of 2007. We chose 1994 as the beginning point for the analysis based on conversations with Federal Reserve officials about the data and in order to provide adequate coverage of the period before and after enactment of Check 21.
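To make the estimation approach concrete, the sketch below shows a log-linear cost function of the general kind described here, fit by ordinary least squares with the statsmodels library. The file name, column names, and exact specification are illustrative assumptions, not the actual model or data we used.

```python
# Illustrative log-linear cost function for check services, estimated by
# ordinary least squares. Column names are hypothetical stand-ins for the
# explanatory variables discussed in the text.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pacs_quarterly.csv")  # hypothetical quarterly PACS extract

X = pd.DataFrame({
    "log_volume":  np.log(df["checks_processed"]),
    "log_returns": np.log(df["return_items"]),
    "log_wage":    np.log(df["wage_index"]),
    "offices":     df["check_offices"],
    "check21":     df["post_check21"],  # 1 for quarters after Check 21 took effect
})
X = sm.add_constant(X)
y = np.log(df["total_cost"])

results = sm.OLS(y, X).fit()
# An insignificant coefficient on "check21" would be consistent with the
# finding discussed below: no measurable drop in total costs to date.
print(results.summary())
```

In a specification of this kind, the coefficient on the Check 21 indicator captures any shift in total costs after the act's effective date, holding check volumes, office counts, and input prices constant.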
These cost functions estimate the effects that different explanatory variables may have on the Federal Reserve's total costs for check services. Explanatory variables include the total volume of checks processed, the introduction of electronic processing or the volume of checks processed electronically, the number of return items, the number of Federal Reserve check processing offices, whether Check 21 was in effect, and wage and price indexes. The cost functions permit isolating the effect of Check 21 from the effects of other variables on the Federal Reserve's total costs for check services. The results do not demonstrate any gains in economic efficiency, as measured by lower costs, in the Federal Reserve's check operations for the period from the passage of Check 21 through 2007. In particular, the variable that would measure a change in total costs following the effective date of Check 21 did not have a statistically significant effect on total costs. See appendix II for a more detailed discussion of the estimated cost functions.

In part, the results reflect costs associated with the concurrent closing of the Federal Reserve's check processing sites. While these closings should reduce costs in the long run, restructuring expenses incurred as part of the closings (such as severance pay for workers) represent up-front costs. The need to maintain dual infrastructures for paper and electronic check services also may explain the results. While Check 21 removed a barrier to electronic processing by creating the substitute check, it did not require that paper be removed from the process. The Federal Reserve therefore continues to process paper checks and must maintain the associated infrastructure even as it invests in new equipment for electronic processing. Further, the creation of the substitute check required investment in new equipment to print those instruments. For instance, a Federal Reserve Retail Payments Office official noted that the high-speed printing machines for substitute checks cost approximately $200,000 each and that the Atlanta processing site had purchased about 12 of them.

Although the move to electronic check services apparently has not yet led to overall cost savings, the Federal Reserve has seen decreases in transportation costs and work hours. With the reduced paper volumes accompanying check truncation, the Federal Reserve's transportation costs for check services decreased approximately 11 percent from the fourth quarter of 2001 through the fourth quarter of 2007 (see fig. 6). The Federal Reserve also has seen a decrease in the number of work hours for check services. Total work hours dropped from 2.6 million in the fourth quarter of 2001 to 1.3 million in the fourth quarter of 2007, a decrease of approximately 48 percent (see fig. 7).

Because the transition to imaging has been gradual throughout the banking industry, the 10 largest U.S. banks are still maintaining paper-based processing systems. As previously noted, Check 21 did not require banks to take any action other than accepting the substitute check. The 10 largest banks in the United States, based on deposit size, generally have large national branch networks and process large volumes of checks; consequently, they have a financial incentive to reduce the amount of paper they must sort and transport.
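To make the structure of this estimation concrete, the following sketch illustrates a log-log cost regression of the kind described above. It is a minimal illustration only, not the model we estimated: the variable names and the simulated quarterly data are assumptions for exposition, and appendix II describes the actual specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated quarterly data (1994Q1-2007Q4) standing in for the PACS data.
rng = np.random.default_rng(42)
n = 56
df = pd.DataFrame({
    "volume":  rng.uniform(2e9, 4e9, n),       # checks processed per quarter
    "returns": rng.uniform(2e7, 4e7, n),       # return items per quarter
    "offices": np.linspace(45, 18, n).round(), # check processing offices
    "wages":   np.linspace(100, 140, n),       # input price index (proxy)
})
df["check21"] = (np.arange(n) >= 43).astype(int)  # 1 from 2004Q4 onward

# Construct a cost series so the example runs end to end.
df["cost"] = np.exp(
    2.0 + 0.9 * np.log(df["volume"]) + 0.05 * np.log(df["returns"])
    + 0.01 * df["offices"] + 0.4 * np.log(df["wages"])
    + rng.normal(0, 0.03, n)
)

# Log-log cost function: the check21 coefficient captures any shift in
# total cost after the act took effect, holding other factors constant.
model = smf.ols(
    "np.log(cost) ~ np.log(volume) + np.log(returns)"
    " + offices + check21 + np.log(wages)",
    data=df,
).fit()
print(model.params["check21"], model.pvalues["check21"])
```

In the actual analysis, a statistically insignificant coefficient on the Check 21 indicator corresponds to the finding that no post-Check 21 cost reduction could be detected.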
In 2007, these banks individually had at least 350 million paper checks deposited by their customers, and some had considerably higher volumes, up to approximately 5 to 7 billion checks. But the 10 banks have achieved varying levels of electronic processing. Two of the 10 banks have not converted their check processing systems to imaging but plan to do so by early 2009, and seven banks have migrated to check imaging to some extent, with imaged volumes at various levels. As of 2007, on the basis of our data collection instrument, the imaged volume of the seven banks that sent electronic check images ranged from almost 4 percent to 60 percent of their overall check deposits, although imaged volumes have been growing for some of the seven banks. However, the seven imaging banks are maintaining dual processing systems to collect on checks deposited at their institutions. If a paying bank cannot receive an image, the collecting bank or an intermediary must either print a substitute check from the image or present the original paper check, as sketched below.

Officials from four banks provided us with information on how the continued use of paper presentment has affected their transition to check imaging and their level of cost savings. Federal Reserve officials noted that the willingness of private banks to invest in the equipment needed to process checks electronically demonstrated those banks' expectation of lower costs. One bank official told us that the bank still has to print substitute checks for presentment to the small institutions that cannot receive images, which adds to the bank's costs. Another bank noted that for banks that prefer to receive only paper, it will deposit the image with either the Federal Reserve or another intermediary, which then will print the substitute check to present for payment. An official representing this bank stated that the bank has to incur the additional cost of printing a substitute check or, if it goes through an intermediary, pay the intermediary's prices. The same bank official added that maintaining paper operations has delayed the ultimate potential savings from electronic check processing because the bank had to keep its transportation network in place to continue delivering paper checks. A third bank official reported to us that the fees paid to clear checks would decline as more banks converted to imaging. Finally, an official from the fourth bank advised us that midsize and regional banks were behind in their conversion to imaging because they are too large to outsource their check business but not large enough to have a financial incentive to invest in check imaging technology. Thus, they continued to use local clearinghouses where they could exchange their checks at very low cost. This official noted that these banks need a reasonable business case for investing in check imaging.

The declining volume of paper checks also may be inhibiting the migration of some banks to check imaging. As previously noted, from 2003 through 2006, the number of checks paid declined from about 37 billion to just over 30 billion. According to one bank trade association, some banks are still undecided about converting to imaging because they recognize that check volume is declining and wonder why they should invest in check processing technology. During our interviews, some of the seven imaging banks raised declining check volumes as an additional complication preventing some banks from converting to check imaging.
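The presentment decision underlying this dual infrastructure can be stated compactly. The sketch below is a hypothetical illustration of that decision logic only; the type and function names are invented and do not represent any bank's actual system.

```python
from dataclasses import dataclass

@dataclass
class PayingBank:
    name: str
    accepts_images: bool

def presentment_method(paying_bank: PayingBank, has_image: bool) -> str:
    """Choose how to present a check, per the options Check 21 allows."""
    if has_image and paying_bank.accepts_images:
        return "send image"  # fully electronic collection
    if has_image:
        # Paying bank is paper-only: the collecting bank, or an
        # intermediary such as the Federal Reserve, prints a
        # substitute check from the image.
        return "print substitute check"
    return "present original paper check"

print(presentment_method(PayingBank("paper-only community bank", False), True))
```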
Officials from the Federal Reserve acknowledged that, while the volume of checks is declining, paper checks would continue to be used long enough to warrant banks' investments in the technology for a more efficient check processing method. In both the paper-based and the image-based check processing systems, the bank of first deposit bears most of the cost of check collection; thus, it has the greatest financial incentive to convert to an image-based system. In addition, under EFAA, the bank of first deposit is required to release funds to the depositor within specified time periods; thus, it has an additional incentive to speed up processing. The paying bank has the least market incentive to migrate to imaging because it does not incur the costs of collection, such as transportation and clearing fees. Officials representing some of the four banks with the highest volumes of check image deposits and receipts raised concerns with us that some banks are refusing to migrate to the new imaging technology and that some action may be needed to encourage them to do so. One official told us that paying banks should bear more of the cost of check processing so that they would have a financial incentive to receive images. The official specifically stated that a group of banks has refused to implement the technology and accept images. Another bank official said that approximately 5 to 7 percent of banks have refused to convert to imaging and may need regulatory pressure to adopt the technology.

Under a paper-based check system, paper checks have to be sorted and transported at every step until they are presented to paying banks; as a result, transportation and labor are among banks' highest costs. From our analysis of responses to our data collection instrument, officials from the largest banks told us that labor was their largest category of check processing expenditures, followed by transportation. However, none of the seven banks that process checks electronically expect transportation to be a large expenditure category for future processing operations if imaging technology is fully implemented. According to our bank interviews, the air transportation networks of some of the largest U.S. banks have been reduced. Four banks (those with the highest volumes of check image deposits and receipts) have reduced intrabank and interbank transportation routes for checks, particularly air routes. By the end of 2009, two of the four will have eliminated their air transportation networks entirely. However, three of the four banks have not reduced costs for couriers and local transportation to the same extent as for air transportation because they still transport paper to central processing offices or to local clearinghouses. Two bank officials we interviewed told us that as more paper checks are imaged at the branch level, banks' ground transportation costs should fall. One bank official advised us that the earlier the bank can transmit check information to its processing system and capture the checks as images, the lower the bank's costs. The official added that the bank is working toward implementing branch "capture" (that is, conversion to an image) because the institution achieves better float management and eliminates courier transportation from its cost equation.
Another bank official told us that because his bank's transportation costs (for paper checks going from the branches to the central processing office) would not be reduced until the branches could capture check images, the bank had developed a pilot program for capture in a few branches. Although imaging was expected to result in savings in labor and transportation, the costs associated with installing and maintaining imaging equipment and the need to continue maintaining paper processing and clearing capabilities have prevented the realization of cost savings. According to a third bank, it is unclear when, if ever, it will recover its significant investment in imaging equipment, image archives, and image exchange enhancements, due in part to the absence of universal adoption of check imaging.

In contrast, we were told that transportation costs for banks that have not migrated to electronic processing may increase because, as the overall volume of paper checks declines (due to check imaging and consumer preference), transporting the remaining checks will become more expensive on a per-check basis. According to Federal Reserve officials, when fewer banks require the services of a particular transportation network, per-check transportation costs will increase for the banks still using the services because the network is transporting a smaller number of checks. The costs for the last bank on a specific route will be very high. According to one Federal Reserve official, overnight mail may in the future be the only practical option for these banks. In congressional testimony, the Director of the Federal Reserve Board's Division of Reserve Bank Operations and Payment Systems stated, "As banks improve their technological capabilities, they can reduce their reliance on air and ground transportation, especially shared transportation arrangements. The banks that remain tied to paper checks will continue to bear the costs of those arrangements."

Furthermore, bank officials told us that they incurred additional technology costs when they converted to a check imaging system. To exchange checks electronically with other banks, banks needed to adapt their systems both to send and to receive images. The technologies required for electronic check processing include hardware and software to image checks, archive images, and transmit image cash letters for collection. From the analysis of responses to our data collection instrument, six banks projected that technology costs would continue to be in the "great" or "greatest" range for the foreseeable future. On the basis of our interviews, the two largest imaging banks have recovered or will recover the investments they made in check imaging by 2009. An official representing one of these banks stated that the bank recovered its investment in imaging mostly through savings in labor and transportation. Moreover, the bank had less equipment, lower maintenance costs on the remaining equipment, and needed less back office space because of electronic processing. The banks that have not recovered their investments were still investing in image archive and image exchange enhancements. Like the Federal Reserve, banks have to deal with substitute checks and thus may be required to invest in printing them. From the analysis of responses to our data collection instrument, officials representing banks that have deposited images categorized expenditures for the printing of substitute checks in the "some" to "very great" range.
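The per-check transportation cost dynamic that Federal Reserve officials described follows from spreading a largely fixed route cost over a shrinking volume. As a purely hypothetical illustration (the dollar and volume figures below are assumptions, not reported data):

\[
c = \frac{F}{V}:\qquad \frac{\$1{,}000}{100{,}000\ \text{checks}} = \$0.01\ \text{per check},\qquad \frac{\$1{,}000}{10{,}000\ \text{checks}} = \$0.10\ \text{per check},
\]

where \(F\) is the nightly cost of a courier route and \(V\) is the number of checks it carries. As volume falls by a factor of 10, the per-check cost rises by the same factor, which is why the last banks on a route face the highest costs.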
In a follow-up interview, one bank official told us that the bank decided to outsource substitute check printing rather than make the investment itself, because substitute checks were a temporary measure that would not be used once all institutions were image-enabled; thus, the investment did not make sense for the bank. Another bank official acknowledged that substitute check printing has cost the bank hundreds of thousands of dollars to implement.

Smaller banks also have been migrating to electronic check processing. But according to our interviews with three smaller institutions (in this case, one bank and two credit unions), they have migrated all of their volume to electronic processing rather than operating two processing systems, as the largest banks have been doing. In addition, the three smaller institutions told us that they typically use a third-party processor, an image exchange processor such as Endpoint Exchange, the Federal Reserve, or another intermediary, such as a correspondent bank. For example, a credit union deposited and received images through the Federal Reserve Banks, while a medium-size bank, with assets of $4.4 billion, deposited and received images through an image processor and a correspondent. Officials representing the smaller institutions told us that it may be easier for small banks to migrate completely to imaging because their check volumes are minuscule in comparison with those of the largest banks and their back offices generally are less complicated. The bank with $4.4 billion in assets received approximately 15 million checks for deposit in 2007, compared with the 10 largest banks, among which the bank with the lowest volume of check deposits had 350 million checks deposited. Moreover, when these institutions migrate to check imaging, they generally acquire the imaging services of their intermediary or processor rather than creating their own.

In our interviews, representatives of the smaller institutions described how check imaging had affected their operations and costs. The bank with assets of $4.4 billion reduced its costs by reducing its transportation network. According to a bank official, the bank also expects to secure cost savings from its local courier routes in the future. But the bank had to invest in software to transfer check images to its correspondent bank. An official from a small credit union told us that check imaging allowed it to reduce its labor costs by half, after spending almost $6,000 on technology. Another credit union told us that it was able to eliminate three full-time equivalent positions because check processing and related operations (such as researching customer issues on payments) became more efficient. According to an official at the credit union, while the institution made some investments in technology and software, it had recovered the investment costs because of the staff reductions.

Based on a recent American Bankers Association (ABA) survey of its members about fraud in deposit accounts, the analysis of responses to our data collection instrument, and our interviews with banks, we found that the use of substitute checks and check imaging has had a neutral effect on fraud losses. In its 2007 survey of members, ABA reported that more than 92 percent of the bank respondents had not incurred any losses from substitute checks in 2006.
Of the 8 percent of banks that responded that they had incurred fraud and non-fraud losses from substitute checks, more than 80 percent also responded that these losses did not occur because the instruments were substitute checks rather than original checks. From our analysis of responses to our data collection instrument, the six largest banks that have migrated to electronic check processing noted that check imaging and the use of substitute checks had not affected the prevalence of losses from bad checks and that imaging has had a neutral or minimal effect on check fraud. Officials representing two of these banks explained in subsequent interviews that because checks are processed faster in the post-Check 21 environment, banks can catch a fraudulent item sooner. A third official told us that he had seen a slight decline in fraud losses since Check 21. Finally, from the analysis of the responses to our data collection instrument, four of the largest banks noted that they had not taken additional actions to alleviate the potential threat of losses from images of bad checks.

On the basis of our structured interviews with bank consumers, we found that only a small percentage of consumers preferred to receive canceled checks with their checking account statement. Of the bank consumers we interviewed, 12 (or about 11 percent) wanted their canceled checks returned, while 37 (or about 35 percent) preferred to use online banking capabilities to review their check payment activity. In general, consumers expressed a variety of preferences for how banks should provide them with the most complete information about their check payments activity. Also, most of the consumers were not significantly concerned about being able to demonstrate proof of payment using a substitute check or check image rather than a canceled check. Few of the consumers reported that they had suffered errors from the check truncation process. In addition to conducting consumer interviews, we reviewed consumer complaint data provided by federal banking regulators and found relatively few consumer complaints relating to Check 21.

We found that a small percentage of the bank consumers in our structured interviews preferred receiving canceled checks, while the remaining consumers preferred reviewing their check payments activity online or in a less paper-intensive format, such as image statements. As we noted in an earlier report, perceptions about consumer preferences for the return of their canceled checks deterred the adoption of electronic check processing. Based on the bank consumers we interviewed, it appears that the preference for canceled checks is diminishing. In our interviews, consumers expressed a variety of preferences for how banks should provide them with the most complete information about their check payments activity (see fig. 8). In particular, 12 of the 107 consumers, or about 11 percent, told us that they preferred receiving their canceled checks with their checking account statement. Some of these consumers believed that canceled checks were better for recordkeeping and more secure than electronic images in terms of protecting their privacy. Others in this group stated that they wanted to be able to review their handwriting and other details of the canceled paper check to ensure that the checks were not counterfeit or the signatures forged. However, most bank consumers we interviewed accepted the use of online banking to review their check payments activity.
Specifically, 37 of the 107 consumers, or about 35 percent, told us that they preferred reviewing check information and images online. Several consumers stated that they did not need the "extra paper" from canceled checks and image statements and that online review was more secure than receiving canceled checks. Some consumers stated that they enjoyed the convenience of reviewing their check payments activity online at any time. Twenty-eight of the 107 consumers, or 26 percent, preferred a combination of the various methods (check images, online review, paper checks, and substitute checks).

Most bank consumers reported that they were not significantly concerned about demonstrating proof of payment despite the changes to their checking accounts resulting from check truncation. For example, a consumer might pay a debt using a check, but the creditor might not properly record the payment and might then ask the consumer to demonstrate proof of payment. Under the check truncation process, the consumer most likely would have access only to a substitute check or an image of the canceled check, not the original canceled check. In our structured interviews, we asked consumers about their experience with demonstrating proof of payment. We found that 33 of the 108 consumers, or about 31 percent, had never been required to demonstrate proof of payment using canceled checks, substitute checks, or an image statement. We found that 58 of the 108 consumers, or about 54 percent, had used a canceled check to demonstrate proof of payment. We also found that 33 of the 108, or about 31 percent, had used a substitute check or image statement to demonstrate proof of payment. Most of these consumers reported that they had no difficulty using a substitute check or image statement, but some reported that creditors would not accept an image showing only the front of the check, so the consumer had to get copies of the front and back of the check from the bank. We then asked consumers whether they were concerned about having to demonstrate proof of payment using a substitute check or image statement rather than a canceled check. We found that 53 of the consumers, or about 49 percent, were "slightly" or "not at all" concerned about their ability to demonstrate proof of payment using a substitute check or image statement (see fig. 9). In particular, many of these consumers were confident that a substitute check or image statement contained all of the information necessary to demonstrate proof of payment. However, 35 of the consumers, or 32 percent, were "extremely" or "very" concerned about using a substitute check or image statement. Many of these consumers were concerned that having an image of only the front of the check might not be sufficient, particularly if they had experienced such difficulty in the past.

Few of the bank consumers we interviewed reported that they had suffered errors from the check truncation process. We asked consumers whether they had experienced errors such as the double-posting of an item, a forged signature on a check, a counterfeit check, or some other error involving canceled checks, substitute checks, or image statements. The consumers reported more errors involving canceled checks than substitute checks or image statements. Specifically, 28 of the 108 consumers, or about 26 percent, reported an error involving a canceled check and said they used the canceled check to resolve the error.
In contrast, only one consumer we interviewed reported suffering an error related to the double-posting of a debit and using a substitute check to resolve it. Also, 7 of the 74 consumers who reported receiving image statements, or about 9 percent, reported errors involving an image statement and said they used the statement to resolve the errors. See figure 10 for the distribution of reported errors involving canceled checks and image statements. Based on interviews with trade association and service vendor officials, we found that some banks have been correcting errors associated with the double-posting of a check before consumers experience them. These officials told us that double-posting initially was a significant problem for banks as they adopted check truncation technology. However, they also noted that many banks have now incorporated protections in their computer systems to identify duplicates before they reach the consumer, so that many consumers never see them when they review their bank statements.

We found that a small percentage of consumers complained to the federal banking regulators about matters relating to Check 21. In its April 2007 report, the Federal Reserve Board found that less than 1 percent of all complaints received by federal banking regulators related to Check 21. The results of our review of consumer complaint data on Check 21 corroborated the Federal Reserve Board's conclusion. Specifically, we reviewed consumer complaint data from the four federal banking regulators from October 28, 2004, through March 31, 2008, and found that 172 complaints had been submitted about Check 21. In comparison, in each year from 2005 through 2007, the regulators received approximately 35,000 consumer complaints overall. Of the 172 complaints relating to Check 21, we found that 78, or about 45 percent, were from consumers who wanted to continue receiving canceled checks. The federal banking regulators responded to such complaints by noting that banks have no legal requirement to return canceled checks to consumers and that the return of canceled checks depends on the contractual agreement between consumers and their banks. However, in these instances, the data showed that the banks involved generally agreed to send canceled checks to consumers whenever possible. In addition, another 30 of the 172 complaints, or about 17 percent, were from consumers concerned about the quality or clarity of image statements. Some of the banks we interviewed also mentioned image quality as a prominent consumer complaint but noted that they continue to seek solutions to image quality problems.

To the extent that banks have implemented electronic check processing, bank consumers have realized both benefits and costs relating to faster processing and access to information about their checking accounts. Faster check processing has helped some banks extend the cut-off time for same-day credit on deposits, which can result in faster availability of deposited funds. In addition, bank industry officials and some of the consumers we interviewed believe it is beneficial to receive simpler checking account statements with check images rather than canceled checks. Also, bank industry officials cited benefits to consumers from immediate access to information about checking account activity and improved customer service.
In addition, consumers can benefit specifically from a provision of Check 21 because they have the right to expedited recredit of their checking accounts if banks make certain errors associated with substitute checks. However, on the basis of our consumer and bank interviews, the extent to which consumers have benefited from expedited recredit is unclear. We also found that some consumers may incur fees for receiving canceled checks and check images with their checking account statements. Based on our review of available data from 2001 through 2006, it appears that fees for canceled checks have increased while fees for check images have remained relatively flat. In addition, the amount of the fees can vary depending on the type of checking account the consumer maintains.

We found that banks may have extended the cut-off time for accepting deposits for credit on the same business day as a result of the check truncation process and other check-system improvements. Generally, banks had established a cut-off hour of 2:00 p.m. or later for receipt of deposits at their main or branch offices and a cut-off of 12:00 p.m. or later for deposits made at ATMs and other off-premise facilities. These cut-off times provided the banks with the time needed to handle checks and transport them overnight to paying banks. The check truncation process and check imaging provide collecting banks with additional time to present checks to paying banks. As a result, banks may be able to establish a later cut-off hour, which would give consumers more time to deposit funds at the bank for same-day credit. Bank officials told us that they have started to adjust their cut-off times in some geographic areas in response to the growth of check truncation. Of the seven large U.S. banks that have started to migrate to check imaging, five told us that they have extended some of their deposit cut-off times at certain branches. For instance, one bank on average extended its cut-off time by 2 hours in the Northeast, and another bank had plans in place to make a similar 2-hour extension in selected markets. A third bank told us that it had extended the cut-off time for accepting deposits for credit on the same business day at certain ATMs to 8:00 p.m. in several major cities, such as Atlanta, Chicago, Los Angeles, and New York.

Although some consumers may have additional time for making deposits, they may not be able to withdraw their funds any sooner because the funds availability schedules of Regulation CC have not been amended following enactment of Check 21. The Federal Reserve Board recently concluded that much broader adoption of new technologies and processes by the banking industry must occur before check return times can decline appreciably and thereby permit a modification of the funds availability deadlines. The Federal Reserve Board found that banks of first deposit learn of the nonpayment of checks faster than they did when EFAA was enacted, but banks still do not receive most returned local or nonlocal checks before they must make funds available for withdrawal. However, the Federal Reserve's decision to consolidate its check-processing regions has had a direct effect on consumers in terms of the availability of their deposited funds under Regulation CC. Specifically, the consolidations have increased the proportion of local checks and thereby reduced the maximum permissible hold period from 5 business days to 2 business days for many checks.
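The effect of reclassifying a check as local can be illustrated with a short sketch. This is a simplified rendering of the Regulation CC maximum hold periods discussed above, assuming only the 2- and 5-business-day maximums; it ignores exceptions (for example, next-day items, new accounts, and large or redeposited checks), and the function names are ours.

```python
import datetime as dt

MAX_HOLD_DAYS = {"local": 2, "nonlocal": 5}  # business days after deposit

def add_business_days(start: dt.date, days: int) -> dt.date:
    day = start
    while days > 0:
        day += dt.timedelta(days=1)
        if day.weekday() < 5:  # Mon-Fri; ignores holidays for simplicity
            days -= 1
    return day

def latest_availability(deposit_date: dt.date, check_type: str) -> dt.date:
    return add_business_days(deposit_date, MAX_HOLD_DAYS[check_type])

deposit = dt.date(2008, 6, 2)  # a Monday
# Consolidating processing regions reclassifies many checks as local,
# moving the latest permissible availability from Monday plus 5 business
# days to Monday plus 2 business days.
print(latest_availability(deposit, "nonlocal"))  # 2008-06-09
print(latest_availability(deposit, "local"))     # 2008-06-04
```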
As previously noted, the Federal Reserve's check-processing regions are being consolidated into four regions by the first quarter of 2010. Because the processing regions are larger (and will become even more so), the number of local checks has been increasing. In addition, based on the Federal Reserve Board's study and our own research, it appears that banks are making depositor funds available earlier than the EFAA-established funds-availability schedules require. Specifically, the Federal Reserve's Check 21 study found that banks make about 90 percent of all consumer deposits of local and nonlocal checks available more promptly than required by EFAA. Moreover, it found that banks make funds available from the majority of consumer check deposits within 1 business day. We reviewed the customer account agreements for 5 of the 10 largest U.S. banks and found that the general policy of each bank is to make funds available to consumers on the business day after the day of deposit.

Bank industry officials and some consumers we interviewed noted that consumers may realize other benefits relating to access to information about check payments. For example, bank consumers may receive simpler checking account statements using image technology. So-called "image statements" consist of sheets of paper with multiple images of the checks that the consumer wrote and the bank processed since the last statement. In our interviews with 108 bank consumers, 75 consumers, or about 69 percent, stated that they received image statements. When asked about their preferred method of receiving information about check payments, 11 of the 108 consumers interviewed, or about 10 percent, stated that they preferred receiving image statements over canceled checks or online review of check payments activity. Some of the 11 consumers told us that they preferred image statements because, while they wanted a paper record of their check payments activity, they preferred not to handle and store canceled checks. Bank consumers who prefer to manage their checking accounts electronically also might benefit from immediate access to information about check payments. With the check imaging process and online access to their checking accounts, consumers can review check payments and images of their paid checks as soon as they are posted to the account and may recognize a problem sooner. With paper check processing, consumers must wait until the checking account statement arrives in the mail to review their check payments activity. Also, improved access to information can be beneficial to consumers when they need to work with the bank to resolve a problem.

One of the expected consumer benefits of Check 21 is the right to expedited recredit, but the extent to which consumers have benefited is unclear. The expedited recredit provision is considered a benefit to consumers because other banking laws governing checks do not prescribe specific amounts or time frames by which banks must recredit a customer's account. On the basis of our bank consumer and bank interviews, it appears that a small number of bank consumers have filed expedited recredit claims.
The right to expedited recredit exists if the consumer asserts in good faith that the bank charged the consumer's account for a substitute check provided to the consumer and that either the check was not properly charged to the consumer's account or the consumer has a warranty claim pertaining to the substitute check. The bank must recredit the customer's account unless it has provided the customer with the original check, or a copy of the original check that accurately represents all information on the original check, and demonstrated to the consumer that the substitute check was properly charged to the consumer's account. In our interviews with 108 consumers, 9, or about 8 percent, stated that they had received substitute checks with their main checking account statement, and none had exercised the right to expedited recredit. On the basis of the data provided to us by the 10 largest banks through the data collection instrument (which are not representative of the entire industry), we found that 3 banks received a small number of claims related to expedited recredit in 2007. Specifically, one bank reported that it fielded fewer than 1,000 claims; one received fewer than 10 claims; and the third reported that it received 1 claim. In an interview, a representative of another bank told us that the bank had not received any claims. Six other banks did not report any information on the number of claims received.

Some bank consumers can incur fees for receiving canceled checks and image statements, and the amount can depend on the type of checking account the consumer maintains. We reviewed data regarding bank fees for canceled checks and image statements acquired from Informa Research Services in conjunction with a report on bank fees. The data indicated that the average fee for obtaining canceled checks generally increased from 2001 through 2006, while the average fee for obtaining image statements remained relatively flat. For example, as shown in figure 11, the average check enclosure fee more than doubled, from $1.42 to $3.11. During the same period, the average check imaging fee rose from $0.40 to $0.49. The Informa data also indicated that banks may charge different amounts for check enclosures and check imaging depending on the type of checking account. Specifically, the Informa data indicated that non-interest, free checking accounts generally had the highest fees for check enclosures and check imaging, while senior checking accounts generally had the lowest. For example, in 2006 the average check enclosure fees for a non-interest, free checking account and a senior checking account were $3.75 and $2.45, respectively, compared with $3.11, the average check enclosure fee across all accounts Informa surveyed. Furthermore, the average check imaging fee for a non-interest, free checking account in 2006 was $0.84, and the average check imaging fee for a senior checking account was $0.18, compared with $0.49, the average check imaging fee across all accounts Informa surveyed. A relatively small number of the bank consumers we interviewed reported that their bank charged a fee for obtaining canceled checks or image statements, and some of the banks we interviewed reported that they charged a fee for providing canceled checks.
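The expedited recredit conditions described at the start of this discussion reduce to a small amount of logic. The sketch below is a simplified paraphrase for illustration, not the statutory text; the parameter names are invented.

```python
def must_recredit(
    asserted_in_good_faith: bool,
    charged_for_substitute_check: bool,
    improperly_charged_or_warranty_claim: bool,
    bank_provided_original_or_accurate_copy: bool,
    bank_showed_charge_was_proper: bool,
) -> bool:
    """Simplified paraphrase of the Check 21 expedited recredit conditions."""
    claim_valid = (
        asserted_in_good_faith
        and charged_for_substitute_check
        and improperly_charged_or_warranty_claim
    )
    bank_rebutted = (
        bank_provided_original_or_accurate_copy
        and bank_showed_charge_was_proper
    )
    return claim_valid and not bank_rebutted

# A valid claim that the bank has not rebutted requires recredit.
print(must_recredit(True, True, True, False, False))  # True
```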
Specifically, 23 bank consumers, or about 21 percent of the consumers we interviewed, told us that their bank charged a fee for obtaining canceled checks. Two consumers stated that they switched to online review of their check payments activity to avoid paying a fee for receiving canceled checks. Also, as reported above, 12 of the 108 bank consumers we interviewed preferred receiving canceled checks to review their check payments activity. Moreover, 18 bank consumers, or about 17 percent, reported that their bank charged a fee for obtaining image statements. Two of the banks we interviewed charged a fee if consumers wanted to receive canceled checks; for example, one bank stated that its customers paid $2 for receiving canceled checks if they also paid a monthly service fee. Other bank officials we interviewed stated that their banks did not charge a fee for image statements.

In addition, faster check processing may cause consumers to lose "float." Float is the time between the payment transaction and the debiting of funds from a bank consumer's account. The check truncation process may result in checks clearing a consumer's account more quickly than under traditional check processing. However, deposited funds may not be available to consumers more quickly because, as noted above, Regulation CC's funds availability deadlines have not changed. According to our recent report on bank fees, consumer groups and bank representatives believe that the potential exists for an increased incidence of overdrafts if funds are debited from a consumer's account faster than deposits are made available for withdrawal. However, we identified little research on the extent to which check truncation has affected the occurrence of overdrafts and nonsufficient funds fees.

We provided a copy of a draft of this report to the Federal Reserve Board, which provided us with written comments that are reprinted in appendix III. The Federal Reserve Board agreed with our overall conclusion that, over the past 4 years, the banking industry has made substantial progress toward establishing an end-to-end electronic check-processing environment. In commenting on this report, the Federal Reserve Board noted that the Federal Reserve Banks expect that by year-end 2009, more than 90 percent of their check deposits and presentments will be electronic. The Board also commented that the ongoing transformation to an electronic check-processing environment has not been without cost. As noted in our report, the Federal Reserve Banks have reduced the transportation costs and work hours associated with their check services. According to the Federal Reserve Board, the Reserve Banks earned net income of $326 million on their check services from 2005 through 2007. The Federal Reserve Board concurred with a number of the consumer benefits identified in the report: faster funds availability on check deposits due to later deposit deadlines, quicker access to account information, and improved customer service. In addition, the Board provided us with technical comments, which we incorporated as appropriate. We also sent a draft of this report to the Federal Deposit Insurance Corporation, the Office of the Comptroller of the Currency, and the Office of Thrift Supervision. Only the Office of the Comptroller of the Currency provided technical comments, which we incorporated as appropriate.
We provided sections of the draft of this report to bank officials for their technical review, and several of them provided technical comments, which we incorporated as appropriate. We are providing copies of this report to other interested congressional committees. We are also providing copies of this report to the Chairman, Board of Governors of the Federal Reserve System; the Chairman, Federal Deposit Insurance Corporation; the Comptroller of the Currency, Office of the Comptroller of the Currency; the Director, Office of Thrift Supervision; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

The Check Clearing for the 21st Century Act of 2003 (Check 21) mandated that GAO evaluate the implementation and administration of Check 21. The report objectives are to (1) determine the gains in economic efficiency from check truncation and evaluate the costs and benefits to banks and the Federal Reserve System (Federal Reserve) from check truncation, (2) assess consumer acceptance of the check truncation process resulting from Check 21, and (3) evaluate the costs and benefits to consumers from check truncation. To estimate the gains in economic efficiency from check truncation and evaluate the costs and benefits to banks from check truncation, we separately analyzed costs for the check operations of the Federal Reserve and for a selected group of banks. We used data from the Federal Reserve's cost accounting system, known as the Planning and Control System (PACS), for the period beginning 10 years before the effective date of Check 21 (1994) through 2007. We modeled the Federal Reserve's total check processing costs as different functions of variables such as the volume of checks processed, the volume of returned checks, the number of Federal Reserve check processing offices, and general wage and price indexes. The specified cost functions allowed us to use standard econometric methods to estimate the effects of these variables on the Federal Reserve's total check processing costs for 1994 through 2007. Because data on the prices of input factors associated with the Federal Reserve's check processing operations are not available, we also used data from the Department of Commerce's Bureau of Economic Analysis (BEA) and the Department of Labor's Bureau of Labor Statistics (BLS) in our estimation as alternative measurements for the prices of these input factors. For example, we used BLS average hourly earnings for all private sectors as an alternative measurement for the Federal Reserve's labor cost, BEA's price deflator for equipment and software by nonresidential producers as an alternative measurement for communications equipment and transit costs, and BEA's gross domestic product price deflator as an alternative measurement for the costs of all other input factors. We assessed the quality of all the above data and found them to be sufficiently reliable for our purposes. We also discussed Federal Reserve check processing costs and our econometric cost model with staff at the Federal Reserve.
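Schematically, and in notation we introduce here only for exposition (appendix II discusses the estimated specifications), this modeling approach treats total check processing cost as

\[
C_t = f\!\left(N_t,\; R_t,\; O_t,\; D^{c21}_t,\; w_t,\; p^{eq}_t,\; p^{gdp}_t\right),
\]

where \(C_t\) is the Federal Reserve's total check processing cost in quarter \(t\); \(N_t\) and \(R_t\) are the volumes of processed and returned checks; \(O_t\) is the number of check processing offices; \(D^{c21}_t\) indicates whether Check 21 was in effect; and \(w_t\), \(p^{eq}_t\), and \(p^{gdp}_t\) are the BLS average hourly earnings series and the BEA equipment-and-software and gross domestic product deflators used as input price proxies.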
See appendix II for a detailed discussion of our econometric cost functions. While the Federal Reserve has consistent cost accounting data, cost accounting practices vary throughout the banking industry, preventing a similar analysis of private-sector costs. To evaluate the costs and benefits to banks from check truncation, we focused our data collection and analysis on the 10 largest banks in the United States, based on deposit size as of March 25, 2008. The check volume at the 10 largest U.S. banks represents a significant segment of total paid check volume: in 2007, these banks presented almost 13 billion checks for collection, compared with the approximately 30 billion checks paid in 2006. Thus, we determined that these banks should have a financial incentive to reduce the amount of paper that has to be sorted and transported. We created a data collection instrument to obtain qualitative cost information about the following issues: (1) the extent to which the banks deposited and received checks as images; (2) the primary costs related to paper check processing; (3) the extent of the investment that banks made to exchange check images; (4) the level of cost savings banks achieved, if any, including changes in labor and transportation costs through the use of image technology; and (5) the impact of check imaging and the use of substitute checks on the prevalence of bank losses from fraudulent checks. Officials from the Electronic Check Clearing House Organization, commonly known as ECCHO, also reviewed the data collection instrument. We sent it to the 10 banks and received responses from 9. At an early stage of our engagement, we also interviewed an official representing the bank that did not provide a response. We conducted follow-up interviews with a number of the banks to request clarification of their responses.

We also sent the data collection instrument to 12 smaller institutions, including credit unions, to understand the small bank experience with check imaging. These institutions, whose assets ranged from less than $500 million to $5 billion, were selected from ECCHO's list of participating members. In addition, our selection criteria included whether these smaller institutions were located in metropolitan or nonmetropolitan areas. We received completed forms from five of these institutions, but two had not migrated any of their volume to check imaging. We conducted subsequent interviews with the three institutions that had. We made several attempts to contact the nonrespondents through e-mail messages and follow-up telephone calls. In addition, we interviewed officials from a corporate credit union and a banker's bank.

To assess consumer acceptance of the check truncation process resulting from Check 21, we conducted in-depth structured interviews with a total of 108 adult consumers in three locations (Atlanta, Boston, and Chicago) in May 2008. We contracted with NuStats, Inc., a private research and consulting firm, to recruit a sample of consumers who generally represented a range of demographics within the U.S. population in terms of age, education level, and income. However, the consumers recruited for the interviews did not form a random, statistically representative sample of the U.S. population; therefore, we could not generalize the results of the interviews to the total relevant population. Additionally, the self-reported data we obtained from consumers are based on their opinions and memories, which may be subject to error and may not predict their future behavior.
Consumers had to speak English and meet certain other conditions: having primary responsibility in the household for balancing the financial account that allows paper check writing; having received canceled original checks in paper form with the checking account statement at some point since 2000; and not having participated in more than one focus group or similar in-person study in the 12 months before the interview. We achieved our sample recruitment goals for all demographics, with the exception of the age category "65 plus" and the education category "some high school or less." In addition, our sample comprised 64 women and 43 men. We determined that not achieving these goals had a minimal impact on our work. See table 1 for further demographic information on the consumers we interviewed.

During these interviews, we obtained information about consumers' experience with, and opinions about, changes to their checking accounts resulting from the check truncation process. Our interviews included a number of standardized questions and, as necessary, more tailored follow-up questions to understand the answers more fully. All consumers were asked about their current experience with their checking accounts and their preferred method of making retail payments. The interviews focused on consumer experience with canceled checks, substitute checks, and check images, and on possible changes to their checking accounts since Check 21. More specifically, the structured interview of the 108 consumers included questions on the following issues: (1) bank fees charged to receive canceled checks, substitute checks, or image statements; (2) instances and subsequent resolution of errors involving their checking accounts; (3) their preferred method of receiving information from their bank about check payments activity (such as receiving their canceled checks, reviewing information online, or reviewing an image statement); (4) instances in which they had to demonstrate proof of payment using a canceled check or a check image and how these were resolved; (5) their level of concern about using a check image as proof of payment; and (6) whether their bank had extended its cut-off time for accepting deposits and the consumer's opinion about the merits of such an action. In addition, we asked nine questions about the consumers' experience submitting complaints to banks and federal banking regulators. This report does not contain all of the results from the consumer interviews. We reproduced the text from our structured interview instrument and tabulated the results from the questions in Questions for Consumers about Check 21 Act (GAO-09-09SP).

To evaluate the benefits and costs to consumers from check truncation, we interviewed staff from the federal banking regulators (the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the National Credit Union Administration, the Office of the Comptroller of the Currency, and the Office of Thrift Supervision) and collected consumer complaints about the implementation of Check 21 that were submitted to these agencies from October 28, 2004, through March 31, 2008. Our analysis of the consumer complaint data helped us identify the issues that we pursued in our structured interviews of 108 consumers.
While the regulators’ consumer complaint data may be indicative of the relative levels of different types of complaints, we did not rely solely on these data because these voluntary reporting systems rely on complainants to self-select themselves; therefore, the data may not be representative of the experiences of the general public. We also interviewed representatives from consumer advocacy groups, including Consumers Union, the Consumer Federation of America, and the U.S. Public Interest Research Group. Furthermore, we interviewed officials from the American Bankers Association and third-party processors. The data collection instrument discussed above also included questions about the potential benefits and costs of Check 21 for consumers. For example, we asked the banks for information about (1) their policies on returning canceled checks before and after Check 21; (2) the fees they charged to consumers for the return of canceled checks and image statements; (3) their assistance to customers in showing proof of payment using a canceled check, a substitute check, or a check copy; (4) the instances of expedited claims they received on substitute checks and their resolution; and (5) the complaints they have received about matters relating to Check 21 and whether they had changed their cut-off times for deposits at automated teller machines or branches in the last 2 years. In addition, we analyzed the conclusions and the methodology applied in the Federal Reserve Board’s Report to the Congress on the Check Clearing for the 21st Century Act of 2003, published in April 2007, to determine whether we could use the results in our report. The study constituted the Federal Reserve Board’s assessment of the banking industry’s implementation of Check 21 to date, as well as the continued appropriateness of the funds availability requirements of Regulation CC. We interviewed staff from the Federal Reserve Board about the methodology and conclusions in the report and we examined the design, implementation, and analysis of the survey instrument used for the study. We considered the overall strengths and weaknesses of the Federal Reserve’s data collection program, as well as specific questionnaire items relating to Regulation CC. On the basis of our review, we concluded that we could use the results in this report. To determine whether consumers may incur fees for receiving canceled checks and check images since the implementation of Check 21, we reviewed and analyzed data purchased from Informa Research Services (Informa) that included summary-level fee data from 2001 through 2006. The data included information on check enclosure and imaging fees. Informa collected its data by gathering the proprietary fee statements of banks, as well as making anonymous in-branch, telephone, and Web site inquiries for a variety of bank fees. It also received the information directly from its contacts at the banks. The data are not statistically representative of the entire population of depository institutions in the country because the company collects fee data for particular institutions in specific geographical markets so that these institutions can compare their fees against their competitors. That is, surveyed institutions are self-selected into the sample or are selected at the request of subscribers. To the extent that institutions selected in this manner differ from those which are not, results of the survey would not accurately reflect the industry as a whole. 
Informa collects data on more than 1,500 institutions, including a mix of banks, thrifts, credit unions, and Internet-only banks. The institutions from which it collects data tend to be large ones that hold a large percentage of the deposits in a particular market. Additionally, the company has access to individuals and information from the 100 largest commercial banks. The summary-level data Informa provided us for each data element included the average amount, the standard deviation, the minimum and maximum values, and the number of institutions for which data were available to calculate the averages. Informa also provided these summary-level data by institution type (banks and thrifts combined, and credit unions) and size (as shown in table 2). In addition, Informa provided us with data for nine specific geographic areas: California, the Eastern United States, Florida, Michigan, the Midwestern United States, New York, the Southern United States, Texas, and the Western United States. We interviewed representatives from Informa to gain an understanding of their methodology for collecting the data and the processes they had in place to ensure the integrity of the data. Reasonableness checks conducted on the data in 2007 identified missing, erroneous, or outlying values, and Informa Research Services representatives corrected any mistakes that were found. Also, in 2007, we compared the average fee amounts that Informa had calculated for selected fees for 2000, 2001, and 2002 with those in the Federal Reserve's "Annual Report to the Congress on Retail Fees and Services of Depository Institutions." The averages were found to be comparable to those derived by the Federal Reserve. While these tests did not specifically include check enclosure and check image fees, they did confirm our assessment of the Informa data system. Because the assessment conducted for our January 2008 report encompassed the checking fee data we used, we determined that the Informa Research Services data were sufficiently reliable for our current report.

We conducted this performance audit from September 2007 to October 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Check Clearing for the 21st Century Act of 2003 (Check 21) was intended to make the check payment system more efficient and less costly by facilitating wider use of electronic check processing without requiring any bank to change its current check collection practices. Prior to Check 21, a bank was required to present an original paper check to the paying bank for payment unless the paying bank agreed to accept presentment in some other form. Thus, to present checks electronically, a collecting bank had to enter into agreements with all or nearly all of the banks to which it presented checks. Because of these impediments, banks were deterred from making the necessary investments in electronic check processing. Check 21 addressed these impediments by authorizing a new paper negotiable instrument (the substitute check), which is the legal equivalent of the original check.
Other than accepting the substitute check, the act does not require banks to adopt electronic check processing, but it enables banks that want to truncate or remove the original paper checks from the check-collection system to do so more easily. Check 21 facilitates electronic check processing by allowing banks to use imaging technology for collection and to create substitute checks from those images for delivery to banks that do not accept checks electronically. To assess the implications for economic efficiency in the Federal Reserve System’s (Federal Reserve) check processing since Check 21 took effect in October 2004, we conducted a standard econometric analysis of the Federal Reserve’s quarterly accounting cost and volume data for the period from 1994 through 2007. This approach allowed us to model total check operating costs as a function of the total check presentment volume and the timing of Check 21, while separating cost effects from other relevant factors such as check return volume, number of check clearing offices, and labor wages. For this report, we refer to banks, thrifts, and credit unions collectively as banks. Many microeconomic textbooks have detailed discussions of cost functions; for example, see Hal R. Varian, Microeconomic Analysis, 3rd edition (New York, N.Y.: W.W. Norton & Company, 1993), chapter 5. The total check operating cost at time t (C_t) depends on the number of checks (items) processed during that period (N_t), the number of return items (R_t), the number of check clearing offices (O_t), input prices (p_k,t), and a dummy variable (D_c21,t) equal to one in quarters after Check 21 took effect:

ln C_t = α_0 + α_1 ln N_t + α_2 ln R_t + α_3 ln O_t + α_4 D_c21,t + Σ_k β_k ln p_k,t + ε_t. (1)

Total operating cost is expected to have a positive relationship with both the total number of items processed and the number of return items; that is, positive α_1 and α_2 in equation (1). The sign of the coefficient of O_t (α_3 in equation (1)) is ambiguous; it may be positive in the case of a cost savings or negative in the case of an increase in total costs. The coefficient of primary interest is that of D_c21,t (α_4) in the estimation. Consistent with microeconomic theory, we expect that an increase in input prices (p_k,t) will lead to an increase in total cost. For example, higher labor wage rates are expected to lead to higher total cost, seen as positive coefficients (β_k) for input prices in the estimation. Based on econometric studies, including some that specifically considered economies of scale for check processing, we modified the basic approach of equation (1)—yielding equation (2)—to control for quarterly fluctuations and trends over time and to consider the potential effects of Check 21 on the presence of scale economies in check clearing operations. Some of these studies include Ernst R. Berndt, The Practice of Econometrics: Classic and Contemporary (Mass.: Addison-Wesley Publishing, 1996), chapter 3; Robert M. Adams, Paul W. Bauer, and Robin C. Sickles, Federal Reserve Bank of Cleveland, “Scope and Scale Economies in Federal Reserve Payment Processing,” Working Paper 02-13 (November 2002); David B. Humphrey, “Scale Economies at Automated Clearing House,” Journal of Bank Research (Summer 1981), 71-81; and Paul W. Bauer and Diana Hancock, “Scale Economies and Technological Change in the Federal Reserve ACH Payment Processing,” Federal Reserve Bank of Cleveland Economic Review (1995) vol. 31, no. 3, 14-29. We estimated equation (2) with quarterly data from the Federal Reserve’s Planning and Control System (PACS) for the period from 1994 through 2007. Table 3 shows the summary statistics for selected variables.
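To make the specification concrete, the following sketch shows one way a log-log cost regression of this form could be estimated. It is a minimal illustration under stated assumptions, not the analysis performed for this report: the data file pacs_quarterly.csv and all column names (cost, items, returns, offices, wage, year, quarter) are hypothetical stand-ins for the PACS series, and statsmodels is simply one common tool for ordinary least squares.

```python
# Minimal sketch of the log-log cost regressions described above.
# All file and column names are hypothetical; the Federal Reserve's
# actual PACS data are not public in this form.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pacs_quarterly.csv")

# Dummy equal to 1 for quarters after Check 21 took effect (2004 Q4 on).
df["check21"] = (
    (df["year"] > 2004) | ((df["year"] == 2004) & (df["quarter"] >= 4))
).astype(int)

# Equation (1): ln C on ln N, ln R, ln O, the Check 21 dummy, and ln wage.
basic = smf.ols(
    "np.log(cost) ~ np.log(items) + np.log(returns) + np.log(offices)"
    " + check21 + np.log(wage)",
    data=df,
).fit()

# Equation (2)-style structural break: interacting the dummy with log
# volume lets the scale elasticity differ before and after the act;
# C(quarter) adds seasonal dummies for quarterly fluctuations.
structural = smf.ols(
    "np.log(cost) ~ np.log(items) * check21 + np.log(returns)"
    " + np.log(offices) + np.log(wage) + C(quarter)",
    data=df,
).fit()

print(basic.summary())
print(structural.summary())
```

In a log-log specification of this kind, each coefficient on a logged regressor is directly interpretable as an elasticity, which is why the discussion that follows reads the volume coefficient as the percentage change in cost from a 1 percent change in presentment.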
We estimated the logarithm of total check processing cost against the logarithms of total presentment items—image, paper, legacy, and substitute—and other related variables. Table 4 presents the results. The basic specification in table 4, which does not account for a possibly different cost structure in check processing, yields mostly statistically insignificant coefficients. The coefficient for the total number of items presented, however, is significant and positive, implying that a 1 percent increase in total presentment will result in a 1.34 percent increase in total cost. The coefficient for the Check 21 dummy variable (Check21), while negative, is not statistically significant. This result does not provide any support for the hypothesis that the introduction of Check 21 led to a decrease in Federal Reserve costs, although it is not possible to determine the extent to which this may be driven by the concurrent consolidation of Federal Reserve check services sites. Table 4 also shows the results of the estimation incorporating a structural break in the cost function for periods before and after the act, as described in equation (2). Though insignificant, the coefficient of total presentment is positive and less than 1, and the coefficient of the interacted variable of total presentment and the Check 21 dummy is negative (-0.25). If significant, the sum of the two coefficients would imply that the cost structure for the check operation in the post-Check 21 period differs from that in the pre-Check 21 period. However, the relatively short time series data for the post-Check 21 period increase the standard errors for all the coefficients of the interacted variables. Also, although insignificant, the coefficient of the Check 21 dummy is positive, implying that total cost, on average, was higher after Check 21 than before. In addition to the estimation results shown in table 4, we estimated alternative functional forms used in other similar studies for the relationships in equation (2). Because these functional forms generally require constructing a substantial number of interacted variables, the resulting multicollinearity and the limited data available make the results subject to high estimation errors and thus difficult to draw clear inferences from. We also tested the effects on the estimates of imposing a constraint suggested by economic theory. The standard errors for most of the coefficient estimates decrease, suggesting a decrease in multicollinearity, but the results are otherwise similar to the results without the constraint in table 4. To impose this constraint, we made some adjustments to the total costs and input prices. See William H. Greene, Econometric Analysis (Prentice Hall, N.J.: 1993), 503-507. Overall, these estimates should be interpreted with caution given the changes in technology embodied in electronic presentment and check truncation. These results are likely to change with additional quarters of data and the expected continuing increase in electronic presentment as a share of the Federal Reserve’s check processing. Also, as previously mentioned, the Federal Reserve’s ongoing effort to close check clearing office facilities has resulted in one-time consolidation and reorganization charges. These charges are included in total check operating costs, and although we tried to control for their effect by including the number-of-offices variable, it is plausible that the positive sign of the Check 21 dummy in our estimations reflects these charges.
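For readers working through table 4, the way the volume coefficients combine into a scale elasticity can be written out explicitly. This is a worked illustration using the α labels from equation (1) above; α_5 is our own label for the interaction coefficient in equation (2), and the -0.25 value is the (insignificant) point estimate reported in the text.

```latex
% Cost elasticity with respect to presentment volume in the
% structural-break specification (alpha_5 labels the interaction
% term ln N_t x D_c21,t; this labeling is ours, not the report's):
\frac{\partial \ln C_t}{\partial \ln N_t} = \alpha_1 + \alpha_5 D_{c21,t}
% Pre-Check 21 quarters  (D_c21 = 0): elasticity = \alpha_1
% Post-Check 21 quarters (D_c21 = 1): elasticity = \alpha_1 + \alpha_5
%                                              \approx \alpha_1 - 0.25
```

In the basic specification without the interaction, this derivative collapses to the single coefficient on log presentment, which is how the reported estimate translates into “a 1 percent increase in total presentment raises total cost by 1.34 percent.”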
Similarly, our analysis implicitly assumes that the Federal Reserve’s consolidation decisions are independent of the volume of checks that it processes. However, the data are not sufficient to explicitly model a relationship between the volume of checks and expectations about future volumes. We appreciate the opportunity to comment on the GAO’s report titled Check 21 Act: Most Consumers Have Accepted and Banks Are Progressing Towards Full Adoption of Check Truncation. We agree with the GAO’s overall conclusion that, over the past four years, the banking industry has made substantial progress towards establishing an end-to-end electronic check-processing environment. Today, more than three-quarters of checks deposited with the Federal Reserve Banks for collection are deposited electronically, and more than half are presented electronically. The Federal Reserve Banks expect that by year-end 2009, more than 90 percent of their check deposits and presentments will be electronic. This ongoing transformation to an end-to-end electronic check-processing environment has not been without cost. The banking industry and the Federal Reserve Banks have made significant technological investments to facilitate an electronic check-clearing system and have incurred incremental transition costs associated with processing both paper and electronic checks. The Federal Reserve Banks’ investments, however, have enabled them to significantly reduce their transportation costs and paper check-processing infrastructure. These cost reductions have been critical to the Reserve Banks’ ability to recover all of their actual and imputed costs of providing check services from 2005 through 2007 and earn a net income of $326 million. Further, as the Reserve Banks have consolidated their check-processing operations into a smaller number of check-processing regions, many checks that were previously classified as nonlocal checks subject to a five-day maximum permissible hold are now classified as local checks subject to a maximum two-day hold period. It is likely that within the next several years, all checks will be classified as local, subject to the shorter permissible hold period. Again, we appreciate the opportunity to review and comment on the GAO’s report and the efforts and professionalism of the GAO’s team in conducting this study. The following individuals made key contributions to this report: Debra R. Johnson, Assistant Director; Joanna Chan; Philip Curtin; Nancy Eibeck; Terence Lam; James McDermott; Carl Ramirez; Barbara Roesmann; and Paul Thompson.
Although check volume has declined, checks still represent a significant volume of payments that need to be processed, cleared, and settled. The Check Clearing for the 21st Century Act of 2003 (Check 21) was intended to make check collection more efficient and less costly by facilitating wider use of electronic check processing. It authorized a new legal instrument—the substitute check—a paper copy of an image of the front and back of the original check. Check 21 facilitated electronic check processing by allowing banks to use electronic imaging technology for collection and to create substitute checks from those images for delivery to banks that do not accept checks electronically. Check 21 mandated that GAO evaluate the implementation and administration of the act. The report objectives are to (1) determine the gains in economic efficiency from check truncation and evaluate the benefits and costs to the Federal Reserve System (Federal Reserve) and financial institutions; (2) assess consumer acceptance of the check truncation process resulting from Check 21; and (3) evaluate the benefits and costs to bank consumers from check truncation. GAO analyzed costs for the check operations of the Federal Reserve and a group of banks, interviewed consumers about their acceptance of electronic check processing and its costs and benefits, and analyzed survey data on bank fees. The Federal Reserve agreed with the overall findings of the report. Check truncation has not yet resulted in overall gains in economic efficiency for the Federal Reserve or for a sample of banks, although Federal Reserve and bank officials expect efficiencies in the future. GAO's analysis of the Federal Reserve's cost accounting data suggests that its costs for check clearing may have increased since Check 21, which may reflect that the Federal Reserve must still process paper checks while it invests in equipment and software for electronic processing and incurs costs associated with closing a number of check offices. However, GAO found that the Federal Reserve's work hours and transportation costs associated with check services declined from the fourth quarter of 2001 through the fourth quarter of 2007. Several of the 10 largest U.S. banks reported to GAO that maintaining both paper and image-based check processing systems prevented them from achieving overall lower costs, although they had reduced transportation and labor costs since Check 21 was enacted. Check imaging and the use of substitute checks appear to have had a neutral or minimal effect on bank fraud losses. Most bank consumers seem to have accepted changes to their checking accounts from check truncation. In interviews with bank consumers, the majority accepted not receiving their canceled checks and instead accessing information about their checking account activity online. Several reported that they did not need the "extra paper" from canceled checks and that image statements and online review were more secure than receiving canceled checks. Eleven percent of the 108 consumers interviewed still preferred to receive canceled checks. Most consumers reported that they were not significantly concerned about their ability to demonstrate proof of payment using a substitute check or check image rather than a canceled check, and few reported that they suffered errors from the check truncation process. Also, GAO found that the federal banking regulators reported few consumer complaints relating to Check 21.
To the extent that banks have employed check truncation, bank consumers have realized benefits and costs relating to faster processing and access to account information. GAO found that some banks have extended the hours for accepting deposits for credit on the same business day, which can result in faster availability of deposited funds for consumers. Based on consumer interviews, consumers have benefited from receiving simpler imaged account statements and immediate access to information about check payments. Check 21's expedited recredit (prompt investigation of claims that substitute checks were improperly charged to accounts and recrediting of the amount in question) also is considered a consumer benefit. However, based on GAO's consumer and bank interviews, it appears that only a small number of consumers have filed expedited recredit claims. Based on analysis of survey data on bank fees, GAO found that some consumers may incur fees related to receiving canceled checks and images. Since 2004, fees for canceled checks appear to have increased, while fees for images appear to have remained relatively flat.
The United States and many of its trading partners have long used laws known as “trade remedies” to mitigate the adverse impact of certain trade practices on domestic industries and workers, notably dumping (i.e., sales below fair market value) and foreign government subsidies that lower producers’ costs or increase their revenues. In both situations, U.S. law provides that a duty intended to counter these advantages be imposed on imports. Such duties are known as AD/CV duties. The process involves the filing of a petition for relief by domestic producer interests, or self-initiation by the U.S. Department of Commerce (Commerce), followed by two separate investigations: one by Commerce, which determines if dumping or subsidies are occurring, and the other by the ITC, which determines whether a domestic U.S. industry is materially injured by such unfairly traded imports. If both agencies make affirmative determinations, Commerce issues an order to CBP directing it to collect the additional duties on imports. These are known as AD/CV duty orders. No later than 5 years after publication of these orders, Commerce and the ITC conduct a “sunset review” to determine whether revoking the order would likely lead to the continuation or recurrence of dumping and/or subsidization and material injury. Congress enacted CDSOA on October 28, 2000, as part of the Agriculture, Rural Development, Food and Drug Administration and Related Agencies Appropriations Act to strengthen the remedial nature of U.S. trade laws, restore conditions of fair trade, and assist domestic producers. Congress noted in its accompanying findings that “continued dumping and subsidization . . . after the issuance of antidumping orders or findings or countervailing duty orders can frustrate the remedial purpose” of U.S. trade laws, potentially causing domestic producers to be reluctant to reinvest or rehire and damaging their ability to maintain pension and health care benefits. Consequently, Congress reasoned that “U.S. trade laws should be strengthened to see that the remedial purpose of those laws is achieved.” CDSOA instructs CBP to distribute AD/CV duties directly to affected domestic producers. Previously, CBP transferred such duties to the Treasury for general government use. Two agencies are involved in CDSOA implementation. The law gives each agency—the ITC and CBP—specific responsibilities for implementing CDSOA. The ITC is charged with developing a list of producers who are potentially eligible to receive CDSOA distributions and providing the names of these producers to CBP. CBP has overall responsibility for annually distributing duties collected to eligible affected domestic producers. CDSOA also makes CBP responsible for several related actions. Specifically, it charges CBP with establishing procedures for the distribution of payments and requires that CBP publish in the Federal Register a notice of intent to distribute payments and, based on information provided by the ITC, a list of affected domestic producers potentially eligible for the distribution. Both agencies had some start-up challenges and have made improvements in response to reports by their Inspectors General (IG). In September 2004, ITC’s IG found that the ITC had effectively implemented its part of the act but made several suggestions for enhancing the agency’s CDSOA efforts.
For example, it suggested that the ITC better document its policies and procedures for identifying and reporting eligible producers to CBP and improve its communication with companies regarding eligibility. In response, the ITC implemented these suggestions to, among other things, formalize and strengthen its procedures for identifying eligible producers, developing a list of potentially eligible producers, and transmitting the list to CBP. For example, the ITC updated its desk procedures, clarified certain responsibilities to support the staff responsible for maintaining the ITC list, and added guidance on CDSOA requirements to its website. In June 2003, the Treasury’s IG issued a report finding several major deficiencies in CBP’s implementation of CDSOA and made several recommendations. The Treasury’s IG found that CBP was not in compliance with the law because it did not properly establish special accounts for depositing and disbursing CDSOA payments, did not pay claimants within the required time frame, and did not institute standard operating procedures or adequate controls for managing the program. Specifically, Treasury’s IG noted that the absence of proper accounts, accurate financial data, and adequate internal controls had resulted in “overpayments of at least $25 million, and likely more.” Treasury’s IG also emphasized that several other issues warranted attention, including no routine verification of claims and significant amounts of uncollected AD/CV duties. In response, CBP consolidated the processing of claims and payments by establishing a CDSOA team in Indianapolis, Indiana; instituted procedures for processing claims and disbursements and for conducting claim verification audits; and started proceedings to secure reimbursements from the companies that had received overpayments. Despite these efforts, CBP still faces issues raised by the Treasury IG, such as the issue of uncollected duties. The United States is obligated to ensure that its trade remedy actions conform to its legal commitments as a member of the WTO, an international body based in Geneva, Switzerland. The WTO agreements set forth the agreed-upon rules for international trade. The WTO provides a mechanism for settling disputes between countries and serves as a forum for conducting trade negotiations among its 148 member nations and separate customs territories. WTO trade remedy rules involve both procedural and substantive requirements, and a number of U.S. trade remedies have been challenged at the WTO. WTO members that believe other members are not complying with their WTO obligations can file a dispute settlement case. The resulting decisions by a dispute settlement panel, once adopted, are binding on members who are parties to the dispute, and WTO rules create an expectation of compliance. Under WTO rules and U.S. law, however, compliance is not automatic. WTO dispute settlement panels cannot order the United States to change its law. Alternatively, the United States may choose not to comply and instead offer injured members mutually agreed upon trade compensation or face retaliatory suspension of trade concessions by the complainant members. A new round of global trade talks aimed at liberalizing trade barriers is now underway and includes discussions of possible clarifications and improvements to the WTO rules on antidumping and on subsidies and countervailing measures. U.S.
trade with members of the WTO totaled $2.1 trillion in 2004, giving the United States a considerable stake in these WTO negotiations, which aim to liberalize trade in agriculture, industrial goods, and services. Three key features of CDSOA guide and affect agency implementation. These features (1) determine company eligibility to receive CDSOA disbursements, (2) shape the allocation of CDSOA disbursements among companies based on their claimed expenditures, and (3) specify milestones that agencies must achieve when implementing the act, including a tight time frame for disbursing funds. CDSOA establishes criteria that restrict eligibility for CDSOA disbursements. As guidance for agency implementation, these criteria raise issues because (1) two-thirds of the orders in effect predate CDSOA, (2) ITC investigative procedures were not designed to, and do not result in, collecting information on support of petitions from all industry participants, and (3) other factors further limit company eligibility. Some companies deemed ineligible regard these criteria as unfair, and several have initiated legal action to secure eligibility. The law restricts eligibility to “affected domestic producers”—namely, any “manufacturer, producer, farmer, rancher, or worker representative (including associations of these persons)” that (1) petitioned the ITC or supported a petition for relief to the ITC that resulted in an AD/CV duty order and (2) remains in operation. The law also requires the ITC to prepare a list of potentially eligible producers for CBP, which publishes it in advance of each annual distribution. The law applies only to orders in effect on or after January 1, 1999. CDSOA further specifies that support must be communicated to the ITC through a letter from the company or a response to an ITC questionnaire. Successor companies or members of an association may also be eligible for CDSOA distributions. Conversely, CDSOA deems ineligible those companies that (1) opposed a petition, (2) ceased production of a product covered by an order, or (3) were acquired by companies that opposed a petition. These eligibility criteria create special problems when older AD/CV orders are involved. Our analysis of ITC data reveals that roughly two-thirds (234 out of 351) of the AD/CV duty orders in effect as of April 15, 2005, precede CDSOA. The application of CDSOA to orders that predate the law’s enactment raises concern because, for AD/CV relief petitions that were investigated before CDSOA was enacted, producers had no way of knowing that their lack of expression of support for the petition would later adversely affect their ability to receive CDSOA disbursements. Moreover, firms that began operations or entered the U.S. market after the ITC’s original investigation are not eligible to receive CDSOA distributions. For petitions that have been investigated since CDSOA was enacted, producers would likely be aware of this linkage. The ITC and CBP told us that in a recent case involving shrimp, industry associations reached out broadly to ensure producers were aware of the need to communicate support to the ITC. Similarly, officials from a law firm that works with importers told us they were aware of such industry association efforts in cases involving live swine. However, in examining seven industries, we spoke to several ineligible companies that were frustrated because they had not expressed support during, or in some cases had not even known about, AD/CV investigations conducted before CDSOA’s adoption.
The ITC relies on company data that are sometimes incomplete, and this further limits eligibility. CDSOA’s criteria link companies’ eligibility to a process the ITC has long followed in investigating AD/CV petitions by U.S. domestic industry interests for relief from unfair imports. However, the ITC’s investigative process does not result in collecting information from all industry participants, because it is intended for purposes other than CDSOA. The ITC’s primary role in AD/CV investigations is to define the scope of the industry that is affected by competition from imported goods and to determine whether the industry has suffered or been threatened with material injury as a result of dumped or subsidized imports. The ITC collects information from U.S. producers, primarily by surveying them. ITC officials told us that they generally strive to cover 100 percent of industry production in their surveys and usually receive responses from producers accounting for a substantial share of production. In situations with a relatively small number of producers, ITC officials said they often succeed in getting coverage of 90 percent of the domestic industry. However, in certain circumstances, such as with agricultural products, which have a large number of small producers, the ITC surveys a sample of U.S. producers instead of the entire industry. In these situations, it is not uncommon for the share of production reflected in the ITC’s questionnaire responses to account for 10 percent or less of production. The following four factors additionally define the list of eligible producers: The questionnaires that the ITC sends to domestic producers during its investigations have asked respondents to indicate their position on the petition only since 1985. For cases prior to 1985, only petitioners and producers who indicated support of the petition by letter in the ITC’s public reports or documents have been considered “affected domestic producers.” The ITC considers the most recent answer a company provides as the one that determines eligibility. In its investigations, the ITC sends out both preliminary and final surveys in which producers are asked about support for petitions. Presently, producers have the option of checking one of three boxes: (1) support, (2) take no position, and (3) oppose. According to ITC officials, because the statute requires support, only those firms that check the “support” box are considered eligible. Moreover, the ITC’s practice has been to look to the most recent clear expression of a company’s position on the petition to determine its CDSOA eligibility. For example, if a company’s response was “support” on the preliminary survey but “take no position” on the final survey, the ITC interprets “take no position” as non-support and considers the company ineligible for CDSOA disbursements. The ITC limits its list of potentially eligible producers to those who indicate their support can be made public. The ITC is required by statute to keep company information, including positions on petitions, confidential, unless the company waives its confidentiality rights. CDSOA requires CBP to publish the list of potentially eligible producers; as a result, the list the ITC provides CBP includes only companies that have affirmatively indicated willingness (in the original investigation or after) to have their support made public.
Because of its interpretation of CDSOA’s phrase “support of the petition,” the ITC considers only evidence of support provided during its initial investigation to satisfy CDSOA requirements. Once an investigation is over, a producer that has not communicated its support to the ITC cannot later become eligible for CDSOA disbursements, even if it supports the continuation of an existing order at the time of the 5-year “sunset review.” Several companies have brought legal action challenging agency decisions that rendered them ineligible to receive disbursements, but none of these challenges has been successful. The following examples illustrate challenges to agency decisions: A case was brought by candle companies to compel the payment of CDSOA distributions to them. The companies were not on the ITC’s list of potentially eligible producers and did not file timely certifications with CBP. The companies asserted that the ITC had violated CDSOA by failing to include them on the list of affected domestic producers and that this omission excused their failure to timely file their certifications. A federal appellate court held that the ITC properly excluded the two producers from the list of affected domestic producers because the producers provided support for the AD petition in a response to a confidential questionnaire and failed to waive confidentiality. The court also held that when the ITC properly excludes a producer from the list, the producer still must file a timely certification with CBP to obtain retroactive consideration for CDSOA distributions. As a result, the court found that the firms were not entitled to CDSOA disbursements for the years in question. Another set of candle companies, which had opposed the relevant petition and subsequently acquired companies in support of the same petition, brought a case seeking to obtain CDSOA disbursements on behalf of the acquired companies. An appellate court held that CDSOA bars claims made on behalf of otherwise affected domestic producers who were acquired by a company that opposed the investigation or were acquired by a business related to a company that opposed the investigation. The court also found that the acquired companies are barred from claiming disbursements for themselves. A seafood producer brought a case seeking an evidentiary hearing and/or inclusion of affidavits in the agency record where the producer was excluded from the list of affected domestic producers because the ITC had no record of the producer’s support for the petition. The producer claimed that it had mailed a questionnaire response indicating support to the ITC on time and wanted to have its affidavits in support of the contention included in the agency’s records. The U.S. Court of International Trade held that because the producer failed to allege the proper reasons for amending the agency record, affidavits concerning the timely mailing of a questionnaire could not be added to the agency record and considered when reviewing the producer’s eligibility for a CDSOA distribution. Two other legal challenges are still pending and involve claims that CDSOA violates the First Amendment of the U.S. Constitution (“free speech”) by conditioning the distribution of benefits on a company’s expression of support for an AD/CV relief petition. The second key CDSOA feature provides for CDSOA funding and a pro rata mechanism for allocating funds among the companies that claim disbursements based on a broad definition of qualifying expenditures.
Partly as a result of the incentive this creates, company claims approached $2 trillion in fiscal year 2004. Each fiscal year’s duty assessments on all AD/CV duty orders that were in effect for that year fund annual CDSOA disbursements. Each fiscal year, CBP creates a special account that acts as an umbrella over multiple holding accounts used to track collections by specific active AD/CV duty orders and deposits collected duties under an order into its respective account. Within these accounts, CBP indicates that the dollar amounts attributable to each specific case are clearly identifiable. For example, a total of 351 AD/CV duty orders were in effect as of April 15, 2005, covering 124 products from 50 countries. In other words, as of that date, CBP intended to allocate CDSOA disbursements not from “one CDSOA pie” but from “351 CDSOA pies.” Each of these accounts constitutes a separate fund from which CBP makes annual distributions. After the fiscal year closes, CBP distributes the duties collected and interest earned under a given order that year to the affected eligible producers filing timely claims related to the specific order. The agency cannot distribute funds collected from one order to producers that were petitioners under other orders. For example, funds collected from the order on pineapples from Thailand cannot be used to pay producers covered by the frozen fish from Vietnam order. As a result, in fiscal year 2004, the one U.S. producer of pineapples received all the money collected under that order, but CBP did not make CDSOA disbursements to U.S. producers of frozen fish because the agency had not collected any funds under that order. CDSOA’s definition of expenses companies can claim is very broad. The law defines ten categories of qualifying expenditures, such as health benefits and working capital expenses, incurred during the production of the product under the order. According to CBP officials we spoke with, this broad definition means companies can include a wide range of expenses in their certifications. Moreover, CDSOA allows companies to claim any expenses incurred since an order was issued, a period that may span as far back as the early 1970s for some orders. Indeed, 68 of the 351 orders in effect have been in place for 15 years or more. Companies can also make claims under multiple AD/CV orders. For example, in fiscal year 2004, one of the top recipient companies filed claims for different products under 89 AD/CV orders. Finally, the law allows companies to submit claims for qualified expenditures that have not been reimbursed in previous fiscal years. However, CBP implementing regulations require that producers relate claimed expenditures to the production of the product that is covered by the scope of the order or finding. CDSOA uses a pro rata formula to allocate disbursements under a given order among the eligible companies filing claims, with percentages determined according to the claims of qualifying expenditures submitted. If the amount collected under an order is insufficient for all claims to be paid in full, as is often the case, each company receives its pro rata share of the amount collected. This pro rata formula creates an incentive for producers to claim as many expenses as possible relative to other producers so that their share of the funds available under an order is as large as possible. 
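The arithmetic of the pro rata formula can be illustrated with a short sketch. The company names and dollar amounts below are hypothetical, and the sketch deliberately ignores real-world details such as the timing of collections, interest, and claim acceptance; it shows only the allocation rule itself, in which claims exceeding the funds collected are paid proportionally rather than in full.

```python
# Illustrative sketch of CDSOA's pro rata allocation under a single
# AD/CV duty order; companies and amounts are hypothetical.
funds_available = 10_000_000.00  # duties plus interest collected under the order

# Certified qualifying expenditures claimed by eligible producers.
claims = {
    "Producer A": 40_000_000,
    "Producer B": 25_000_000,
    "Producer C": 15_000_000,
}

total_claims = sum(claims.values())
for company, claimed in claims.items():
    # When claims exceed the funds collected, each company receives its
    # pro rata share of the pool, not its full claim.
    share = claimed / total_claims
    payment = share * funds_available
    print(f"{company}: {share:.1%} of claims -> ${payment:,.2f}")
```

Because a company’s payment depends on its claimed expenditures relative to everyone else’s, inflating claims raises a company’s share of a fixed pool, which is the incentive described above and reflected in the growth of claims that CBP officials cite next.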
CBP officials cited the increase in claims—from $1.2 trillion in fiscal year 2001 to just under $2 trillion in fiscal year 2004—as an indication of this incentive. The third key feature of CDSOA is that it sets a strict deadline by which CBP must distribute payments for a fiscal year. Most disbursement-related activities cannot begin until the fiscal year ends. As a result, CBP has a significant workload in October and November and cannot perform all the desired quality controls prior to disbursement. CDSOA gives CBP a flexible time frame for processing claims, and CBP has used its discretion to give itself more time. Specifically, the law directs CBP to publish a Federal Register notice of its intent to distribute payments, and the list of affected domestic producers potentially eligible to receive payments under each order, at least 30 days before distributions are made. However, CBP has scheduled the publication, which is the first step in processing claims, at least 90 days before the end of the fiscal year for which distributions are being made. For the fiscal year 2004 disbursements, CBP actually published the notice on June 2, 2004—about 120 days before the end of the fiscal year. CBP requires producer claims/certifications to be submitted within 60 days after this notice is published. The fiscal year 2004 deadline for submitting claims was August 2, 2004. This gave CBP the months of August and September to examine certifications, seek additional information from the producers, send acceptance or rejection letters to producers, and finalize a list of recipients. The law is not flexible in the time frame allowed for processing disbursements for a given fiscal year, specifying that payments must be made within 60 days after the first day of the following fiscal year. Because of the need to calculate funds available based on a completed fiscal year, CBP cannot commence these calculations until the following fiscal year. This tight time frame means that during October and November, CBP must perform the bulk of the tasks associated with calculating the funds available for disbursement under each order and the funds that will be distributed to each recipient company under an order. In discussions with us, CBP officials said CDSOA’s 60-day time frame for disbursing payments was tight, posing the biggest risk associated with running the program. For instance, in fiscal year 2002, the program missed this deadline by about 2 weeks and, in the process, overpaid some producers. Efforts to collect these overpayments have yielded some results but are still continuing. An extension of 30 days in the disbursement deadline would give CBP additional time to undertake desired quality control measures before sending the instructions to Treasury and issuing payments. The present schedule does not allow sufficient time for quality control, forcing CBP to ask companies for repayment if errors are subsequently detected. CBP faces three key problems in implementing CDSOA. First, despite some recent improvements, CBP’s processing of CDSOA claims and disbursements is labor intensive, and the agency is facing a dramatic increase in its 2005 workload. Second, the agency does not systematically verify claims and thus cannot be sure it distributes funds appropriately. Third, CBP disbursed only about half the funds that should have been available in fiscal year 2004 because of ongoing problems collecting AD/CV duties.
Figure 1 depicts how the various units of CBP and Treasury interact when processing claims, verifying claims, and making payments. Following the consolidation of CBP’s CDSOA program within the Revenue Division at Indianapolis in 2004, the division is now fully responsible for processing claims and disbursements. The division issues payment instructions to Treasury’s Financial Management Service, which actually issues CDSOA disbursement checks to U.S. companies. CBP’s Regulatory Audit Division may selectively perform claims verifications upon request of the CDSOA program. In addition to these offices within CBP, the Office of Regulations and Rulings addresses legal matters, the Office of the Chief Counsel addresses litigation, the Office of Information Technology provides necessary reports, and the Office of Field Operations is responsible for liquidations. The CDSOA program’s efforts to process claims and disbursements are cumbersome and likely to become more challenging with impending workload increases. The processing of claims and disbursements requires intensive manual effort, in part because CBP does not require companies to file claims using a standardized form. Also, existing computer systems do not have the capabilities to produce the data needed to calculate amounts available for distribution. CBP’s guidance for filing claims is not sufficiently specific and causes confusion, requiring extra effort by CBP staff to answer questions from companies. CBP officials are concerned that, despite recent staffing increases, the number and experience level of staff may not be sufficient to handle the dramatic workload increase in fiscal year 2005. Despite being aware of these problems, CBP’s CDSOA program lacks plans for improving its processes, staff, and technology. CDSOA claims processing is cumbersome and labor intensive. Through fiscal year 2004, CBP received updates to the list of potentially eligible companies from the ITC only in hard copy. As a result, CBP had to manually update its electronic database of potentially eligible producers. During the course of our review, ITC officials took the initiative to provide the list to CBP in both hard copy and electronic format to facilitate CBP’s processing of this information. CBP officials noted that getting the file electronically was very helpful. However, because CBP still needed to perform considerable data re-entry to get the list into the format it preferred, ITC and CBP officials told us they are exploring whether to formalize and improve this file exchange in the future. Because CBP does not require companies to submit claims electronically using a standardized form, program staff scan all the documents received for electronic storage and subsequently archive all paper copies of the documents. CDSOA program staff must review each claim to ensure it contains the required information, contact claimants to clarify basic information, and send out letters concerning rejected claims. Staff must manually enter information from accepted claims into a “standalone” database and perform repeated checks to ensure that they followed the prescribed procedures and that their data entries are valid and accurate. The payments processing component is also labor intensive because existing computer systems do not have the capabilities to provide precise information on the amounts available for disbursement under each order or the amounts to be disbursed to each claimant.
CBP’s CDSOA program continues to face a risk in this area because its staff must manually perform the calculations, and any inaccurate calculations can result in over- or underpayments. Multiple data elements are required to determine the amounts available for disbursement, and these come from different computer systems. In some instances, the computer systems produce conflicting information, and program staff must manually reconcile these differences. While internal control procedures are in place to ensure the validity and accuracy of the calculations, the process is nonetheless subject to human error. Program officials told us that the new computer system being implemented agencywide will not have the financial component needed to perform this task for several more years. Claims processing is further complicated because the guidance about how to file CDSOA claims is very general and open to interpretation. As a result, CDSOA program staff field many phone calls from claimants regarding their claims, including clarification questions on how to file claims. Respondents to GAO’s questions generally praised CBP for its handling of these calls. However, a recent CBP verification of a company’s claims raised various claims-related questions. For example, CDSOA provides that companies can receive disbursements for qualifying expenditures not previously reimbursed, but officials involved in the verification said it was not clear whether companies must subtract all past disbursements when making claims under multiple orders, or only those disbursements related to a particular order. Also, one CDSOA recipient company reported that, because of uncertainty about whether cumulative expenses could be claimed, it claimed only 1 year’s expenses. As a result, it received a much smaller share of disbursements than it otherwise could have. Although the number of staff assigned to process claims and payments has grown, program officials noted that this increase may not be sufficient to handle the dramatic workload increase expected in fiscal year 2005. Specifically, the number of eligible claimants has grown by 500 percent between fiscal years 2004 and 2005, and the number of claims might increase more than 10-fold, from 1,960 to over 29,000. This growth is largely due to AD duty orders on certain warm-water shrimp or prawns recently coming into effect. Table 1 shows the number of program staff for fiscal years 2003-2005 and the program’s responsibilities and workload during those years. Program officials are concerned about fiscal year 2005 processing activities because only about half of the staff have processed claims and payments before. The rest are new and not experienced with the procedures. Moreover, if the workload becomes unmanageable, CBP may be unable to quickly bring new staff on board and up to speed. This is because new employees must undergo a 4- to 6-month background check, and initial training of entry-level college graduates takes 3 to 4 months. New staff attain full proficiency only after they complete a full annual cycle of processing claims and payments. Despite these challenges, the CDSOA program does not have formal plans for improving its processes, technology, and staff. In our efforts to help improve the federal government’s performance, we regularly emphasize that processes, staff, and technology are vital to agency performance and that planning is central to managing and improving these three organizational components.
For instance, our work on human capital issues throughout the government has revealed the importance of having a human capital plan in place to address problems, such as those faced by the CDSOA program, and to ensure that staff with the right skills and abilities are available continuously and can meet changing organizational needs. Claims verification poses another implementation problem for CBP. Companies are not held accountable for the claims they file because CBP does not require them to provide any supporting documentation for their claims and does not systematically verify company claims. The only comprehensive verification conducted to date found significant issues. Although CBP has put in place procedures for verifying CDSOA claims, it does not plan to implement them on a systematic or routine basis. Program officials told us they basically accept the information in company claims and rely on complaints from competitors to initiate verifications. In reviewing certain claims and CBP’s procedures, we found that claims are generally not questioned even though top CDSOA recipient companies have claimed over $2 trillion since fiscal year 2001 (see app. II). CBP normally does not take steps to determine that companies are still in business and producing the item covered by the order under which they are making a claim. Neither CDSOA nor CBP requires companies to explain their claims, provide supporting documentation about their claims, or follow a format when listing their qualifying expenditures. For example, in reviewing the 2004 claims filed by top CDSOA recipients, we found that most companies did not provide any details about their claimed expenditures. Indeed, one company listed all of its claimed expenditures under the category of raw materials. CDSOA and CBP also do not require that companies have their claims reviewed by a certified public accountant or a party outside of the company. CBP has verified the claims of only a handful of claimants. One of these verifications was comprehensive and revealed significant problems. In the first 3 years of the CDSOA program, staff in CBP’s Office of Regulations and Rulings conducted four 1-day site visit verifications that revealed no substantive issues. Subsequently, CBP’s Regulatory Audit Division decided to conduct a fifth verification using the detailed verification procedures the division developed in mid-2004. This verification, which took about a year and was completed in June 2005, revealed significant problems, including substantial overstatement of claimed expenses. According to CBP, the primary cause of the CDSOA expenditure overstatement was the company’s failure to maintain an internal control system to prepare and support its CDSOA claims. This prevented the company from identifying non-qualifying products and the costs associated with them. As a result, the company included expenditures incurred in the production of products not covered by the scope of the AD/CV orders. The company acknowledged that it had wrongly claimed expenditures and subsequently took corrective action. CBP does not plan to change its present reactive approach or to systematically target more companies for verifications. Although the law does not require verification of claims, CBP has recognized over time the need for verifications but has always stopped short of implementing a systematic verification plan.
In the third year of CDSOA implementation, a CBP working group under the direction of the Deputy Commissioner’s office developed a statement of work to, among other things, verify claims according to a risk-based plan. However, CBP does not have any evidence that this plan was ever developed or implemented. Despite having new claim verification procedures in place and having performed an in-depth verification as a prototype review to determine the extent of work involved, Regulatory Audit Division officials told us they do not plan to verify claims systematically or on a routine basis. Instead, CBP will continue to rely on complaints from competitors to select companies for verification. According to CBP officials, this approach is logical because the pro rata formula for allocating disbursements among firms creates an incentive for companies to police their competitors. Although CBP has an agencywide risk-based plan for targeting companies for audits, this plan does not target the CDSOA program’s recipients because the agency does not consider the program a high risk to revenue or a high priority for policy reasons. CBP’s current position is at odds with the Treasury IG’s position and with our work on financial management, which highlights the importance of verifying claims. In its audit of the CDSOA program, Treasury’s IG emphasized the need for more robust claim verification. In the report, the IG questioned why CBP was not reviewing CDSOA claims on an annual basis, particularly the expenditures claimed. The IG went on to note that certifications are legally subject to verification and that such verifications would serve as a deterrent against the submission of deceptive claims. Moreover, it emphasized that untimely verifications could result in the loss of revenue for other deserving companies if deception was later discovered. Our overall work on claims and disbursements throughout the government shows that the systematic verification of claims before they are processed (or after they are paid) is key to ensuring the validity of transactions and to avoiding disbursement problems such as improper payments. This work also reveals the importance of internal controls, such as verification, to ensure that only valid transactions are initiated in accordance with management decisions and directives. Collecting AD/CV duties has been another problem for CBP, compromising the effectiveness of AD/CV trade remedies generally and limiting funding available for distribution under CDSOA. CBP reported that the problem has grown dramatically in the last couple of years. For example, it distributed about half of the money that should have been available under CDSOA in fiscal year 2004. CBP’s efforts to date to address the causes of its collections problems have not been successful, leading CBP to pledge further steps in a July 2005 report to Congress. CBP’s collections problems have been evident since mid-2003 and have two distinct components. Specifically, the 2003 report on CDSOA by Treasury’s IG highlighted CBP’s collections problems, raising particular concerns about the following two AD/CV collection issues: Unliquidated entries make the eventual collection of duties owed less certain. Liquidation is the final determination of duties owed on an import entry. Liquidation of import entries subject to AD/CV duties only occurs after Commerce issues a final order, determines final dumping margins or final net countervailable subsidies (i.e.
duty), and issues liquidation instructions to CBP. Upon receipt of liquidation instructions, CBP calculates and seeks to collect the actual duties owed. In some cases, such as softwood lumber, liquidation is being suspended due to ongoing litigation. While neither Commerce nor CBP can hasten collection of duties tied up in litigation, Treasury’s IG report found that, in some cases, CBP was not collecting duties because Commerce had failed to issue proper liquidation instructions to CBP. In other cases, the report said, CBP had overlooked Commerce liquidation instructions. The report said clearing up the liquidation backlog should be given a high priority given the substantial dollars involved—about $2 billion in 2003. Clearing the backlog is also urgent because discrepancies between unliquidated duties and final duties often mean that CBP must attempt to collect additional sums from producers that did not expect to pay more, or that went out of business. Open (unpaid or uncollected) duty bills are liquidated entries for which final bills have been issued but not paid. The Treasury’s IG report expressed concern that CBP had not collected $97 million in duties owed and said that the agency might not be able to recover some of these funds. Treasury’s IG said its discussion with CBP personnel suggested recovery could be difficult because (1) port personnel are accepting bonds that are not sufficient to cover the duties owed plus interest when the entry is liquidated, and (2) the length of time between entry and liquidation is often several years, and in that time, some importers go out of business, leaving CBP with no way to go back for collection of additional duties. In response, CBP and Commerce took steps to identify and address the causes of CBP’s collections problems. CBP attributes the uncollected duties problem largely to “new shippers” with little import history, a problem that is particularly prevalent in the agriculture and aquaculture industries. According to CBP, one of these new shippers accounted for $130 million in uncollected duties in fiscal year 2004. To address this problem, in 2004, Commerce changed its new shipper review process and listed several steps it has taken to strengthen it. These included making the bondholder liable for duties owed on each import entry and formalizing a checklist to ensure the legitimacy of new shippers and their sales. Subsequently in 2004, CBP announced an amended directive to help ensure that duties on agriculture and aquaculture imports were collected properly; the directive applied a new formula for bonds on these imports, effectively increasing these bonds by setting them at higher rates. Nevertheless, since the problem and its basic causes became known in 2003, the size of CBP’s collections problem has more than doubled. As figure 2 shows, according to CBP data, $4.2 billion in AD/CV duties remained unliquidated and $260 million in AD/CV duties were unpaid at the end of fiscal year 2004. According to CBP, a large amount of the unliquidated entries involves duties on softwood lumber from Canada (about $3.7 billion). In February 2005, CBP reported to Congress that it had developed a plan to isolate suspended entries that were beyond the normal time frames of an AD/CV case and then worked with Commerce to obtain liquidation instructions, reducing the inventory of one million suspended entries by 80,000.
However, many unliquidated entries remain, and some of them are still due to problems within CBP’s and Commerce’s control. CBP estimates that over 90 percent of all unliquidated AD/CV entries are awaiting Commerce instructions for liquidation. Regarding unpaid duties, a large percentage pertains to imports from China. Specifically, nearly two-thirds of these unpaid duties (about $170 million) relate to an AD order on crawfish tail meat from China. The second largest amount (about $25 million) relates to an AD order on fresh garlic from China. CBP’s continued collections problems have led to calls for more drastic measures. Several industry groups, including representatives of the garlic, honey, mushroom, and crawfish industries, have advocated eliminating the new shipper bonding rules in favor of cash deposits on entries for new AD orders. Most crawfish and some steel recipients responding to our questionnaire also raised concerns about CBP’s collection efforts and the quality of its communication about ongoing problems. As a result, CBP is pursuing additional measures. In a February 2005 report to Congress, CBP said it is working with Treasury to address financial risks associated with bondholders’ insolvency and is monitoring agriculture/aquaculture importers’ compliance with its new bonding requirements on a weekly basis. In its July 2005 report to Congress, CBP highlights that it has begun working with other U.S. agencies to develop legislative proposals and other solutions to better address AD/CV duty collection problems. CBP notes that it plans to forward the results of this interagency effort to Congress by December 2005. Meanwhile, Congress is considering legislation that would change new shipper privileges. Most CDSOA payments went to a small number of U.S. producers and industries, with mixed effects reported. Top recipient companies reported that the payments had positive overall effects, although their assessments of the extent of the benefits varied. Leading recipient companies within the seven industries we examined also reported varying positive effects. In four of these industries—bearings, candles, crawfish, and pasta—recipients we contacted reported benefits, but some non-recipients said that CDSOA payments were having adverse effects on their ability to compete in the U.S. market. Although some have argued that CDSOA has caused increases in the number of AD/CV petitions filed and in the scope and duration of AD/CV duty orders, the evidence to date is inconclusive. From fiscal year 2001 to fiscal year 2004, CBP distributed approximately $1 billion in CDSOA payments to 770 companies in a broad range of industries. These payments have been highly concentrated in a few companies. Figure 3 shows the share of payments going to the top five companies and the share received by the remaining CDSOA recipients. One company, Timken, a bearings producer, received about 20 percent of total distributions, approximately $205 million, during fiscal years 2001-2004. Five companies, including Timken, received nearly half of the total payments, or about $486 million. Figure 4 shows the distribution of payments to the top 39 recipient companies, which have received 80 percent of total CDSOA disbursements. These top recipient companies included several producers of steel, candles, and pasta. They also included producers of cement, chemicals, cookware, pencils, pineapples, and textiles.
For most of the top recipient companies responding to our questionnaire, the ratio of CDSOA payments to sales was less than 3 percent. Across all respondents, the ratio ranged from less than 1 percent to over 30 percent; it was generally smallest for steel companies and largest for candle companies. When CDSOA distributions are analyzed by industry, or product group, the payments are similarly concentrated among only a few industries or product groups. For example, approximately two-thirds of total CDSOA distributions went to three product groups—bearings, candles, and iron and steel mills—which received approximately 40 percent, 14 percent, and 12 percent, respectively. Also, 95 percent of all payments went to 24 of the 77 product groups. Figure 5 shows the leading industries or product groups that received CDSOA distributions. As detailed in appendix II, the 24 companies that responded to our survey of top CDSOA recipients indicated that the CDSOA disbursements had positive effects, but the extent of the benefit varied from slight to substantial. We asked these companies to assess CDSOA's effects at both the industry and company level on a number of dimensions, including prices, investment, employment, and ability to compete. The top recipients reported that CDSOA had the most positive impact in areas such as net income and employment. For example, one company commented that CDSOA payments have allowed for substantial investments in its factory and workers, providing, among other things, supplemental health care benefits. Another company reported that CDSOA payments have been helpful in justifying continued investment during periods when prices are depressed due to dumping or subsidization. The top recipients reported that CDSOA had less of an effect in other areas, such as prices and market share. For example, one company commented that disbursements have had little or no effect on prices for its CDSOA product because such prices are ultimately determined by market forces. As detailed in appendix III, in our examination of seven industries that received CDSOA payments—bearings, steel, candles, pasta, dynamic random access memory (DRAM) semiconductors, crawfish, and softwood lumber—leading recipients we contacted generally reported benefits to varying degrees, and the non-recipients we contacted either complained about being disadvantaged or did not report effects. In four industries—bearings, candles, crawfish, and pasta—recipients generally reported benefits, but some non-recipients complained that the disbursements were having negative effects on them. These industries all involve cases that predate CDSOA. In general, the non-recipients that complained of negative effects are ineligible for disbursements, and several complained about their ineligibility. Bearings. The leading domestic producer of bearings is eligible for CDSOA disbursements, but its major competitors, several large foreign-owned companies with longstanding production in the United States, are ineligible. Three bearings recipient companies commented that CDSOA has had positive effects, although they varied in their assessments of the extent of the benefit. One company stated that the disbursements helped it to replace equipment and enabled it to recover the position it had held prior to being injured by dumping.
Another recipient commented that, while the CDSOA disbursements were helpful, they were distributed several years after the initial injury and did not fully compensate the company for lost profits due to unfair trade. Two non-recipients provided views. One non-recipient commented that CDSOA harms global bearings companies because the antidumping duties they pay are transferred directly to a competitor. It further commented that not only is it forced to subsidize competitors through CDSOA, but the money it is paying in duties limits its ability to invest in and expand its U.S. operations. The other said it is too early to know what injurious effect CDSOA disbursements would have on non-recipients. Steel. In this industry, the largest U.S. producers are CDSOA recipients. Recipient companies reported that payments—though small relative to company size and the challenges they face in their capital-intensive industry—had positive effects. Steel accounts for the single largest industry share of outstanding dumping orders, and most major U.S. producers receive CDSOA payments under numerous AD/CV orders on different products. Steel recipients we contacted varied in their assessments of CDSOA's effects but generally agreed that the program benefited them by providing greater opportunities for making needed capital investments in their plant and equipment. Steel recipients also commented, though, that CDSOA has not been a complete solution to the serious problems they faced. When the Asian financial crisis spawned rising imports, falling steel prices, and firm consolidations, the receipt of CDSOA disbursements did not prevent several steel producers from joining numerous others in the industry in filing for bankruptcy. Candles. Ten of the estimated 400 U.S. candle companies are eligible for and receive CDSOA disbursements. A number of recipients contended that distributions have helped keep them in business, enabling them to develop newer, better, and safer candles through investment in equipment and research and development. One recipient stated that it has been able to offer employees more consistent and comprehensive benefits packages due to CDSOA. Several large candle producers that are comparable in size to leading recipients complained that although they favor the order, they are ineligible to receive CDSOA disbursements. Some non-recipients argue that recipients have an unfair advantage because they can keep prices lower than would otherwise be possible. For instance, a major non-recipient company has closed two of its four domestic manufacturing facilities and has reduced shifts at others. A smaller non-recipient company contended that when it matched its competitors' lower prices, it was not able to make a profit. As a result, the company stated, it was forced to exit this segment of the candle business and release some workers. Crawfish. About 30 small, family-owned crawfish processors have received CDSOA disbursements. Recipients said CDSOA payments provided the industry with its first effective relief against dumped imports in several years and enabled them to buy and process more crawfish, make long-needed repairs and investments, hire more employees, and pay off debts. In June 2003, the ITC reported that CDSOA disbursements to some domestic producers had converted an industrywide net loss into net income.
The 16 crawfish tail meat processors we spoke with that received CDSOA distributions generally believe that the program has had positive effects on the industry and their companies, keeping businesses open and employees working. Non-recipients we spoke with in this industry said that CDSOA had helped recipient companies but had put non-recipients at a competitive disadvantage. These companies want to be eligible for CDSOA disbursements, and several reported they had contacted certain government and congressional sources to try to address their eligibility status but were told they did not meet the law's eligibility requirements regarding the expression of support during the investigation. As discussed previously, two of these companies brought legal action to challenge agency decisions on their eligibility status. Because they also have to compete against cheap Chinese imports, these non-recipients viewed the application of the law as unfair. In addition, several said they were not able to compete with recipient companies that offer processed tail meat at prices below their cost of production and appear able to do so because the recipients' CDSOA disbursements would compensate them for any losses. In such conditions, some non-recipients said they cannot operate profitably, and some decided to stop processing tail meat. Pasta. Three of the four leading U.S. pasta makers received CDSOA disbursements, but the fourth producer is ineligible. The top two CDSOA recipients in this industry did not respond to our questions, and one of them has filed for bankruptcy. The four CDSOA recipients that responded said they had used the funds to increase or upgrade equipment, invest in research and product development, defray manufacturing costs, and expand production capacity. Nevertheless, CDSOA payments, while not insignificant, were not large relative to sales or large enough to offset other problems that the industry faces, such as decreased demand for pasta due to low-carbohydrate diets and low margins. Most non-recipients we contacted said CDSOA had no effect, but a few said that the funds had created an uneven playing field and decreased their ability to compete in the marketplace. Several of these companies tried to file for CDSOA funds but were found ineligible. The large non-recipient company said the money it pays in duties, which is transferred to its competitors, could have been used for product development, capital investment, and expansion of its new U.S. operations. DRAMs. All four major DRAM producers in the United States currently have production facilities both domestically and abroad; however, three of these companies are U.S. subsidiaries of foreign producers and have entered the market within the last decade. A CV order is in effect for DRAMs produced by one Korean company only, but the bulk of the distributions were made under an AD order on DRAMs of one megabit and above from Korea, issued in 1993 and revoked in 2000, as well as under an AD order on SRAMs (static random access memory chips), issued in 1998 and revoked in 2002. A leading CDSOA recipient was the sole recipient of duties under these revoked orders. Fabrication facilities are costly and require complete replacement every few years. The DRAM industry is cyclical in nature and subject to "booms and busts," with demand driven by investments in computers and other end products. Both CDSOA recipients reported some net losses.
One company reported benefits from receiving payments and the other reported fewer effects; both companies' payments were small relative to their net sales. Softwood Lumber. Both CDSOA recipients and non-recipients include leading softwood lumber producers. Recipients and non-recipients that we contacted indicated that disbursements to date have been too small to have a discernible effect. However, non-recipients expressed concern about potential adverse effects in the future, should the $3.7 billion in AD/CV duties being held on deposit pending liquidation ever be distributed. These duties are presently in escrow pending the outcome of litigation brought by Canadian interests against the U.S. duties. Current evidence does not clearly demonstrate that CDSOA is linked to an increasing number of AD/CV petition filings. Critics have raised concerns that, by awarding a portion of the tariff revenue that results from successful petitions, CDSOA could lead to more AD/CV petition filings and thereby more restrictions on imports, to the detriment of the U.S. economy. However, the evidence we analyzed was inconclusive. Because CDSOA provides direct financial benefits to firms that participate in or support AD/CV petitions by awarding them a proportion of the tariff revenue, some analysts have warned that CDSOA could lead to more petitions and to more companies supporting the filings, because only companies that supported the petition receive disbursements. A report by the Congressional Budget Office (CBO) supports this view, arguing on economic incentive grounds that CDSOA encourages more firms to file or support petitions and discourages settling cases. CBO also argues that firms may resume production or increase their output due to CDSOA, which would result in inefficient use of resources and would be harmful to the U.S. economy and consumers. Our examination of the actual number of filings shows no clear trend of increased AD/CV petition filings since CDSOA. Figure 6 shows that since the passage of CDSOA in 2000, the number of petitions spiked in 2001 and then declined sharply over the next three years. Moreover, this fits the historical pattern of AD/CV petition filings, which likewise shows no clear upward trend. The number of AD/CV petitions filed each year has fluctuated widely, ranging from a maximum of 120 in 1985 to a minimum of 16 in 1995. Economists have found evidence that the number of antidumping filings is closely linked to macroeconomic conditions and real exchange rates. Our analysis of company responses to our case study questions similarly reveals mixed evidence but no trend. In general, companies told us CDSOA had little impact on their decisions whether to file AD/CV relief petitions. Most companies that responded to our questions said that filing and winning new cases was too expensive, and the receipt of CDSOA payments too speculative, for CDSOA to be a major factor in their filing decisions. For example, producers accounting for a sizable share of U.S. softwood lumber production freely chose not to support the softwood lumber case, despite being aware of the prospect of sizable CDSOA disbursements. However, bearings companies that had not supported earlier cases subsequently supported a later case on China brought after CDSOA's passage.
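Appendix I describes the regression analysis behind this filings finding. As a minimal sketch of such a count model (not GAO's actual specification; the file and variable names are placeholders), one could regress annual petition counts on macroeconomic conditions, real exchange rates, and a post-CDSOA indicator:

```python
# A minimal sketch of the count regression described in the text; this is
# not GAO's actual model, and the file and column names are placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("petition_filings.csv")  # hypothetical file: one row per year
# Assumed columns: filings (annual AD/CV petition count), gdp_growth,
# real_exchange_rate, and post_cdsoa (1 for years after 2000, else 0).
X = sm.add_constant(df[["gdp_growth", "real_exchange_rate", "post_cdsoa"]])
results = sm.Poisson(df["filings"], X).fit()
print(results.summary())
# A statistically insignificant post_cdsoa coefficient would be consistent
# with the report's finding of no clear CDSOA-driven increase in filings.
```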
In addition to the number of filings, our interviews and responses from companies in the seven industries we examined revealed a few allegations that CDSOA resulted in orders that cover imports of more products for longer periods—that is, through wider-than-necessary product scopes of AD/CV duty orders and longer-than-warranted retention of existing orders. However, other examples contradicted these allegations, and we could not independently verify them. One steel user, for example, complained that CDSOA disbursements were a factor in the denial of its request for narrowing the scope of an order and claimed the result has been to put certain U.S. fastener makers at a disadvantage. In contrast, one steel company noted that the domestic industry has no incentive to overly broaden the scope of an AD/CV relief petition because doing so could undermine its ability to prove injury and to obtain an order in the first place. Bearings recipient companies similarly responded that CDSOA has not affected the scope or duration of AD/CV duty orders and said regular "sunset" reviews should ensure that the government terminates unwarranted orders. Bearings non-recipients, on the other hand, drew a connection between the main CDSOA beneficiary within the industry and its support for continuance of orders. In the candle industry, companies universally reported that they are united in supporting retention of the existing order but divided over efforts by some candle firms to expand its scope. After finding the CDSOA inconsistent with WTO agreements and after the United States' failure to bring the act into compliance with those agreements, in 2004 the WTO gave 8 of the 11 members that complained about CDSOA authorization to suspend concessions or other WTO obligations owed to the United States. Canada, the European Union (EU), Mexico, and Japan have consequently applied additional tariffs to U.S. exports, and others are authorized to follow. In 2003, the WTO found the CDSOA inconsistent with U.S. obligations under WTO agreements and asked the United States to bring the act into conformity with those agreements. Eleven members had brought complaints about the CDSOA to the WTO and prevailed in their claims that the CDSOA is inconsistent with WTO agreements. The WTO found that CDSOA was not consistent with U.S. WTO obligations because it was not among the specific actions against dumping and subsidization permitted under the applicable WTO agreements. Following the ruling, the United States indicated its intention to comply. The WTO gave the United States until December 27, 2003, to bring the CDSOA into conformity with the organization's pertinent agreements. However, all efforts to repeal the law have thus far been unsuccessful. Meanwhile, the United States is also pursuing negotiations at the WTO to address the right of WTO members to distribute AD/CV duties. The President proposed repealing CDSOA in his fiscal year 2004, 2005, and 2006 budget submissions. Senate Bill 1299 was introduced in 2003 to amend the CDSOA, and House Bill 3933 was introduced in 2004 to repeal it. Neither bill passed before the end of that Congress, and both expired. In a March 10, 2005, status report to the WTO, the United States reaffirmed its commitment to bringing the CDSOA into conformity with WTO agreements. The United States also reported that House Bill 1121 had been introduced on March 3, 2005, to repeal CDSOA and that it had been referred to the Committee on Ways and Means.
Also in 2005, Senator Grassley introduced Amendment 1680 to the Departments of Commerce and Justice, Science, and Related Agencies Appropriations bill to prohibit any further CDSOA distributions until the USTR determines that such distributions are not inconsistent with U.S. WTO obligations. However, as of the date of publication of this report, Congress had not passed House Bill 1121, and the Senate Committee on Appropriations had not adopted Amendment 1680. Since late 2001, the United States has been engaged in the Doha Round of WTO negotiations, which may include changes to the WTO agreements under which CDSOA was challenged. Following a congressional mandate to the USTR and Commerce that negotiations be conducted within the WTO to recognize the right of its members to distribute monies collected from antidumping and countervailing duties, the United States submitted a paper to the WTO Rules Negotiating Group stating that "the right of WTO Members to distribute monies collected from antidumping and countervailing duties" should be an issue for the negotiating group to discuss. USTR officials told us that, to date, the U.S. proposal has not attracted support from any other WTO member. In January 2004, 8 of the 11 complainants—Brazil, Canada, Chile, the EU, India, Japan, Korea, and Mexico—sought and secured authorization to retaliate against the United States. As a result of binding arbitration over the level of authorized retaliation, each of the eight members received authorization to impose, each year, additional import duties on U.S. exports covering a total value of trade of up to 72 percent of the CDSOA disbursements made for the preceding year from AD/CV duties on that member's products. The total suspension authorized for 2005 could be up to $134 million, based on the fiscal year 2004 CDSOA disbursements. Specifically, for fiscal year 2004 disbursements, the WTO arbitrators authorized the imposition of additional duties covering a total value of trade not exceeding $0.3 million for Brazil, $11.2 million for Canada, $0.6 million for Chile, $27.8 million for the EU, $1.4 million for India, $52.1 million for Japan, $20.0 million for Korea, and $20.9 million for Mexico. On May 1, 2005, Canada and the European Communities began imposing additional duties on various U.S. exports. In particular, Canada has imposed a 15 percent tariff on live swine, cigarettes, oysters, and certain specialty fish (including live ornamental fish and certain frozen fish), and the EU has imposed a 15 percent tariff on various paper products, various types of trousers and shorts, sweet corn, metal frames, and crane lorries. On August 18, 2005, Mexico began imposing additional duties on U.S. exports such as chewing gum, wines, and milk-based products. On September 1, 2005, Japan began imposing additional duties on U.S. exports such as steel products and bearings. The remaining four authorized members have said they might suspend concessions as well. The three members that did not request authorization to retaliate—Australia, Indonesia, and Thailand—have agreed to extend the deadline for requesting authorization indefinitely. As agreed, these countries will give the United States advance notice before seeking authorization to retaliate. In return, they retain the ability to request authorization to retaliate at any point in the future, and the United States agreed not to seek to block those requests. See figure 7 for a timeline of events related to the WTO decision on CDSOA.
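As a quick check on the figures above, the eight per-member authorizations sum to about $134 million, matching the total suspension authorized for 2005:

```python
# Arithmetic check using the per-member figures cited in the text
# (values in millions of dollars).
authorized = {
    "Brazil": 0.3, "Canada": 11.2, "Chile": 0.6, "EU": 27.8,
    "India": 1.4, "Japan": 52.1, "Korea": 20.0, "Mexico": 20.9,
}
print(f"${sum(authorized.values()):.1f} million")  # $134.3 million
# Each figure reflects the 72 percent formula applied to the prior year's
# CDSOA disbursements attributable to duties on that member's products.
```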
Congress’ stated purposes in enacting CDSOA were to strengthen the remedial nature of U.S. trade laws, restore conditions of fair trade, and assist domestic producers. Our review suggests that the implementation of CDSOA is achieving some objectives more effectively than others. One reason is that, as a result of some of the key features of CDSOA, the law in practice operates differently from trade remedies. For instance, while trade remedies such as AD/CV duties generally provide relief to all producers in a particular market, the eligibility requirements of CDSOA limit relief to only a subset of domestic producers—only those that petitioned for relief or that publicly supported the petition by sending a letter to the ITC or filling an ITC questionnaire while the agency was conducting its original investigation and remain in operation. Our analysis of CDSOA disbursement data and company views on the effects of CDSOA indicate that CDSOA has provided significant financial benefits to certain U.S. producers but little or no benefits to others. As a result, CDSOA has, in some cases, created advantages for those U.S. producers that are eligible and receive the bulk of disbursements over those U.S. producers that receive little relief or are ineligible, by choice or circumstance. Moreover, because the WTO found that CDSOA did not comply with WTO agreements, the EU, Canada, Mexico, and Japan recently retaliated against U.S. exports and this imposes costs on a number of U.S. companies exporting to those markets. In implementing CDSOA, CBP faces problems processing CDSOA claims and payments, verifying these claims, and collecting AD/CV duties. The CDSOA program’s time frame for processing payments is already too tight to perform desired quality controls. The dramatic growth in the program’s workload--an estimated 10-fold increase in the number of claims in fiscal year 2005 and the potential disbursement of billions of dollars from softwood lumber duties--heighten program risks. CBP’s labor-intensive process for claims could be streamlined through steps such as regularly obtaining from the ITC electronic updates of the list of potentially eligible companies and having companies file CDSOA claims using a standard form and submit them electronically. CBP’s recent comprehensive company claim verification effort also indicates that the agency needs additional guidance in place for filing claims. In addition, CBP lacks plans for managing and improving its CDSOA program’s processes, staff, and technology. For instance, it needs a human capital plan for enhancing its staff in the face of dramatic growth in workload processing for both CDSOA claims and payments. Accountability for the accuracy of the claims is virtually non-existent and CBP has no plans to verify claims systematically or on a routine basis. Finally, CDSOA has helped highlight CBP’s collection problems. Despite reports to Congress on its efforts to address these problems, CBP faced a doubling in the AD/CV collections shortfall in fiscal year 2004, to $260 million. This shortfall not only reduces the amount available for disbursement under CDSOA, but also undermines the effectiveness of the trade remedies generally. Given the results of our review, as Congress carries out its CDSOA oversight functions and considers related legislative proposals, it should consider whether CDSOA is achieving the goals of strengthening the remedial nature of U.S. trade laws, restoring conditions of fair trade, and assisting domestic producers. 
If Congress decides to retain and modify CDSOA, it should also consider extending CBP's 60-day deadline for completing the disbursement of CDSOA funds. Meeting this deadline has been a problem in the past and may be even more difficult in the future, given that the program is experiencing dramatic growth in its workload. For instance, extending the deadline for processing payments by another 30 days would give the program's staff additional time for processing payments and for pursuing additional internal control activities. To the extent that Congress chooses to continue implementing CDSOA, we recommend that the Secretary of Homeland Security direct the Commissioner of Customs and Border Protection to enhance the processing of CDSOA claims and payments, the verification of these claims, and the collection of AD/CV duties. Specifically, we recommend that: To improve the processing of CDSOA claims, CBP should implement labor-saving steps such as working with the ITC to formalize and standardize exchanges of electronic updates of the list of eligible producers and requiring that company claims follow a standard form and be submitted electronically. This would also reduce data entry-related errors. To further improve the processing of claims, CBP should provide additional guidance for preparing CDSOA certifications or claims. To enhance the processing of claims and payments in the face of a growing workload, CBP should develop and implement plans for managing and improving its CDSOA program's processes, staff, and technology. For instance, a human capital plan would help ensure that the CDSOA program has staff in place with the appropriate competencies, skills, and abilities. To enhance accountability for claims, CBP should implement a plan for systematically verifying CDSOA claims. This plan should aim to ensure that companies receiving CDSOA disbursements are accountable for the claims they make. CBP should also consider asking companies to justify their claims by providing additional information, such as an explanation of the basis for the claim, supporting financial information, and an independent assessment of the claim's validity and accuracy. To better address antidumping and countervailing duty collection problems, CBP should report to Congress on what factors have contributed to the collection problems, the status and impact of efforts to date to address these problems, and how CBP, in conjunction with other agencies, proposes to improve the collection of antidumping and countervailing duties. We provided a draft of this report to the U.S. International Trade Commission, Customs and Border Protection, and the Office of the U.S. Trade Representative. We obtained written comments from CBP (see app. IV). CBP concurred with our recommendations. We also received technical comments on this draft from our liaisons at CBP, the ITC, and USTR, which we have incorporated where appropriate. We are sending copies of this report to interested congressional committees, the U.S. International Trade Commission, Customs and Border Protection, and the Office of the U.S. Trade Representative. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix V. At the request of the Chairman of the House Subcommittee on Trade, Committee on Ways and Means, as well as several House Members, we examined the implementation and effects of the Continued Dumping and Subsidy Offset Act (CDSOA) of 2000. Specifically, we assessed (1) what key legal requirements guide and have affected agency implementation of CDSOA; (2) what problems, if any, U.S. agencies have faced in implementing CDSOA; and (3) which U.S. companies and industries have received payments under CDSOA and what effects these payments have had for recipient and non-recipient companies; and we described (4) the status of the World Trade Organization (WTO) decisions on CDSOA. To determine the key legal requirements that guide and have affected agency implementation of CDSOA, we obtained and reviewed the legislation and regulations establishing the requirements and procedures for the International Trade Commission (ITC) to determine company eligibility to receive CDSOA funds and for the Department of Homeland Security's Customs and Border Protection (CBP) to implement CDSOA. We discussed these requirements and their relationship to agency implementation with the officials at the ITC and CBP who carry out the agencies' respective roles. We also reviewed judicial opinions and other documents associated with certain legal cases brought to challenge key requirements of CDSOA, and we incorporated the viewpoints expressed by some companies that we contacted in addressing our third objective, which illustrated the impacts of certain requirements. To assess the problems, if any, U.S. agencies have faced in implementing CDSOA, we first determined the agency roles and responsibilities that CDSOA established. We then obtained and analyzed ITC and CBP documents outlining their procedures for carrying out their CDSOA responsibilities and discussed with agency officials the actions the agencies have taken to implement CDSOA. We reviewed evaluations of CDSOA operations at both the Department of the Treasury (Treasury) and the ITC conducted by those agencies' Inspectors General (IG). We also obtained from CBP a statement of work that had been developed for improving CBP's management of the CDSOA program. We discussed agency implementation of CDSOA with officials from the Departments of Commerce and Agriculture, as well as with certain industry representatives, affected companies, and law firms that handle CDSOA-related actions for their clients. We also reviewed GAO documents on human capital and disbursements for additional criteria to assess the agencies' implementation of CDSOA. Our work focused on certain problems at CBP: To assess CBP's claims and payments processing procedures, we conducted fieldwork at CBP's Revenue Division in Indianapolis, where we met with officials and staff of the CDSOA Team. After they gave us a comprehensive briefing on their CDSOA operations, we observed these operations, reviewed documentation of their procedures, and discussed challenges they face in implementing the law. We discussed changes in the CDSOA Team's workload over time with these officials and obtained data on their workload and staff resources. We discussed the team's procedures for counting and recording eligible and actual claimants and claims, which included information they obtain from the ITC on eligible producers and internal controls the team applies to ensure accuracy in receiving and processing claims.
We determined that their data were sufficiently reliable for the purpose of analyzing the changing relationship between the team's workload and staff resources. To assess CBP's approach to verifying claims, we discussed with CBP officials the approach and the extent of claim verification since the program's inception, and we reviewed the CBP procedures for verifying company claims that were developed in 2004. We also reviewed documentation of a comprehensive verification of one company's CDSOA claims that was conducted using these new procedures. Because this verification raised issues about the quality and consistency of CBP's guidance regarding claims submission, we examined the fiscal year 2004 claim files for 32 top CDSOA recipients to ascertain the prevalence of these issues and also obtained the viewpoints of certain CDSOA recipients on CBP's claims guidance. To describe CBP's efforts to collect the antidumping (AD) and countervailing (CV) duties that fund CDSOA, we obtained and reviewed data on CBP's annual CDSOA disbursements and AD/CV duty liquidations and collections. To assess the reliability of the data on unliquidated AD/CV duties, we compared them to the data used by Treasury's IG in its 2003 report and performed basic reasonableness checks. We determined the data were sufficiently reliable to support the finding that there had been a substantial increase in unliquidated AD/CV duties since 2002. We also reviewed CBP reports to Congress in 2004 and 2005 on AD/CV duty collections issues and problems, as well as the section of the 2003 Treasury IG report that addressed CBP's efforts related to liquidating and collecting AD/CV duties. Finally, we incorporated the viewpoints of certain companies and industry groups about the status of uncollected duties and CBP's efforts to collect them. To determine which U.S. companies and industries have received payments under CDSOA and what effects these payments have had for recipient and non-recipient companies, we obtained and analyzed CBP's annual disbursement data for fiscal years 2001 to 2004 and collected information from top CDSOA recipients and from recipients and non-recipients in seven industries. Specifically, we identified 770 companies that had received disbursements at some point during fiscal years 2001 through 2004 and combined the multiple disbursements that companies may have received to calculate the total amount of disbursements made to each company during this period. Some companies received disbursements under different names or were acquired by, merged with, or otherwise affiliated with other companies on the list during this period. We did not make adjustments to the number of companies, but rather retained the company distinctions in the data as CBP provided them. We then identified the 39 companies that had received the top 80 percent of the disbursements made during fiscal years 2001 through 2004, and we reported information about these disbursements. Using these data, we also identified the top 24 product groups that received 95 percent of disbursements during fiscal years 2001 through 2004, and we reported information about these disbursements.
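The aggregation just described reduces to a group-and-rank computation. The following sketch shows one way it could be done; the file and column names are placeholders, not CBP's actual data layout:

```python
# A sketch of the disbursement aggregation described above; the file and
# column names are placeholders, not CBP's actual data layout.
import pandas as pd

df = pd.read_csv("cdsoa_disbursements_fy2001_2004.csv")  # hypothetical file
# Combine the multiple disbursements each company received across
# fiscal years 2001 through 2004 into one total per company.
totals = (df.groupby("company")["disbursement"]
            .sum()
            .sort_values(ascending=False))
# Flag the top companies that together received 80 percent of all payments.
cumulative_share = totals.cumsum() / totals.sum()
top_companies = totals[cumulative_share <= 0.80]
print(f"{len(top_companies)} companies received 80 percent of disbursements")
```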
We assessed the reliability of CBP's CDSOA disbursements data, the related Harmonized Tariff Schedule data, and the Census Bureau's data matching the Harmonized Tariff Schedule to the North American Industry Classification System by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. To further determine the effects of CDSOA payments on recipients and non-recipients, we relied primarily on the views provided by top CDSOA recipient companies and by certain recipients and non-recipients in 7 of the top 24 industries (bearings, steel, candles, pasta, DRAMs, crawfish, and softwood lumber) to which CDSOA payments have been made. We selected these industries based on a range of criteria, including the leading recipients of CDSOA funds, industries with the most AD/CV duty orders, industries receiving press coverage related to CDSOA, and industries considered by certain experts to have unique or noteworthy situations. In selecting these industries, we also considered including different types of industries and industries with differing numbers of CDSOA recipients. We consulted with experts at the ITC, the Departments of Commerce and Agriculture, and relevant trade associations to help define the industries and identify leading non-recipient companies within them. In addition, we obtained industry background from ITC investigative reports and other official industry sources. To obtain these companies' views on CDSOA, we developed and sent a questionnaire to top CDSOA recipient companies and a set of structured questions to selected recipient and non-recipient companies in the seven case study industries. We developed and pretested the questionnaire between February and April 2005. Our structured questions were based on the items in our questionnaires. We sent surveys to 32 of the top 39 recipient companies we had identified. Twenty-four of these companies provided written responses to our questions. Their views are not necessarily representative of all CDSOA recipients. We selected non-probability samples of CDSOA recipients and non-recipients that are U.S. producers for each of our seven case study industries. We selected recipient companies based primarily on the amount of CDSOA funds they had received between fiscal years 2001 and 2004. However, in certain industries with small numbers of recipients, including bearings, DRAMs, and candles, we sent structured questions to all recipient companies. We selected non-recipient companies based on industry experts' views of the importance of the companies and on recipient companies' views of their major non-recipient competitors. We also considered available lists of companies by industry but found that these lists had limitations in terms of coverage and could not be used to draw probability samples. Overall, we selected 69 recipient and 82 non-recipient companies in the seven industries. In total, we received 61 written responses from recipient companies and 31 written responses from non-recipient companies. Appendix III provides details on how many companies we contacted and received information from for each industry. All recipient companies in the bearings, DRAMs, and candles industries provided responses, and these responses can be generalized.
For recipient companies in the other four industries, and for non-recipient companies in all the industries, the responses we received cannot be generalized because of the non-probability samples we used and/or the number of responses we received. Thus, in these cases, the views we report are not necessarily representative of their respective groups. However, we supplemented the information we received with telephone interviews to verify, and in some cases expand upon, the information that some companies provided. We also compared the overall responses in each industry with industry experts' views and the information contained in available studies, such as ITC reports, and found the information we gathered to be broadly consistent with these sources. Finally, within this objective, we also conducted an analysis of trends in the filing of AD/CV relief petitions. We collected data on the number, type, and status of AD/CV duty orders from the ITC and Commerce. We verified this information directly against the Federal Register notices, which are the official sources for AD and CV orders. We determined that the data were sufficiently reliable for the purposes of this report. In addition, we reviewed the literature on the determinants of AD petition filings. We applied regression analysis to study the effects of macroeconomic conditions, real exchange rates, and CDSOA itself on the number of petition filings. We also asked the companies we surveyed to discuss CDSOA's impact on AD/CV filings and interviewed industry representatives to gain an understanding of what affects their decisions to file or support AD/CV petitions and whether CDSOA was a significant factor in those decisions. To determine the status of the WTO decisions on CDSOA, we analyzed official U.S., foreign government, and WTO documents. We also interviewed officials from the Office of the U.S. Trade Representative and the Department of State. We conducted our work in Washington, D.C., and Indianapolis, Indiana, from September 2004 to September 2005 in accordance with generally accepted government auditing standards. This appendix provides information on the CDSOA payments received by the top recipient companies and the views of these companies on CDSOA's effects. Table 2 lists the top 39 companies that received 80 percent of the total CDSOA payments during fiscal years 2001-2004. This table also presents each company's percentage of the total payments and the cumulative percentages. Each company's industry is also listed. We sent surveys to the companies that received 80 percent of the CDSOA payments from 2001 through 2004, asking for their views on CDSOA's effects. We asked these companies to assess CDSOA's effects on a number of dimensions, including prices, employment, and ability to compete. We asked the companies to rate CDSOA's effect on each dimension on a scale ranging from 1 (very positive) to 5 (very negative). The top recipients reported that CDSOA had the most positive impact in the areas of net income and employment. In its written comments, one company stated that CDSOA payments have allowed it to make substantial investments in its plant and its workers, including providing supplemental health care benefits. The top recipients reported that CDSOA had less of an effect in areas such as prices, net sales, and market share.
Several companies commented, for example, that disbursements have had little or no effect on prices for their CDSOA products, since such prices are ultimately determined by market forces. The ratio of CDSOA payments to company net sales ranged from less than 1 percent to over 30 percent; however, this ratio was less than 3 percent for all but five companies. In table 3 we present summary information on these companies' responses. Table 4 shows that 17 of the 24 companies reported that CDSOA had increased their ability to compete in the U.S. market. This appendix provides information on the CDSOA payments received by recipient companies in seven industries: bearings, steel, candles, DRAMs, pasta, crawfish, and softwood lumber. It also discusses the views of recipient and non-recipient companies in these industries on CDSOA's effects. Figure 8 shows the share of CDSOA disbursements received by U.S. companies in the seven industries and in the remaining industries. Bearings are used in virtually all mechanical devices to reduce friction in moving parts. Types of bearings include ball bearings, tapered roller bearings, and spherical plain bearings. The market for bearings is global and dominated by only a few multinational companies. Within the U.S. market, the degree of concentration among different segments of the industry varies; the Census Bureau listed 19 producers of tapered roller bearings and 65 producers of ball bearings in 2003. The Timken Company is the largest U.S. bearings company, but several foreign-owned companies have also had a long-standing presence in this country as bearings producers and are Timken's main competitors in the U.S. market. One foreign-owned producer, for example, has operated U.S. production facilities for over 80 years, while two others have produced in this country for over 25 years. These companies have not been eligible to receive CDSOA disbursements because they did not support the original cases. In 1975, the ITC determined that tapered roller bearings from Japan were harming the domestic industry, and a dumping finding was published the following year. The Department of Commerce subsequently published antidumping orders on tapered roller bearings against Japan, China, Hungary, and Romania in 1987. Commerce then issued antidumping orders for ball bearings, cylindrical roller bearings, and spherical plain bearings from a number of other countries in 1989. Currently, there are eight bearings orders in effect against seven countries. Import penetration of the U.S. market has grown from 5 percent of consumption in 1969 to approximately 25 percent in 2003. When Commerce levied ball bearing dumping duties against Japan, Singapore, and Thailand in 1989, an opportunity arose for China. All of the world's major bearings companies, including Timken, now have manufacturing facilities in China. Timken and Torrington are the two largest CDSOA recipient companies. Together, they received over 80 percent of all disbursements to the bearings industry and one-third of disbursements to all companies in CDSOA's first 4 years. Table 5 shows CDSOA recipients in the bearings industry for fiscal years 2001 through 2004. We obtained the views of three bearings recipient companies. These companies commented that CDSOA has had positive effects, although they varied in their assessments of the extent of the benefit. Bearings recipients reported that CDSOA's greatest impact has been in the areas of net income, employment, and ability to compete.
These companies also commented that CDSOA has had less of an effect on prices, sales, and profits. One company stated that the disbursements helped it to replace equipment and become more competitive, enabling it to recover the position it had held prior to being injured by dumping. Another recipient commented that while the CDSOA disbursements were helpful, they were distributed years after the initial injury and did not fully compensate the company for lost profits due to unfair trade. The bearings recipient companies vary greatly in their overall size. These companies also differ significantly in the amounts they have received through CDSOA, both overall and as a percentage of their sales. For the recipient companies in our case study, CDSOA disbursements in fiscal year 2004 as a percentage of company sales ranged from just over 1 percent to 21 percent, with the larger recipients generally at the low end of this scale. We obtained the views of two non-recipients, one of which reported negative effects, while the other said it is too early to tell the extent of the harm that CDSOA has caused. One company commented that CDSOA is harmful because the antidumping duties it pays are transferred directly to a competitor. The company further stated that the money it is paying in duties limits its ability to invest in its U.S. operations. The other non-recipient company emphasized the size of the CDSOA disbursements in the bearings industry but commented that it is still too early to know the injurious effect these disbursements will have on non-recipient producers. The leading non-recipient producers have not been eligible to receive CDSOA payments because they did not support the original cases. Table 6 provides bearings recipients' and non-recipients' responses to our questionnaire on CDSOA's effects. Table 7 provides these companies' responses to our question on CDSOA's effect on their ability to compete in the U.S. market. We also asked companies to describe how they used the CDSOA payments they received; however, the law does not require that distributions be used for any specific purpose. The bearings recipient companies varied in their responses to this question. One company responded that it has used the disbursements to rebuild production equipment, maintain employment levels, and add more technical personnel for pursuing bearings customers. A second company commented that it does not earmark funds for specific projects; thus, the funds have been spent on debt reduction. The third company did not specify how it used the funds, reiterating that the disbursements were based on previous qualified expenditures and emphasizing that its investments in U.S. bearings production have exceeded the money it received through CDSOA. No clear trend emerged from these companies' production and employment data over the 4 years that CDSOA has been in effect. One recipient's net sales increased from 2001 to 2004, for example, while another's declined. Similarly for employment, one recipient's number of workers decreased over the 4 years, while another's remained about the same. The responses from the non-recipients also did not show a clear trend for production or employment. For two of the three companies, employment declined, while all three companies' net sales increased to varying degrees. Most of the bearings companies that we contacted indicated that they had both domestic and overseas production operations.
Of the three recipient companies, only one reported that it imports CDSOA products, and its imports make up a small share of its overall sales. To obtain bearings companies' views on CDSOA's effects, we sent a set of structured questions to certain CDSOA recipients and certain non-recipients in the bearings industry. CDSOA payments are made in this industry under multiple AD orders that were issued in different years. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments in each of the 4 years that disbursements have been made and the amount of disbursements they have received. Using this information, we developed a list of seven recipients and ranked them by their total CDSOA receipts. We obtained additional information from company representatives and CBP, which led us to combine certain recipients and treat them as three distinct companies. For example, CBP sometimes listed as separate distributions in its annual CDSOA reports payments to entities that were divisions or subsidiaries of other companies that also received CDSOA distributions. We surveyed the three companies, and all of them provided completed surveys. The universe of bearings non-recipients is larger than the universe of recipients. We sought to obtain views from a number of non-recipients comparable to the number of recipients. To identify these companies, we obtained information from associations and others knowledgeable about the industry. Specifically, we obtained information about non-recipient bearings companies by (1) identifying members of the American Bearings Manufacturing Association, (2) asking recipient companies to identify their competitors, and (3) conducting our own research. We surveyed three non-recipient companies, of which two provided completed surveys. These two non-recipients are multinational companies that are among the leading global producers of bearings and have had a long-standing history of production in the United States. The views of the non-recipients that responded to our questions may not be representative of all non-recipients. For this case study, we defined the scope of the steel industry to include companies that produce steel by melting raw materials. The two main types of producers of raw steel are integrated mills and minimills. Integrated producers use older blast furnaces to convert iron ore into steel. They mainly produce "flat" products, such as plate and hot-rolled steel, that are used in transportation equipment, construction, and heavy machinery. The minimills are a scrap-based industry, producing steel from recycled metal products, such as crushed cars or torn-down buildings. They use newer electric-arc furnaces and account for almost all of the industry's "long" production, including wire rod and rebar. The top three domestic steel producers—Mittal, U.S. Steel, and Nucor—together account for about half of overall domestic steel production, which is approximately 100 million tons a year. A third, much smaller sector of the industry is the specialty, or stainless, sector. These producers also use electric-arc furnaces and represent about 2 percent of the overall industry's output and about 10 percent of its value. The steel industry is by far the largest user of AD/CV duty orders, with over 125 iron and steel mill orders in place as of June 2005. Several industrywide trends occurring at the same time as CDSOA disbursements are relevant.
Between 1997 and 2003, 40 steel companies declared bankruptcy, with some of them ceasing operations altogether. CDSOA recipients were not immune from this general trend; several of them have declared bankruptcy, and various firm consolidations have also occurred. The Asian financial crisis was an important factor in increasing steel imports to this country, as Asian demand for steel dropped and foreign steel companies increasingly looked to the United States as a market for their products. The surge in imports led to the filing of relief petitions on hot-rolled steel against Russia, Japan, and Brazil beginning in 1998. Companies subsequently filed relief petitions against 11 other countries. In 2002, the President also took action under section 201 of the Trade Act of 1974, which allows him to implement temporary relief when an industry has been seriously injured by surging imports. Under this authority the President announced a series of safeguard tariffs of up to 30 percent on a range of steel products. These tariffs, which were imposed in addition to the AD/CV duties, remained in place from March 2002 until late 2003. Much of the industry returned to profitability in 2004, when prices rose. Table 8 depicts the top 10 CDSOA recipients for steel in fiscal years 2001 through 2004. Recipient steel companies varied in their assessments of the payments' effects but generally agreed that the payments had a positive impact in the areas of net income and investment in plant, property, and equipment. For example, several recipients said disbursements enabled them to make investments needed to survive the steel crisis and be competitive in the future. The companies also generally stated that CDSOA disbursements have had little or no effect on prices, net sales, and market share. Some steel recipients also commented that CDSOA has not been a complete solution to the problems they faced due to unfairly traded imports. One recipient commented, for example, that while CDSOA payments could be presumed to have had a tangible benefit for the industry, they have not come close to erasing the years of financial injury brought on by unfairly traded steel products. Some steel companies acknowledged that the CDSOA disbursements have not been significant in relation to their size or capital expenditure needs. For each of the 13 steel companies in our case study, the CDSOA disbursements received amounted to less than 1 percent of net sales in fiscal year 2004. Table 9 provides steel recipients' responses to our questionnaire on CDSOA's effects. Table 10 provides these companies' responses to our question on CDSOA's effect on their ability to compete in the U.S. market. We also asked companies to describe how they used the CDSOA payments they received; however, the law does not require that distributions be used for any specific purpose. The steel recipient companies generally did not provide specific replies to this question. Their general comments included that they used the CDSOA payments to make capital investments, reduce debt, and assist in acquiring steel-making assets. Sales, profit, and income figures generally improved markedly for the steel companies between 2003 and 2004, as the overall industry enjoyed a strong rebound from the previous years. In some cases, companies went from net losses to net income between these 2 years. Some companies also expanded greatly across all categories as they grew by acquiring the assets of other companies.
Overall, some companies gained employees, while others lost them. None of the recipient steel companies responding to our questionnaire reported being involved in overseas production or importation of CDSOA products. To obtain steel companies' views on CDSOA's effects, we sent a set of structured questions to certain steel CDSOA recipients and non-recipients. CDSOA payments are made in this industry under multiple steel and steel-related AD and CV orders that were issued over several years. For this case study, we defined the scope of the steel industry to include only companies that produce steel by melting raw materials. Our scope excludes companies that primarily make steel-related products (such as pipe or tubing) from purchased raw steel. As discussed below, we were not able to obtain information from steel non-recipients on CDSOA's effects. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments, according to our definition of the industry, in each of the 4 years that disbursements have been made and the amount of disbursements they have received. We obtained information from representatives of the ITC, Commerce, and industry associations to determine precisely which companies fit our definition of the steel industry. Using this information, we developed a list of 69 recipients and ranked them by their total CDSOA receipts. Because of time and resource constraints, we decided to survey the top 15 steel recipient companies, which had received 90 percent of the distributions made under the orders included in our scope. Two of these companies had ceased operations. We surveyed the remaining 13 companies and received completed surveys from all of them. The 13 respondents accounted for about 72 percent of the CDSOA payments to this industry; their views may not be representative of all recipients, particularly those that received relatively small CDSOA receipts. The universe of steel non-recipients is larger than the universe of recipients. We sought to obtain views from a number of non-recipients comparable to the number of recipients. To identify these companies, we obtained information from associations and others knowledgeable about the industry. Besides the ITC, we spoke with several steel industry associations (the American Iron and Steel Institute, the Steel Manufacturers Association, and the Specialty Steel Industry of North America) to identify leading steel non-recipients. We also asked recipient companies to identify their competitors. Based on these meetings and our own research, we surveyed 12 leading non-recipient steel companies, from which we received 1 completed survey. However, this survey did not include comments or views on CDSOA's effects. As a result, we are not able to present the views of steel non-recipient companies on CDSOA's effects. Petroleum wax candles are produced in several forms, including columns or pillars, wax-filled containers, tapers or dinner candles, votives, and novelty candles. They are sold to consumers through retail outlets, with the largest share sold through mass merchandisers (such as Wal-Mart or Target), followed by department stores, discount retailers, card and gift shops, and door-to-door sales through membership groups. The majority of petroleum wax candles are produced and imported for national markets.
Petroleum wax candles are produced in several forms, including columns or pillars, wax-filled containers, tapers or dinner candles, votives, and novelty candles. They are sold to consumers through retail outlets, most commonly mass merchandisers (such as Wal-Mart or Target), followed by department stores, discount retailers, card and gift shops, and door-to-door sales through membership groups. The majority of petroleum wax candles are produced and imported for national markets. The number of domestic producers has grown from over 100 when the ITC performed its original investigation in 1986 to over 400 at the time of its second 5-year review in 2005. Only 10 domestic candle producers are eligible for CDSOA payments. Table 5 shows these companies’ CDSOA disbursements and claims. According to the ITC, these recipients, in addition to approximately 35 other candle producers, make up 70 percent of U.S. candle production.

In 1985, a petition was filed by the National Candle Association (NCA) alleging that the U.S. candle industry was materially injured by dumped imports of petroleum wax candles from China. The ITC determined injury in 1986, and Commerce issued an antidumping duty order of 54 percent on all Chinese producers and exporters. The ITC conducted a 5-year expedited review in 1999, and the duty doubled from 54 percent to 108 percent after another expedited review in 2004. U.S. producers’ share of the market by quantity (pounds) went from 43 percent in calendar year 1999 to 53 percent in calendar year 2004. Imports from China, which some perceive as lower-end candles, accounted for 20 percent in 1999, rising to 27 percent in 2004. U.S. producers and Chinese suppliers have both gained market share in recent years. U.S. producers’ share of the market by dollar value was 66 percent in 1999, rising to 70 percent in 2004, while China’s share rose from 10 percent in 1999 to 14 percent in 2004. The ITC is presently conducting a full 5-year “sunset” review of this order and recently presented its findings to Commerce. Also, Commerce is considering whether the scope of the order should be changed, inquiring whether mixed wax candles composed of petroleum wax and varying amounts of either palm or vegetable wax are sufficiently different that they are not subject to the current order. Table 11 depicts CDSOA recipients for candles in fiscal years 2001 through 2004.

Recipients report that CDSOA distributions have had positive effects on their net income; on their property, plant, and equipment; and on research and development. One of the larger recipients of CDSOA distributions claims that these payments have lessened the need to consider outsourcing its candle production abroad. However, the company reported that because of the effects of dumped Chinese candles, it continues to lay off workers, though fewer than it might have absent the CDSOA funds. Other recipients claim to have developed new, better, and safer candles by reinvesting CDSOA disbursements in research and development. Fiscal year 2004 CDSOA disbursements as a percentage of company sales range from 0.4 percent to 34.7 percent for the 10 recipient candle companies, with most companies’ shares in the higher end of this range.

Non-recipients report that CDSOA distributions to their competitors have had negative effects on their ability to compete in the market, on their gross profits, and on net income. They also reported very negative effects on industry competition. One non-recipient company has closed two of four domestic manufacturing facilities, eliminated or reduced shifts, and released workers. Another non-recipient company claims that its CDSOA-recipient competitors could reduce selling prices; while the company matched competitors’ lower prices, it made no profit. Because of this, it has recently exited this segment of the candle business and released workers accordingly. Some non-recipients also expressed the view that their ineligibility for CDSOA disbursements is unfair.
One non-recipient company joined the NCA a few years after the issuance of the order and became a leader of the organization, but stated that it has no institutional memory of receiving an ITC questionnaire during the original investigation in 1986. This company said it has supported the order, as well as NCA’s efforts to defend the order, since joining the NCA. Another non-recipient is ineligible by virtue of having been acquired by a firm that opposed the original investigation, and it was unsuccessful in its legal challenge of this determination. Table 12 shows candle recipients’ and non-recipients’ responses to our questionnaire on CDSOA’s effects. Table 13 depicts these companies’ responses to our question on CDSOA’s effect on their ability to compete in the U.S. market.

We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. Several recipients claim that they have used CDSOA funds to invest in new and better equipment, and in research and development. One recipient company reports that it has been able to offer employees consistent and comprehensive benefits packages due to CDSOA funds.

For smaller candle companies—both recipient and non-recipient respondents alike—net sales have stagnated, as has employment of production and related workers. Some of the larger non-recipient respondents appear to have experienced some growth in these categories, while some of the larger recipients seem to have experienced some decline or stagnation in net sales and some growth or stagnation in production and employment. Most candle companies are strictly domestic producers; however, one non-recipient stated that it would start to import some of its candle products from Asia in order to keep its costs down.

To obtain the views of candle companies on CDSOA’s effects, we sent out a set of structured questions to candle CDSOA recipients and certain non-recipient companies within the industry. CDSOA payments are made under one AD order that was issued in 1986. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments in each of the 4 years that disbursements have been made. Using this information, we developed a list of 11 recipients and ranked them by their total CDSOA receipts. One of these companies now receives CDSOA payments under the name of its parent company, leaving 10 distinct companies. We sent surveys to all recipient companies, and all of them provided completed surveys.

The universe of candle non-recipients is larger than the universe of recipients. We sought to obtain views from a number of non-recipients comparable to the number of recipients. To identify these companies, we obtained information from associations or others that were knowledgeable about the industry. Specifically, we (1) obtained a list of members of the NCA from its website; (2) corroborated this list with information from a recent ITC publication; and (3) obtained information about certain non-NCA members based on our own research. Because of time and resource constraints, most of the non-recipient candle companies we contacted are members of the NCA. Surveys were sent to non-recipient candle companies for which an e-mail address could be obtained either from the NCA list or from the company directly. We surveyed 26 non-recipient candle makers, of which 8 provided completed surveys.
Respondents included two relatively large candle companies whose net candle sales were similar in magnitude to those of one of the largest candle CDSOA recipients, and several smaller candle companies whose net sales were similar to or slightly larger than those of several of the smaller CDSOA candle recipients. The views of these respondents may not be representative of all non-recipients.

The bulk of pasta production in the United States is dry pasta, with production of frozen or refrigerated pasta constituting a smaller portion of the U.S. industry. After several decades of mergers and acquisitions, and the 2001 sale of one major producer’s production facilities and brand names to two of its competitors, the industry’s current structure reflects a high degree of concentration among a few large producers. The four largest U.S. producers as of 2001, based on ITC data, were American Italian Pasta Company, New World Pasta, Dakota Growers Pasta Company, and Barilla America, Inc. (a U.S. subsidiary of an Italian pasta company that was set up in 1998 after antidumping and countervailing duty orders on Italian dry pasta imports were issued). An industry expert estimated that these four companies currently account for about 80 percent of dry pasta production in the United States, with the remainder supplied by smaller or specialty companies. Three of the four are eligible for CDSOA disbursements, but Barilla America, Inc., whose share of U.S. production is growing and which said it imports only a small percentage of the pasta it sells here, is not.

Overall demand for dry pasta in the United States has been declining since the late 1990s, a trend that has been exacerbated, according to dry pasta companies and industry experts, by diets that emphasize low-carbohydrate intake. Further, the industry has been experiencing decreased sales, excess capacity, and plant closures. Among the more significant indicators of the downturn, New World Pasta—a leading CDSOA recipient—filed for Chapter 11 bankruptcy protection in 2004. According to ITC, about three-fourths of U.S. consumption of dry pasta in 2000 was supplied by domestic producers, with the remainder supplied by imported products. At that time, the largest sources of imported pasta were Italy, Canada, Korea, and Mexico.

Several U.S. producers petitioned for relief from rapidly growing imports in 1995. In 1996, Commerce issued antidumping and countervailing duty orders on certain pasta imports from Italy and Turkey. Initial AD duties ranged from 0 to about 47 percent on Italian pasta and from about 61 to 63 percent on Turkish pasta, while initial CV duties ranged from about 0 to 11 percent on Italian pasta and from about 4 to 16 percent on Turkish pasta. Since Commerce issued the orders, dry pasta imports from Italy have declined, and Turkey is no longer a leading supplier of pasta to the United States. The ITC completed a sunset review in 2001 that extended the orders until 2006. The top seven CDSOA recipients have received about 99 percent of the payments made to the industry, with American Italian Pasta Company and New World Pasta/Hershey Foods receiving 70 percent of total payments. Table 14 shows total payments made to all dry pasta CDSOA recipients in fiscal years 2001 through 2004.

The four pasta recipients that responded to our survey viewed the CDSOA program as having mostly positive effects on their companies. The two largest recipients did not respond to our survey, and we did not contact the three smallest recipients.
All respondents cited the most positive company effects in the areas of profit; income; and investment in property, plant, and equipment; and most cited positive effects on net sales and ability to compete. Some recipient companies noted that the program has enhanced their ability to increase production through plant expansions and upgrades; improved their cash flow, allowing them more operating flexibility; reduced manufacturing costs; and enhanced some companies’ competitive position. Funds have also helped some companies develop new products. CDSOA disbursements to the pasta industry have been small compared to each company’s net sales. For example, fiscal year 2004 CDSOA payments to the pasta companies that responded to our survey represented about 1 percent or less of each company’s 2004 net sales.

Among the six pasta non-recipients that responded to our survey, views about the effect of CDSOA funds were mixed. A few said the funds had affected their companies negatively in certain areas or created an unfair competitive environment in the industry, while others thought effects were minimal or could not judge the program’s effects. About half of the non-recipients thought the program has had little or no effect on their companies in the areas of employment, prices, sales, investment, or market share. Some non-recipients thought the program had negatively affected their company’s profits, income, and ability to compete. Some non-recipients said that the program has probably helped recipients cut prices, and that this has created an unfair advantage in the industry for recipients. One non-recipient stated that it has had to transfer substantial sums of money to its competitors because of CDSOA, and that these funds would likely have been used for product development, capital investment, and expansion at its U.S. facility. Table 15 provides pasta recipients’ and non-recipients’ responses to our questionnaire on CDSOA effects. Table 16 provides these companies’ responses to our question on CDSOA’s effects on their ability to compete in the U.S. market.

We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. Recipients used CDSOA funds for a variety of purposes. For example, some said they used the funds to purchase new equipment or upgrade existing equipment; reduce manufacturing costs and improve cash flow; increase production capacity; and invest in research and product development. This, in turn, led to increased production and employment among some companies. One company that did not respond to our survey disclosed in its 2003 annual report that it used a significant portion of the funds to increase investment in brand-building activities and to strengthen the company’s organization. One recipient noted that CDSOA funds have been helpful because margins in the industry are very thin and competition is strong. Because CDSOA improved one company’s bottom line, that company was able to obtain more attractive financing rates.

Our information about the effect of CDSOA on net sales and employment in this industry is limited because the two largest companies did not respond to our survey. Although press coverage of the industry has noted generally declining net sales among U.S. dry pasta companies in recent years, the companies that responded to our questions reported general increases in net sales during 2001 through 2004.
Specifically, two companies reported increased sales in the 2001 through 2004 time frame, and two companies reported fluctuating sales that were higher at the end of the period than at the beginning. Among recipient respondents, two companies’ employment levels generally increased, and two companies’ employment levels generally decreased since the implementation of CDSOA. Among non-recipient respondents, net sales and employment showed mixed trends. Three companies reported increased sales, one company reported fluctuating sales that were higher at the end of 2004, and two companies reported decreased net sales. Three companies reported generally increased employment levels, and three reported general decreases.

All of the recipient pasta companies that responded to our survey produce their product only in the United States. However, the top CDSOA recipients that did not respond to our survey produce pasta both domestically and in other countries. Four of the non-recipients produce exclusively in the United States, and two produce both domestically and overseas.

To obtain pasta companies’ views on CDSOA’s effects, we sent out a set of structured questions to certain pasta CDSOA recipients and non-recipients. CDSOA payments are made in this industry under two AD and two CV orders that were issued simultaneously. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments in each of the 4 years that disbursements have been made and the amount of disbursements they have received. Using this information, we developed a list of 11 recipients and ranked them by their total CDSOA receipts. CBP provided additional information that indicated there were actually 10 distinct companies. Because of time and resource constraints, we decided to survey the top seven companies, which had received 99 percent of the total payments made under these orders; we received four completed surveys. The two pasta companies that are the top CDSOA recipients did not respond to our survey. Our information about CDSOA effects for recipients is therefore limited to the four pasta companies that responded, which together accounted for about 27 percent of CDSOA payments to this industry.

The universe of dry pasta non-recipients is larger than the universe of recipients. We sought to obtain views from a number of non-recipients comparable to the number of recipients, but we had difficulty identifying non-recipient dry pasta companies. To identify these companies, we obtained information from associations or others that were knowledgeable about the industry. Specifically, we obtained company names and contact information from (1) the website of the National Pasta Association, which presently carries out only limited activities on behalf of the industry; (2) an association management company that handles administrative matters for the National Pasta Association; (3) a directory of pasta companies published on http://www.bakingbusiness.com, a division of Milling and Baking News, which is a business news organization that ITC had identified as closely following the pasta industry; and (4) other pasta companies. Many of the companies we identified through these sources were not makers of dry pasta as defined in the orders, but were instead makers of egg noodles, fresh or refrigerated pasta, couscous, and boxed or frozen foods that use pasta, or were flour mills or other companies linked to the production of dry pasta.
We surveyed eight non-recipient dry pasta manufacturers, from which we received six completed surveys. The respondents include the fourth-largest dry pasta manufacturer in the United States, several smaller pasta companies that produce durum wheat pasta, one company that produces wheat-free pasta, and one company that produces exclusively organic pasta. The views of these respondents may not be representative of all non-recipients.

Dynamic random access memory (DRAM) semiconductors are considered commodity products and compete largely on the basis of price; DRAMs of similar density, access speed, and variety are generally interchangeable regardless of the country of fabrication. Today, four companies produce DRAMs in the United States: Micron Technology is a U.S. company, Infineon Technologies is a spin-off of the German company Siemens, and Samsung Electronics and Hynix Semiconductor are Korean companies. All of these companies now have production facilities in the United States as well as abroad, but the latter three have entered the U.S. industry within the past decade. The DRAM industry is cyclical, with demand driven by investment in computers and other end products. Fabrication facilities are costly and require complete replacement approximately every 10 years. Due to high fixed costs, chip manufacturers cannot afford to scale down production; they must constantly produce chips and invest or go out of business.

One countervailing duty order is currently in effect, covering DRAMs produced by Hynix only. This duty order came into effect in 2003, and its duty rate is currently 44 percent. Micron Technology received the bulk of distributions in this industry because it was the sole recipient of duties from two antidumping orders dating from the 1990s on DRAMs and other kinds of chips. Payments were made to Micron on DRAMs of 1 megabit and above under an AD order issued in 1993 and revoked in 2000, as well as on an AD order on SRAMs (static random access memory chips) issued in 1998 and revoked in 2002. The vast majority of CDSOA disbursements to the industry (approximately $33 million) in fiscal years 2001 through 2004 were related to these orders. Infineon did not incorporate in the United States until 2000 and, therefore, did not participate in the earlier investigations. Both Infineon and Micron are eligible and received disbursements under the current order, but Hynix and Samsung are not eligible because they opposed the petition. Because DRAMs are a technologically dynamic product, it is expected that Commerce will revoke these orders when the subject products are obsolete. New products may become the subject of new petitions and orders, thereby creating new potential CDSOA recipients. Table 17 depicts CDSOA recipients for DRAMs in fiscal years 2001-2004.

The two recipients of CDSOA disbursements reported mixed effects. One recipient reported that, although it was operating at a net loss at the time, CDSOA distributions improved its profitability, investment, employment, and research and development. The company noted that payments would be of greater help if they were made soon after other countries began their unfair trade practices. The other recipient reported that disbursements were immaterial to its operations. Fiscal year 2004 CDSOA disbursements equaled less than 1 percent of each company’s sales. Table 18 presents DRAM recipients’ responses to our questionnaire on CDSOA’s effects.
Table 19 shows companies’ responses to our question on CDSOA’s effect on their ability to compete in the U.S. market. We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. One recipient uses CDSOA distributions to fund U.S. operations and to invest in new U.S. production equipment. The other recipient also uses distributions in operations.

Historically, the DRAM market has been subject to periods of “boom and bust.” Both CDSOA recipients reported some net losses and have experienced slight declines in the number of production and related workers during the past 4 fiscal years. One company has DRAM production facilities in three U.S. states as well as in Japan, Italy, and Singapore. The other indicated that it has both domestic and foreign production facilities; it also noted that DRAMs manufactured in the United States can be sold abroad, and DRAMs manufactured abroad can in turn be sold here.

To obtain the views of DRAM-producing companies on CDSOA’s effects, we sent a set of structured questions to the two CDSOA recipients. Current CDSOA payments on DRAMs are made under a CV order issued in 2003. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments in each of the 4 years that disbursements have been made. CBP identified two companies. We surveyed both recipient companies, and both provided completed surveys. To identify non-recipients, we consulted the recipient companies to identify their competitors, and we obtained information on domestic producers from the ITC’s final determination on DRAM and DRAM Modules from Korea. Two U.S. subsidiaries of Korean companies are considered domestic producers that opposed the petition for the current order. We attempted to contact these companies but were unsuccessful in our efforts. We did not attempt to contact a fifth company that is also considered a domestic producer; this company does not list the major DRAM producers as competitors and has no fabrication facilities. ITC listed other domestic producers for the purposes of its investigation, but these companies have since ceased DRAM production or have ceased to exist.

Crawfish are freshwater crustaceans that resemble lobsters but are considerably smaller. U.S. commercial production of crawfish is concentrated within a relatively small area of southern Louisiana, where crawfish are harvested in the wild by fishermen and farmed in ponds. Crawfish may be sold whole and live, whole and boiled, or as fresh or frozen tail meat. Whole crawfish and fresh tail meat are consumed primarily in Louisiana and neighboring states, where there is generally a preference for local products in season. Tail meat is also sold more broadly throughout the United States. U.S. producers supply whole crawfish and fresh and frozen tail meat, whereas imports, mainly from China, are primarily frozen tail meat. U.S. businesses that process whole crawfish into tail meat are primarily small, family-owned concerns. Inexpensive imports and poor harvests have driven many domestic crawfish processors out of business in recent years. It is estimated that there were over 100 processors in Louisiana in the 1980s and early 1990s, but that number has dropped by more than half. In 1996, the Crawfish Processors Alliance, an industry association, and the Louisiana Department of Agriculture and Fisheries filed a petition alleging that U.S.
processors of crawfish tail meat were being injured by dumped imports of crawfish tail meat from China. Significant imports of tail meat began in the mid-1990s, and ITC estimates that imports’ share of consumption grew from just over 60 percent in 1997 to about 87 percent in 2002. In 1997, Commerce issued an antidumping order on crawfish tail meat and imposed antidumping margins that ranged from about 92 to about 202 percent. Table 20 depicts the top 10 CDSOA recipients for crawfish in fiscal years 2001-2004.

CDSOA recipient respondents in the crawfish tail meat processing industry stated that the program has generally had positive effects for the industry and their companies. Several recipient respondents credited CDSOA with saving the domestic crawfish processing industry. Because of the program, they said, businesses remained open, employees kept their jobs, and crawfish fishermen continued to fish. The areas in which positive effects were most often cited were income; profits; investment in property, plant, and equipment; employment; and ability to compete. The program was generally seen as having little or no effect on prices, research and development, and market share. Many recipients stated that the program had encouraged them to purchase and process more crawfish and freeze more tail meat for sale in the off-season, leading to increased employment among some processors and higher sales volumes for crawfish farmers and fishermen.

Many respondents noted the poor collection rate and enforcement of the AD order for crawfish and viewed the CDSOA program as providing their only effective relief from dumped imports. (CBP disbursed about $9.8 million to crawfish processors in fiscal year 2003 but reported that the uncollected duties related to crawfish in that year were about $85.4 million. In fiscal year 2004, CBP disbursed about $8.2 million to the industry, but uncollected duties rose to about $170 million. Nearly two-thirds of all uncollected duties in fiscal year 2004 were related to the crawfish order.) Recipients complained that widespread non-payment of duties means Chinese crawfish continues to enter the U.S. market unabated. In its 2003 review to evaluate continuation of the AD order, ITC found that Chinese tail meat undersold (was sold at a lower price than) domestic tail meat to the same degree with the AD order in place as it had before the order was issued, suggesting that the order has not affected the price of imported tail meat.

Although CDSOA disbursements in this industry have been small compared to those in certain other industries, these payments have been significant for some recipients when compared to net sales. For the 16 recipients that responded to our survey, fiscal year 2004 CDSOA disbursements as a percentage of 2004 net sales ranged from a low of about 4 percent for one company to a high of about 350 percent for another. Among the other respondents, four companies’ fiscal year 2004 disbursement was about 15 to 18 percent of their net sales that year, five companies’ disbursement was about 27 to 33 percent of their net sales, and four companies’ disbursement was between 52 and 96 percent of their net sales. One company did not report any net sales information to us.

Non-recipient crawfish processors that responded to our survey said that the CDSOA program has helped recipient companies but has harmed non-recipient companies by creating conditions of unfair competition among domestic processors.
Most non-recipients cited negative effects for their companies in terms of ability to compete, net sales, profits, income, investment, and employment, which are generally the areas where recipients saw positive effects. Several non-recipients stated that they were unable to compete with the CDSOA recipients. For example, several non-recipients said that recipient companies were offering tail meat for sale at prices that were below the cost of production and were able to do so because their CDSOA funds would compensate them for any losses. In such conditions, some non-recipients said they cannot operate profitably, and some decided to stop producing tail meat in recent years. Table 21 provides crawfish recipients’ and non-recipients’ responses to our questionnaire on CDSOA’s effects. Table 22 provides these companies’ responses to our question on CDSOA’s effect on their ability to compete in the U.S. market.

We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. Recipient companies reported a wide range of uses for the funds. For example, most of the companies that reported this information said they purchased or upgraded equipment, buying new or larger delivery trucks, boilers, ice machines, freezers, coolers, and vacuum-pack machines. Several companies bought more crawfish to peel and hired more employees, thereby increasing their production of tail meat. Several companies said that they made investments and repairs to their plants, such as installing or expanding docks for receiving shipments of whole crawfish. Several also paid off long-standing company and personal debts. For example, the head of one small family-run company said he paid off mortgages on the plant and his residence, bought new equipment, and made needed repairs without incurring new financing costs. One company said that it started a pension plan for its employees.

More than half of the recipient companies that we surveyed had growing net sales in the 2001 through 2004 time frame. Other companies’ net sales fluctuated, decreased, or were relatively stable. Several respondents said that one of the most significant outcomes of the CDSOA program was to encourage them to purchase and process more crawfish and freeze more tail meat for sale in the off-season, thereby improving their year-round cash flow. Most non-recipients that responded to our survey did not provide net sales information. More than half of the crawfish recipient respondents also reported growth in employment levels, and some of these increases were significant. One company quadrupled the number of production and related workers during the 2001 through 2004 period (from 28 to 111), and the number of such workers at three other companies doubled. Several stated that CDSOA enabled them to hire more people. Three recipients reported net decreases in the number of production and related workers in this time period. Non-recipients also generally did not report employment information. Survey respondents said they process tail meat exclusively in the United States. We did not gather any information that disclosed whether, in the course of doing business, any of these processors also import or offer imported tail meat for sale.

To obtain crawfish tail meat processing companies’ views on CDSOA’s effects, we sent out a set of structured questions to certain crawfish CDSOA recipients and non-recipients.
CDSOA payments are made in this industry under one AD order. To identify CDSOA recipients, we obtained information from CBP about the companies that have received payments in each of the three years that disbursements have been made and the amount of disbursements they have received. Using this information, we developed a list of 35 recipients and ranked them by their total CDSOA receipts. CBP provided additional information that indicated that certain companies had received funds under different names in different years. Because of time and resource constraints, we decided to survey 20 of the top recipients, which had received about 90 percent of the total payments made under this order. We received 16 completed surveys. These 16 companies accounted for about 73 percent of CDSOA payments to this industry; their views may not be representative of all recipients, particularly those that received relatively small CDSOA disbursements.

The size of the universe of crawfish non-recipients is not known. We sought to obtain views from a number of non-recipients comparable to the number of recipients, but we had difficulty identifying non-recipient crawfish companies. To identify these companies, we obtained information from associations or others that were knowledgeable about the industry. Specifically, we obtained contact information for current and former tail meat processors that are non-recipients from (1) a law firm that represents the Crawfish Processors Alliance, an entity that was a petitioner in this case; (2) the Louisiana Department of Agriculture and Fisheries, an entity that was a petitioner in this case; (3) the Louisiana Department of Health and Hospitals, which licenses and inspects processors; and (4) certain other tail meat processors. We lacked accurate contact information for several of these companies. We surveyed 17 current and former processors, from which we received 9 completed surveys. The views of these respondents may not be representative of all non-recipients.

Softwood lumber generally comes from conifers or evergreen trees, including pine, spruce, cedar, fir, larch, Douglas fir, hemlock, cypress, redwood, and yew. Softwood is easy to saw and is used in structural building components. It is also found in other products such as mouldings, doors, windows, and furniture. Softwood is also harvested to produce chipboards and paper. U.S. softwood lumber producers are generally located in the Southeast and Northwest, with northwestern softwood lumber being comparable to Canadian softwood lumber. CDSOA disbursements to the softwood lumber industry went to 143 companies in fiscal years 2003 and 2004. According to one estimate, about half of the softwood lumber companies are eligible to receive these disbursements.

Canada’s share of the U.S. lumber market rose from less than 3 billion board feet (BBF) and 7 percent of the market in the early 1950s to more than 18 BBF per year and 33 percent of the market in the late 1990s. In 2003, U.S. imports of softwoods were 49,708 thousand cubic meters, and the ratio of these imports to consumption was 37.4 percent. Since 1981, the United States and Canada have been involved in several softwood lumber disputes, leading to, among other things, a 15 percent Canadian tax on lumber exports in 1986; a countervailing duty of 6.51 percent on Canadian imports in 1992, which ended in 1994; and a 1996 Softwood Lumber Agreement restricting Canadian exports for five years, until 2001. The United States
again imposed antidumping and countervailing duties on Canadian imports in 2002. From May 2002 to December 2004, most Canadian softwood lumber exported to the United States was subject to a combined antidumping and countervailing duty of 27 percent. In December 2004, this combined duty was reduced to 21 percent. These two duty orders funded about $5.4 million in CDSOA disbursements to U.S. softwood lumber companies in fiscal years 2003 and 2004. Leading U.S. softwood lumber producers are among the industry’s top CDSOA recipients. However, major U.S. producers are also among those ineligible to receive CDSOA disbursements. CBP has received over $3.7 billion in deposits to cover estimated duties on softwood lumber imports from Canada. Table 23 depicts the top 10 CDSOA softwood lumber recipients for fiscal years 2003-2004.

Recipient and non-recipient companies generally noted that, because CDSOA disbursements had been so small in fiscal years 2003-2004, totaling about $5.4 million, they had had little or no effect on their companies. Although recipient companies vary greatly in their overall size, these companies do not vary significantly in terms of the amount they have received through CDSOA as a percentage of their sales in fiscal year 2004. Specifically, the ratio of CDSOA disbursements to company sales amounted to less than 1 percent for the recipient companies in our study. However, some recipient and non-recipient companies emphasized that, if the United States ever were to liquidate and disburse the large amount of softwood lumber duties currently being held in deposit by Treasury, these disbursements would have major effects on both recipient and non-recipient companies. One recipient company noted that these disbursements would have positive effects on its business, while a non-recipient company emphasized negative effects. Because capital is a major factor in competitiveness, a non-recipient company stated that, if recipient companies were to invest large CDSOA disbursements in new mills, they would be able to dramatically increase their efficiency, output, and market share. Table 24 provides softwood lumber recipients’ and non-recipients’ responses to our questionnaire on CDSOA’s effects.

Recipient and non-recipient companies generally reported that the CDSOA disbursements had had no effect on their companies’ ability to compete in the U.S. market. Table 25 presents these companies’ responses to our question on CDSOA’s effect on their ability to compete in the U.S. market. We also asked companies to describe how they used the CDSOA payments that they received. However, the law does not require that distributions be used for any specific purpose. Overall, companies noted that they had used the payments for a variety of purposes, such as paying debt, covering past qualifying expenditures, funding general operating and corporate expenses, and making capital investments. Others noted that the payments had been too small to track their use in any area.

Overall, the recipient and non-recipient companies we contacted vary significantly in size. Both groups show slight increases in net sales and employment over the 4 years that CDSOA has been in effect. Leading U.S. producers are among both the CDSOA recipient and the non-recipient companies. Most recipient companies we contacted produced CDSOA-related products domestically. Some non-recipient companies we contacted produced these products domestically; others produced them both domestically and abroad.
To obtain softwood lumber companies’ views on CDSOA’s effects, we sent out questionnaires to certain softwood lumber CDSOA recipients and non-recipients. CBP made CDSOA payments to recipients in this industry in fiscal years 2003 and 2004 under an AD order and a CV order, both issued in 2002. To identify CDSOA recipients, we obtained information from CBP about the companies that had received CDSOA payments in the 2 fiscal years and the amount of disbursements they had received. Using this information, we developed a list of 143 recipients and ranked them by their total CDSOA receipts in the 2 fiscal years. Because of time and resource constraints, we decided to survey the top 14 recipients, which had received about 60 percent of the total softwood lumber payments. CBP provided contact information on these companies to us. From these 14 companies, we received 13 completed surveys. These 13 companies accounted for about 59 percent of all softwood lumber disbursements. Their views may not be representative of all recipients, particularly those that received relatively small CDSOA disbursements.

Given that about half of the industry is eligible to receive CDSOA disbursements, we sought to obtain views from a comparable number of recipients and non-recipients. To identify non-recipient companies, we obtained information from public and private sources that are knowledgeable about the industry. Specifically, we obtained information on non-recipients from the ITC and softwood lumber companies. We surveyed 15 companies and received six completed surveys from them. These respondents included a wide range of top non-recipients, including one of the largest companies in the industry. However, their views may not be representative of all non-recipients.

Kim Frankena served as Assistant Director responsible for this report, and Juan Tapia-Videla was the Analyst-in-Charge. In addition to those named above, the following individuals made significant contributions to this report: Shirley Brothwell, Ming Chen, Martin de Alteris, Carmen Donohue, John Karikari, Casey Keplinger, Jeremy Latimer, and Grace Lui. The team benefited from the expert advice and assistance of Jamie McDonald, Jena Sinkfield, Tim Wedding, and Mark Speight.
Between fiscal years 2001 and 2004, the Continued Dumping and Subsidy Offset Act (CDSOA) provided over $1 billion, funded from import duties, to U.S. companies deemed injured by unfair trade. Some supporters state that CDSOA helps U.S. companies compete in the face of continuing unfair trade. Some opponents believe CDSOA recipients receive a large, unjustified windfall from the U.S. Treasury. Also, 11 World Trade Organization (WTO) members lodged a complaint over the law at the WTO. This report assesses (1) key legal requirements guiding and affecting agency implementation of CDSOA; (2) problems, if any, U.S. agencies have faced in implementing CDSOA; and (3) which companies have received CDSOA payments and the payments’ effects on recipients and non-recipients; it also describes (4) the status of WTO decisions on CDSOA.

Congress enacted CDSOA to strengthen relief to injured U.S. producers. The law’s key eligibility requirements limit benefits to producers that filed a petition for relief or that publicly supported the petition during a government investigation to determine whether injury had occurred. This law differs from trade remedy laws, which generally provide relief to all producers in an industry. Another key CDSOA feature requires that Customs and Border Protection (CBP) disburse payments within 60 days after the beginning of a fiscal year, giving CBP limited time to process payments and perform desired quality controls. This time frame, combined with a dramatic growth in the program workload, presents implementation risks for CBP.

CBP faces three key implementation problems. First, processing of company claims and CDSOA payments is problematic because CBP’s procedures are labor intensive and do not include standardized forms or electronic filing. Second, most companies are not accountable for the claims they file because they do not have to support their claims and CBP does not systematically verify the claims. Third, CBP’s problems in collecting duties that fund CDSOA have worsened. About half of the funds that should have been available for disbursement remained uncollected in fiscal year 2004.

Most of the CDSOA payments went to a few companies, with mixed effects. About half of these payments went to five companies. Top recipients we surveyed said that CDSOA had beneficial effects, but the degree varied. In four of seven industries we examined, recipients reported benefits, but some non-recipients noted that CDSOA payments gave their competitors an unfair advantage. These views are not necessarily representative of the views of all recipients and non-recipients.

Because the United States has not brought CDSOA into compliance with its WTO obligations, it faces additional tariffs on U.S. exports covering a trade value of up to $134 million, based on 2004 CDSOA disbursements. Recently, Canada, the European Union, Mexico, and Japan imposed additional duties on various U.S. exports. Four other WTO members may follow suit.
The federal government has established a policy to develop its employees through training programs in order to improve public service, increase efficiency and economy, and build and retain a force of skilled and efficient employees, among other things. In 1967, President Johnson signed Executive Order No. 11348 to provide agency heads and OPM with presidential direction on how training is to be carried out. Under Executive Order No. 11348, OPM is responsible for planning and promoting the development, improvement, coordination, and evaluation of training in accordance with chapter 41 of title 5 of the U.S. Code and the established policy. Chapter 41 of title 5 sets forth the statutory framework for federal government training and development. The executive order further requires OPM to identify functional areas in which new or expanded interagency training activity is needed and either conduct such training or arrange for agencies having the substantive competence to do so, as well as to coordinate interagency training conducted by and for agencies. It also requires OPM to assist agencies in developing sound programs and financial plans for training and to provide advice, information, and assistance to agencies on planning, programming, budgeting, operating, and evaluating training programs.

In addition to these activities, OPM provides advice and assistance to agencies on training and development programs. OPM’s Training and Executive Development (TED) group (a subcomponent within the Office of Executive Resources) is the primary office that provides policy direction and leadership to agencies in developing plans and strategies to implement training and development programs. It also provides agency guidance to ensure the government’s training and development programs support strategic human capital investments. The TED group provides assistance through two main mechanisms: guidance documents and technical assistance. OPM has developed five guides that agencies can use as references for different aspects of making or reporting training investment decisions in the planning, design, implementation, and evaluation phases of their training and development programs (see table 2).

The TED group also provides technical assistance on agency training investments by facilitating discussions and forums and by providing training to agencies’ human resources (HR) staff. For example, the TED group uses various web-based mechanisms—such as OPM’s website, OPM LISTSERV, the OPM Federal Training and Development web site, and the OPM Federal Training and Development Wiki—to facilitate discussions between agencies on training investments and to share guidance with agencies. In addition to these facilitated discussions and forums, the TED group provides training to federal HR professionals in various areas, including activities that support making training investment decisions. For example, OPM provides training to HR staff through its partnership with the CHCO Council to operate HR University, which OPM officials and the HR University website describe as the federal government’s single “one stop” training resource center for HR professionals. HR University is an effort that is intended to achieve government-wide savings by pooling and sharing training resources and identifying the best HR training across government.
Agencies have the primary responsibility for establishing, operating, maintaining, and evaluating their training programs in support of achieving their mission and goals. OPM regulations specify that agency employee developmental plans and programs should be designed to build or support an agency workforce capable of achieving agency mission and performance goals and facilitating continuous improvement of employee and organizational performance. Furthermore, Executive Order No. 11348 states that agency heads must undertake several activities in support of developing employees, including:

- review periodically, but not less often than annually, the agency’s program to identify training needed to bring about more effective performance at the least possible cost;
- conduct periodic reviews of individual employees’ training needs as related to program objectives;
- conduct research related to training objectives required for program needs;
- plan, program, and evaluate training for both short- and long-range program needs by occupations, organizations, or other appropriate groups;
- establish priorities for needed training, and provide for the use of funds and man-hours in accordance with these priorities;
- establish training facilities and services as needed; and
- extend agency training programs to employees of other agencies and assign employees to interagency training whenever this will result in better training, improved service, or savings to the government.

The CHCO Council, established under the Chief Human Capital Officers Act of 2002, provides assistance to OPM and agencies in accomplishing federal human capital goals. The 25-member CHCO Council is composed of the Director of OPM, who serves as chairman; the Deputy Director for Management of the Office of Management and Budget (OMB), who acts as vice chairman; the CHCOs of the 15 executive departments; and the CHCOs of 8 additional agencies designated by the OPM Director. Additionally, the CHCO Council has an Executive Director from OPM who coordinates and oversees the activities of the council. The CHCO Council supports OPM in leading federal agencies in the strategic management of human capital, providing a forum for senior management officials to exchange HR best practices, and informing the dialogue on civil service reform in order to build and maintain an outstanding Federal workforce for the Nation. According to the CHCO Council’s charter, among other purposes, the council is to:

- advocate and assure a culture of continuous learning and high performance, developing and implementing effective strategies to attract, develop, manage, and retain employees with superior abilities;
- identify human capital best practices and benchmarks, and apply those exemplars to their agencies and the federal government as a whole; and
- provide leadership in identifying and addressing the needs of the federal government’s human capital community, including training and development.

To help CHCOs implement their training goals, many of the 24 Chief Financial Officers Act agencies and smaller agencies established Chief Learning Officers (CLOs). These officers subsequently formed an informal Chief Learning Officers (CLO) Council, which is a community of practice composed of federal CLOs or their equivalents who meet periodically to share best practices and create learning opportunities for agencies and organizations.
The purpose of the CLO Council is to provide a regular forum for CLOs to discuss and collaborate on high-level agency strategic and operational issues affecting the federal learning and workforce development community. These two councils, in partnership with OPM, are to play a key role in assisting agencies in the implementation of federal training and development efforts.

Many CHCOs reported that they are implementing leading practices we identified as being important to making strategic training and development investment decisions, especially regarding the delivery of training. These practices include determining the best mix of decentralized and centralized training, considering government-wide reforms when identifying their training needs, and measuring employee satisfaction with training, among other things. However, many CHCOs reported that they are not implementing the leading practices that would allow them to make more cost-effective training decisions, such as having an agency-wide process for prioritizing training investments so that the most important training needs are addressed first and comparing the merits of different delivery mechanisms (e.g., classroom or computer-based training) to determine what mix of mechanisms will be most efficient and cost-effective. All of these practices are important to ensuring that training investments will be both effective and efficient in equipping federal employees to accomplish their agencies’ goals.

Many CHCOs reported that they are implementing six of the leading practices that we identified as being important to making strategic training and development investment decisions, especially regarding the delivery of training, as shown in table 3. However, regarding the leading practice related to tracking training investments agency-wide, we found that even those who reported that they track training agency-wide did not do so completely or reliably.

All CHCOs reported that their agencies have implemented the practice of determining the best mix of centralized and decentralized training. We have previously reported that, while neither approach fits every situation, agencies need to consciously think about the advantages and disadvantages of using centralized and decentralized approaches, particularly for the design of training and development programs. Centralizing design can enhance consistency of training content and offer potential cost savings. A decentralized approach to training design can enable agencies to tailor training programs to better meet local and organizational unit needs. Agencies with decentralized approaches often embed training representatives within their business lines and field structures to assist in coordination of training efforts, including design and development. Nineteen of the 27 agencies reported that they have both centralized and decentralized training processes, while eight reported having completely decentralized training processes. Most of these agencies reported that their CHCOs or CHCO staff typically make centralized training decisions, while the leadership within the components, subagencies, or offices makes mission-specific training decisions. In the questionnaire responses, CHCOs identified a range of officials who are involved in making training investment decisions at the corporate and sub-agency level, including CHCOs and their staff, chief management officers, chief executive officers, budget officers, chief information officers, and others.
A number of agencies also reported that advisory or oversight boards or training universities within their agency are involved in making training investment decisions. In the four agencies that we selected for review to obtain illustrative examples of how they implemented the training investment practices, the CHCOs or their representatives reported that their agencies decided to have both centralized and decentralized processes because they believe that the components or sub-agencies are more knowledgeable about their mission-specific training needs, while the central human capital staff can add the most value by managing investment decisions for more general training across the department.

VA—which was one of the four agencies that we selected—established a corporate university known as the Veterans Affairs Learning University (VALU) to provide training to all VA employees. VALU provides training primarily in general areas such as leadership, management, and supervision, as well as some career and technical training. VALU offers training to the administrations and staff offices through a request process that is based on the training needs that the administrations and staff offices identify. Those training needs are required to be aligned with VA critical training areas. An Enterprise Training Advisory Board, established in April 2012, also advises the Dean of VALU on the impact of training, potential training development, and methods of delivery. However, another tier of training is also provided within VA’s three administrations—the Veterans Health Administration (VHA), Veterans Benefits Administration (VBA), and National Cemetery Administration. Each administration independently makes training investment decisions and provides training to its employees in mission-specific and some general and mandatory areas. The leadership of each administration makes decisions about the level and prioritization of these training investments. For example, at VHA, the Associate Deputy Under Secretary for Health (or equivalent) and, subsequently, the Deputy Under Secretary for Health assess the training requested by their offices against various criteria, including whether training requests are aligned with and support VA and VHA strategic goals, objectives, strategies, initiatives, and performance improvement goals. During each review, these officials prioritize the requests through a voting process and forward selected training to the next level. Ultimately, the training is sent for approval to the Under Secretary for Health and the VA Chief of Staff. The other three agencies that we met with also reported having both centralized and decentralized processes for making mission-specific training investment decisions. However, at these three agencies, decentralized training decisions most often were not required to be vetted with department-level leadership.

Nearly all CHCOs in our review reported that they have a process for considering government-wide reforms and initiatives when identifying their training needs. We have previously reported that, when planning training and development efforts, agencies should look to the actions of the administration, Congress, and internal and external auditors by considering administration priorities, legislative reforms, and major management challenges that might shape agency priorities and strategies for training and development.
As an administration focuses its efforts on addressing its priorities, agencies can benefit by having mechanisms or processes for considering whether and to what extent these initiatives could be linked to employees' skills and competencies and the related training and development approaches that might be needed. Twenty-three of the 27 CHCOs who responded to our questionnaire reported having such a process in place. For example, 16 of the CHCOs reported that they are already setting investment allocations or training priorities to implement GPRAMA.

At DOE, another agency we selected for review to obtain illustrative examples, officials reported that they have identified the training that the department currently offers and will need to offer to implement GPRAMA. The Secretary issued a memo to DOE employees on GPRAMA's implementation and is holding town hall meetings on improving organizational performance. Another effort that DOE expects to support the implementation of GPRAMA is the Goals-Engagement-Accountability-Results (GEAR) model that OPM and OMB are helping to pilot in DOE and four other federal agencies, which includes efforts to improve employee performance, among other things. According to DOE officials and related documentation, DOE's GEAR implementation plan includes aligning employee performance management with organizational performance management and developing training to support these goals, which, along with initiating knowledge-sharing activities, will promote improvement of DOE's organizational performance.

Beginning in late May 2011, a workgroup of the National Council on Federal Labor-Management Relations partnered with members of the CHCO Council to develop a new model of employee performance management, referred to as GEAR. GEAR focuses on articulating a high-performance culture, aligning employee performance management with organizational performance management, implementing accountability at all levels, and creating a culture of engagement. OPM is piloting GEAR at five agencies: the Department of Housing and Urban Development, DOE, the Coast Guard, OPM, and VA.

Many CHCOs in our review reported having criteria for determining whether to design training in-house or obtain it externally. Training can be provided by the agency itself, another government agency, a school, a manufacturer, a professional association, or other competent persons or groups in or outside of government. To aid in making these decisions, agencies should try to develop clear criteria for determining when to contract for training and development services. We have previously stated that factors agencies should consider in these decisions include the capability of in-house staff to develop and implement the training; the prior experience, capability, and stability of possible providers in the marketplace; and agency limitations on cost, time, and resources. Of the 27 CHCOs included in our questionnaire, 15 reported that they have criteria for determining whether to design training and development programs in-house or obtain these services from a contractor or other external source.

One agency that we selected for review to obtain illustrative examples was DOI, which reported implementing this practice; however, the extent to which this decision-making process is implemented agency-wide is unclear. In its questionnaire response, DOI's CHCO reported that DOI's Office of Strategic Employee and Organizational Development has responsibility for offering corporate training through DOI's university. This office decides whether to "make or buy" departmentwide training.
When we met with DOI officials in the course of our review, they explained that although almost all courses are delivered by vendors because DOI has no internal trainers, the department does have a small cadre of instructional designers who can develop some e-learning courses. Decisions on whether to develop courses internally are based on various criteria, including whether a course can be developed quickly, does not require a significant amount of content development, and can be supported by subject matter experts during course development. Although this department-level process is useful, DOI officials did not know whether the bureaus within the department consistently use a "make or buy" approach. They reported that the larger bureaus have some capacity for in-house development, while the smaller bureaus do not.

Many CHCOs reported that their agencies track training investments agency-wide, although most of the CHCOs who reported that they do not were leaders of the agencies with the largest workforces. We have previously reported that to obtain a comprehensive determination of the costs of these initiatives, agencies need to find ways around barriers that prevent them from fully and accurately identifying the expenses associated with all components of their training and development processes. These costs can include expenses for instructional development; participant and instructor attendance; facility, material, and equipment costs; and travel and per diem expenses. To track the cost and delivery of training and development programs, agencies need credible and reliable data from learning management systems (software applications that automate the administration, tracking, and reporting of training events) as well as from accounting, financial, and performance reporting systems. To the extent possible, agencies also need to ensure data consistency across the organization (such as having data elements pulled from various systems represent the same type of information), because variation in the methods used to collect data can greatly affect the analysis of uniform, quality data on the cost and delivery of training and development programs.

In response to our questionnaire, 16 CHCOs reported that they track training investments agency-wide. The remaining CHCOs reported that they do not, and stated that they could not provide reliable training data to OPM, which requests these data to address its government-wide training responsibilities. Under OPM regulations, agencies are required to maintain data on training activities and expenditures and submit these data to OPM.

As an example of the challenges of tracking training investments, DHS reported that it is unable to track or better leverage training investments across the department because of the nine major, incompatible learning management systems it uses to track training throughout the agency. We highlighted these same challenges in a 2005 report on DHS training, noting that the lack of common management information systems and the absence of commonly understood training terminology across components, among other things, may impede the agency's ability to achieve its training goals. According to more recent documentation on the limitations of DHS's tracking systems, the components' disparate systems currently prevent them from sharing useful training information across the department, effectively aggregating training data agency-wide, and reporting complete training investment information to OPM.
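To illustrate the reconciliation problem that such disparate systems create, consider the following minimal sketch in Python. All component names, activity codes, and dollar figures are hypothetical and invented for illustration; the sketch simply shows that records coded differently by two component systems must be mapped to a common scheme before costs can be totaled agency-wide.

    # Minimal sketch (hypothetical systems, codes, and figures): aggregating
    # training costs across two component systems that code the same kinds of
    # activities differently. A shared crosswalk maps each local code to a
    # common category so costs can be totaled agency-wide.

    component_a = [
        {"activity": "CONF", "cost": 1200.00},  # local code "CONF" = conference
        {"activity": "CLS", "cost": 450.00},    # local code "CLS" = classroom course
    ]
    component_b = [
        {"type": "conference_attendance", "tuition": 900.00, "travel": 300.00},
        {"type": "elearning", "tuition": 250.00, "travel": 0.00},
    ]

    CROSSWALK = {
        "CONF": "conference",
        "CLS": "classroom",
        "conference_attendance": "conference",
        "elearning": "online",
    }

    def normalize(records, code_field, cost_fields):
        # Translate each record's local code to a common category and sum
        # all of its cost fields into a single cost figure.
        for record in records:
            yield {
                "category": CROSSWALK.get(record[code_field], "uncategorized"),
                "cost": sum(record.get(field, 0.0) for field in cost_fields),
            }

    totals = {}
    for record in list(normalize(component_a, "activity", ["cost"])) + \
                  list(normalize(component_b, "type", ["tuition", "travel"])):
        totals[record["category"]] = totals.get(record["category"], 0.0) + record["cost"]

    print(totals)  # {'conference': 2400.0, 'classroom': 450.0, 'online': 250.0}

Building and maintaining such a crosswalk is precisely the step that is missing when components lack a common definition of what counts as training, as the examples that follow illustrate.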
To address these system limitations, DHS is seeking to purchase a single learning management system. However, even when agencies have a single training information system, components may not use it consistently to track training investments because of inconsistent coding schemes for similar training activities. For example, even though DOI has a single system for tracking training information, officials reported that their human capital office must rely on employees or data stewards to input training data, and some cost data may not be included (such as training travel costs) or certain types of training may not be entered (such as conferences). Such costs are sometimes paid directly by an employee's immediate office using a government credit card and are not tracked as training. In addition, learning management system training data are often not reconciled with DOI's financial expenditure data because, until recently, DOI's financial systems did not capture education tuition and training fees, and they are still unable to track training travel costs. Therefore, DOI's cost data are most likely incomplete.

Similarly, officials from DOE, DHS, and VA reported that they are aware of some inconsistencies in whether some types of training, such as conferences, are entered into their learning management systems. They also stated that there are inconsistencies in how agency components capture and code workforce training into their systems because the components lack a common definition of what types of activities should be considered training or have varying coding schemes or tools for capturing the cost. For example, some organizations use a procurement request or obtain a contractor to deliver training and do not document these costs using the standard government form for tracking training data or in their learning management systems. In other cases, training investment data are captured using different coding in various learning management systems and financial systems. Some officials reported that reconciling these data would be difficult. For example, DOE's chief financial officer reported that it takes a couple of months to gather training investment data from DOE's various systems, partly because the systems have inconsistent coding for these data. DOE officials reported that changing financial codes to reconcile training data would be time consuming and expensive because their financial systems are 20 years old. However, after years of highlighting this challenge, they are seeking approval to make such changes. Officials from the four agencies generally reported that, as a result of all of these factors, there is no overarching awareness or oversight of how much is spent on training investments and for which activities.

Nearly all CHCOs reported having a formal process to evaluate employee satisfaction with training, but fewer had processes to evaluate the impact of training on employee or agency performance. We have previously reported that it is increasingly important for agencies to be able to evaluate their training and development programs and demonstrate how these efforts help develop employees and improve the agencies' performance, because doing so can aid decision makers in managing scarce resources and provide credible information on how training and development programs have affected organizational performance. To do so, agencies need to develop evaluation processes that systematically track the cost and delivery of training and development efforts and assess the benefits of these efforts.
Training and development programs can be assessed by measuring (1) participant reaction to the training program, (2) changes in employee behavior or performance, and (3) the impact of the training on program or organizational results, which may include a return-on-investment assessment that compares training costs to derived benefits. Some of these methods can help provide better value by identifying areas for continuous improvement in training programs. We consider the processes for conducting these evaluations to be formal when they are systematically conducted throughout the agency, have established guidelines and criteria that govern how they are implemented, and are documented. However, CHCOs may also have other criteria for determining what is considered a formal process, based on their agencies' environments.

We asked CHCOs about their formal processes for conducting the three levels of evaluation listed above, which are the common types of evaluations. Many CHCOs reported routinely implementing the first two, but not the third (which we discuss later in this report). Twenty-five of the 27 CHCOs included in our questionnaire reported that they measure employee satisfaction, and a little more than half reported that they measure improvement in employee performance.

Officials from the four agencies that we interviewed reported that they all assess employees' reactions to training and sometimes assess changes in employee performance. For example, officials from DOE reported that they evaluate all the training that they offer by surveying participants' reactions to the training—which can include their feedback on the effectiveness of the instructor, the topics, the presentation style, the schedule, audiovisuals, and other subjects—and use this information to make revisions to the program courses. Documents that we reviewed on training evaluations identified updates or revisions made to course materials and tests to improve their effectiveness, based on training feedback and policy updates. As an example of evaluating the impact of training on employee performance, DOI officials stated that, while they do not have an agency-wide process, some of their organizations (such as those within the Bureau of Land Management and the National Park Service) use an online evaluation tool to assess the impact of training courses on employees' abilities to perform tasks, about 6 weeks after a course has been completed. According to these officials, the process is not currently used department-wide, but the agency is looking into how it may be able to do so starting in fiscal year 2013. According to DOI's CLO, establishing this link between training, employee competencies, and mission-critical occupation work is an area that DOI is targeting for improvement.

Nearly all CHCOs reported that they compare their training investments, approaches, or outcomes with those of other organizations. We have previously reported that there are many ways to help improve performance, so it is important for agencies to continually look to others to identify innovative approaches that may relate to their training and development efforts. Within the context of its unique environment and situation, an agency can compare its investments, approaches, and outcomes with those of public and private organizations that are undertaking notably innovative and effective training and development efforts. In doing so, agencies can uncover weaknesses in their training and development strategies that need improvement and identify new ideas, mechanisms, and metrics that they could employ.
Twenty-four of the 27 CHCOs included in our questionnaire reported that they compare training investments, methods, or outcomes with those of other organizations to identify innovative approaches or lessons learned. Officials from the two agencies we asked to provide examples of this practice described it as occurring informally through interactions with other CLO Council members. For example, DHS officials that we met with reported that they meet with other agencies to share best practices and recommend vendors during breaks or after the CLO Council meetings. The officials said that examples of sharing ideas on new training programs included recent discussions by OPM and agencies on lessons learned from the GEAR pilot program and on new courses for developing supervisors. While two agencies reported having informal interactions with other agencies to share and compare training information, none of the agencies that we met with described efforts to benchmark their practices against those of other agencies or other relevant entities. We have previously reported that benchmarking can help agencies determine who is the very best, who sets the standard, and what that standard is.

Many CHCOs reported that they are not implementing the leading practices that would allow them to make more cost-effective training decisions, as shown in table 4. In particular, many CHCOs included in our review reported that they have not implemented an agency-wide process for setting a level of training investment and prioritizing training. We have previously stated that, to determine the best ways to leverage investments and establish priorities, agencies can develop an annual training plan that targets developmental areas of greatest need and that outlines the most cost-effective training approaches to address those needs. When assessing investment opportunities for its training plan, an agency ought to consider the competing demands confronting the agency, the limited resources available, and how those demands can best be met with available resources. If training is identified as a solution to improve agency performance, agencies can prioritize training using criteria such as expected demand for the investment from internal sources, availability of resources to support the effort, potential for increased revenue, and risk of unfavorable consequences if investments are not made. Given current budget constraints, agencies may also want to prioritize training that has the potential to improve their efficiency. Developing a business case for training and development that includes this information sets forth the expected costs and benefits of the investments and provides decision makers with essential information they need to allocate necessary resources. Furthermore, under Executive Order No. 11348 and OPM regulations, agencies are to establish training priorities, although agencies are not specifically instructed to establish an agency-wide process for doing so.

Of the 27 CHCOs who responded to our questionnaire, 16 reported that they do not set a level of investment agency-wide and 15 reported that they do not prioritize training agency-wide. In our meetings with officials from DOE, DOI, DHS, and VA, as well as the CLO Council, agency officials cited several reasons why they do not establish a level of training investment or prioritize training agency-wide. Some of the reasons were described as purposeful decisions not to do so, and others were described as limitations in their ability to do so.
First, CHCOs elect to establish and prioritize training investments for centralized training and are often not involved in the investment decisions made for specific training within the components or offices, as we previously described. In addition, large components or sub-agencies often have autonomy over their training budgets because the budgets are appropriated directly to them from Congress. As a result, CHCOs and their staff are often unaware of how much these components spend on training and do not have input into these decisions; component and sub-agency heads often act autonomously and are not required to communicate with the CHCO about these decisions. Further, because of limitations in internal tracking systems for training (which we discussed earlier in this report), CHCOs do not have information on all of the training that is completed in their agency and the related costs.

Officials from various agencies involved in the CLO Council, and three of the four agencies that we individually met with, reported having a lack of visibility into the prioritization and level of training investments throughout their agencies, which they reported limits their ability to better leverage, and reduce duplication in, training investments across their agencies. Officials in the agencies that we met with reported that, although they believe that their components or organizational elements are more capable of making training decisions related to their specific missions, the lack of coordination and communication on training investments and priorities has led to some duplicative and ineffective training investments in their departments.

For example, senior human capital officials at DOI reported that the department's leadership, including the CHCO, are not aware of the department's overall training investments agency-wide and have no formalized mechanism for ensuring accountability for how the funds are used. They are aware that bureaus are buying duplicative training or offering similar training classes of varying effectiveness, which is resulting in inefficient training investments. For example, one bureau recently and independently contracted with an external provider for mid-level manager leadership training that was already offered at DOI's university and paid $50,000 more than DOI University charges. According to officials, this is a common problem. In addition to duplicating training courses, in some cases bureaus are duplicating training facilities. For example, a regional director of a DOI bureau built a training classroom with a computer lab, despite having access to existing computer labs within the complex where he worked and at DOI facilities a few miles away. Further, according to the officials, because it is common practice for each bureau to independently secure training, there is no consistency, little quality control, and no maximization of procurement tools (such as blanket purchase agreements) across DOI.

To address these challenges, DOI formed a one-time departmentwide task force known as the Department Innovation and Efficiency Team for Training. This task force was expected to identify potential duplication in training, funds expended on training delivery, and the cost of travel and facilities, among other things. In July 2012, the committee made recommendations to the CLO on opportunities to generate efficiencies and savings in training operations.
DOI’s Office of Strategic Employee and Organization Development is developing action plans to address the committee’s recommendations. Officials from DHS also reported experiencing similar challenges with duplicative or ineffective training investments in their agencies. Some of these challenges are long standing. For example, seven years ago we reported that DHS’s two-tier training process (component and departmentwide) and lack of communication throughout the department on the availability of some training programs and resources were challenges that could impede its ability to achieve departmental training goals and efficiencies. DHS is still taking steps to address this on-going challenge. In June 2005, DHS formally chartered a Training Leaders Council (TLC) and recently revised its charter in June 2011. The TLC is made up of senior training leaders from each component, and representatives from headquarters to serves as an advisory and collaborative community of practice to promote effective and efficient training, education, and professional development opportunities to DHS employees. According to DHS’ Human Capital leaders, while this group does not set or prioritize training investments, it provides a forum for exchanging useful information about common challenges and training practices, which helps in making more efficient use of existing agency resources. DHS also established the Human Resource Information Technology Executive Steering Committee, made up of management chiefs and HR and information technology leadership across DHS in 2010, and included TLC leadership as members in July 2011. This group makes some funding decisions related to some training investments, such as their recent decision to fund the purchase of a single learning management system for the entire department. However, according to DHS officials, because DHS has multiple congressional committees and subcommittees from which the components receive funding and training direction, coordinating training investments remains challenging. Many CHCOs that responded to our questionnaire reported that they do not compare the merits of different training delivery mechanisms. Our past research and that of others has shown that agencies should deliberatively consider the options for delivering training and consider essential issues, such as the goals and objectives for the training, the type of audience intended for the training, the nature of the training content, the availability of technology and resources, and the timing for delivering the training. Agencies can use a variety of instructional approaches to achieve learning—in the classroom, through distance learning, or in the workplace. When warranted, agencies should also consider blended learning that combines different teaching methods (e.g. Web-based and instructor-led) within the same training effort and provide trainees with the flexibilities to choose among different training delivery methods while leveraging resources in the most efficient way possible. When assessing delivery options, agencies can try to achieve economies of scale and avoid duplication of effort by taking advantage of existing course content or training, such as sharable on-line courses or multiagency training programs. However, In the responses to our questionnaire, 16 of the 27 CHCOs reported that they do not compare the merits of the different training delivery mechanisms in their agency. 
In our meetings with DHS and VA to obtain illustrative examples, DHS officials reported that their current learning management systems do not allow them to mine information on the different delivery mechanisms used throughout the department or to assess and compare their effectiveness. According to the officials, they could obtain this information manually, but doing so would be very labor intensive, so it is not done. In contrast, VA officials informed us that they are assessing different delivery mechanisms for training and conferences offered by VALU because they recognize that opportunities exist to offer more efficient mechanisms (such as e-learning). Moreover, VHA, which has the largest workforce in the department, builds considerations of which delivery methods will be most effective and efficient into its initial investment decision-making process, and it subsequently evaluates employee satisfaction with the various delivery methods to inform future investment decisions. Without processes such as these, agencies that do not compare the merits of different training delivery mechanisms have limited information for determining what mix of methods provides the most efficient and effective delivery of federal training.

Most CHCOs reported that their agencies do not have a routine formal process for evaluating the impact of training on agency performance. As we previously mentioned, it is increasingly important for agencies to be able to evaluate their training and development programs and demonstrate how these efforts help to improve the agencies' performance, and to assist them in making more effective decisions about how to allocate scarce resources. Agencies are required by statute and OPM implementing regulations to evaluate how well training programs contribute to mission accomplishment and meet organizational performance goals, and we have identified having a formal process for this evaluation as a leading practice.

However, there are some understandable limitations to regularly and formally implementing this practice. For example, some agency officials that we met with reported that the cost and time required to evaluate training's impact on agency performance goals can be significant. As a result, they conduct this level of review only for training that they identify as highly important to key areas of their mission. We have previously reported that not all training and development programs require, or are suitable for, higher levels of evaluation. For example, it may be ineffective to try to measure the impact of training in an area that is still undergoing other significant changes that could affect relevant performance goals, such as changes in related policy and management structure. We recognize that higher levels of evaluation (such as evaluating the impact on organizational performance or return on investment) can be challenging to conduct because of the difficulty and costs associated with data collection and the complexity of directly linking training and development programs to improved individual and organizational performance. Factors to consider when deciding the appropriate level of evaluation include the estimated costs of the training effort, the size of the training audience, management interest, program visibility, and the anticipated "life span" of the effort.
Each agency will need to consider the feasibility and cost-effectiveness of conducting these in-depth evaluations, along with budgetary and staffing circumstances that may limit the agency's ability to complete such evaluations. Given the current budget constraints that agencies face, making thoughtful tradeoffs about how to target costly evaluation reviews is a sensible approach. Still, while it is important to prioritize reviews of training, 8 of the 27 CHCOs that responded to our questionnaire reported that they do not have a formal process for evaluating the impact of their training on their agency's performance.

For example, the CHCO at DOE reported in our questionnaire that DOE does not implement this practice. We met with the CLO from DOE, who informed us that DOE does not have a formal process for implementing this practice because the agency does not have a systematic, documented approach for conducting this level of review. Moreover, evaluation data are not collected in a way that allows them to be aggregated into a comprehensive assessment of training's impact on the agency's overall mission. For example, different organizations within DOE conduct reviews to assess the impact of training on their goals, but the results are not captured in an automated system, and the methodologies that DOE organizations use to conduct these reviews vary. As an illustration, DOE organizations that work with nuclear material evaluate the technical training that they provide to their employees against required certification and mission goals. However, the organizations conduct these evaluations differently, and because of these varied methodologies and the lack of automated results data, it is difficult to aggregate the reviews into an assessment of how training has affected DOE's overall training and mission goals. Similarly, the CLO's office evaluates cross-cutting training for employee satisfaction and employee performance at DOE, but it does not effectively or consistently evaluate the training's impact on agency goals. According to the CLO, to assist them in developing a more systematic formal process, they are participating in OPM training on developing training evaluations and in the GEAR pilot program, which is intended to better link employee performance to organizational goals.

In contrast, VA's training review processes illustrate that agencies that have a formal process for assessing the impact of training on their mission and performance goals can use it to make better training investment decisions. VA recently assessed the return on investment of its corporate training, and the department's administrations recently evaluated the impact of mission-specific training on their performance goals. In January 2012, VA evaluated the monetary and mission-related benefits of training that was implemented under its Human Capital Investment Plan. According to the return-on-investment assessment and report developed by VALU and VHA's National Center for Organization Development, VA's two-year, $577 million investment in training and development under the plan has resulted in $604 million in savings tied to reductions in costly VA turnover, fewer overdue accounts receivable, and fewer equal employment opportunity complaints. The report also states that VA has gained non-financial returns, such as faster benefits processing, increased veteran hiring, and improved patient satisfaction.
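The arithmetic behind a return-on-investment figure like VA's is straightforward once costs and benefits have been quantified. The sketch below applies a standard ROI formula to the figures cited above; it is illustrative only, and VA's actual methodology for attributing the $604 million in savings to the training may differ.

    # Minimal sketch: a standard return-on-investment calculation applied to
    # the figures cited above. The hard part in practice is attributing the
    # benefits to the training, not the arithmetic; VA's actual methodology
    # may differ from this common formula, ROI = (benefits - costs) / costs.

    training_cost = 577_000_000  # two-year training investment
    benefits = 604_000_000       # savings attributed to the investment

    net_benefit = benefits - training_cost
    roi = net_benefit / training_cost
    print(f"net benefit: ${net_benefit:,}")  # net benefit: $27,000,000
    print(f"ROI: {roi:.1%}")                 # ROI: 4.7%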
According to VALU officials, they have used details in the return-on-investment report, along with other factors, to make decisions about future training and development investments. Similarly, for its mission-specific training, VHA recently conducted an in-depth review of training provided to Patient Aligned Care Teams to improve their collaborative delivery of care to patients. The evaluation assessed the training participants' satisfaction, skill acquisition, and application on the job, as well as the training's impact on VHA's business. The assessment ultimately determined that the training was successful in producing the desired behavior changes in the workplace and that key organizational results were influenced by the training, but it also identified some improvements that VHA could make.

OPM guidance and assistance to agencies on federal training investments are in line with five of the eight leading practices, but OPM lacks guidance and assistance in some areas that agencies find challenging, as shown in table 5. OPM's five primary guidance documents that relate to making training investment decisions are the Guide to Human Resources Reporting, the Training Evaluation Field Guide, the draft Training Policy Handbook, the Guide for Collection and Management of Training Information, and the Guide to Strategically Planning Training and Measuring Results. (See table 2 for a brief description of these guides.) In addition, OPM provides technical assistance to agencies via facilitated forums, discussions, and training.

Our review of OPM's guidance documents and assistance shows that OPM has provided some technical assistance to agencies on considering government-wide reforms when planning training, although OPM does not have guidance documents that provide specific advice on this topic. For example, because of requirements in GPRAMA, OPM is providing assistance to agencies in considering this government-wide reform when planning their training and development programs. GPRAMA required OPM to identify the competencies needed to perform three functions: developing goals, evaluating programs, and analyzing and using performance information for the purpose of improving government efficiency and effectiveness. OPM, working with subject matter experts, developed a competency model for the three new roles required by GPRAMA—performance improvement officer, performance improvement staff, and goal leader. Earlier this year, OPM advised agencies that it would provide guidance on how to incorporate the skills and competencies into these position descriptions, as specified in GPRAMA. The Director of OPM stated that the agency would work with the CLOs to incorporate the key skills and competencies into agency training programs.

OPM has begun providing this assistance to agencies by facilitating sessions for agencies to develop training requirements for implementing the new positions and roles required by GPRAMA. For example, OPM worked with OMB to gather information on existing training, provide learning opportunities, and consolidate new and existing training courses and materials to support this effort. Using this information, OPM and OMB led two working group meetings with agencies to discuss GPRAMA training needs and next steps. In a working group meeting in February 2012, OPM and agencies discussed which competencies identified in GPRAMA could be improved readily through training.
OPM provided participants with a chart, developed from a 2011 OPM and Merit Systems Protection Board trainability study, showing which competencies for the three new roles required by GPRAMA were highly trainable and which were less trainable. After the discussion, OPM and participants identified the most critical and manageable next steps, including creating a common competency assessment tool to assess competency gaps within agencies; creating a course on writing results-oriented goals and standards while also gathering existing training; creating a working group to assess needs and develop a solution to satisfy the training requirement for the Organizational Performance Analysis, Planning and Evaluating, and Performance Measurement competencies, to collect relevant case studies, and to identify opportunities to leverage agency resources; identifying existing subject matter experts in the agencies and creating forums, workshops, and training sessions where they can share their expertise and possibly engage in peer-to-peer coaching; creating other working groups where necessary; and considering the development of a career path after OPM's classification study.

In addition to this assistance, although it is not specific to government-wide reforms, OPM's Training Policy Handbook advises agencies to conduct a training needs assessment that includes an evaluation of organizational needs, which should take into consideration changing demographics, political trends, technology, and the economy.

OPM's TED group advises agencies, in its Training Policy Handbook and its 2000 Guide to Strategically Planning Training and Measuring Results, to use multiple delivery methods, or combine them, when providing training to employees. For example, the Training Policy Handbook maintains that agencies should decide which delivery option is best to achieve the instructional goals of the training, highlighting that some methods are more effective for certain courses; it notes, for instance, that a performance management course may include role-play scenarios that are not well suited to an e-learning course. Further, the guide states that agencies need to develop training delivery mechanisms that effectively limit unnecessary overlap and duplication of efforts. Similarly, we have previously reported that when identifying the most effective and efficient delivery mechanism, agencies need to consider essential issues such as the goals and objectives for the training, the type of audience for which the training is intended, the nature of the training content, the availability of technology and resources, and the timing for delivering the training.

Agency officials who have implemented this practice reported seeing positive results. For example, VHA officials that we met with, as well as agencies that publicly discussed their efforts to assess different delivery mechanisms at a March 2012 Partnership for Public Service forum ("Going Virtual: Maximizing the Return on Investment of Online Training"), reported significant savings and increases in the effectiveness of their training as a result of assessing and changing their training delivery mechanisms. Specifically, VHA officials reported achieving several financial and non-financial benefits as a result of moving one of its leadership training programs from in-person meetings and audio and video conferencing to online delivery.
According to a VHA assessment report, the benefits included a consistent curriculum across eight medical centers in three states; easier accessibility to course materials and job aids; immediate access to feedback on courses from learners; easier reproduction of courses for instructors; a return on investment of 140 percent since implementation; and $116,000 saved in travel costs, facilitation, and facilities, among other things.

OPM guidance informs agencies that they should compare the merits of different delivery mechanisms, but the guidance does not include methodologies for how to do so. Officials from DHS—an agency that reported that it does not implement this practice—stated that the tools provided by OPM could be strengthened to assist them in comparing training delivery mechanisms. For example, DHS officials reported that they have difficulty implementing this practice partly because their components do not track comparative data on the different delivery mechanisms. According to the DHS officials, the standard government form for tracking training data (Standard Form-182) does have a category for tracking training delivery type, but filling out this block is not mandatory and it is often left blank. The DHS officials reported that an OPM requirement to capture these data would improve their ability to gather the information needed from DHS components to effectively implement this practice. As noted earlier, 16 of the 27 CHCOs included in our review reported that they do not implement this practice, which indicates that they too may benefit from additional guidance and tools on ways to do so.

OPM's TED group provides guidance and assistance to agencies on tracking and reporting the cost and delivery of training and development programs in four of its five guides. For example, OPM's 2000 Guide to Strategically Planning Training and Measuring Results advises agencies to calculate the expenses associated with designing, developing, implementing, and evaluating their training programs and provides a list of the most common types of training costs. OPM's Guide for Collection and Management of Training Information also outlines agency requirements to track various types of training data and provides a list of several data sources (e.g., Standard Form-182, agency personnel records, procurement documents, financial and performance records, and training evaluation forms) that agencies could use to collect this information. Similarly, the Training Policy Handbook also incorporates guidance on tracking the cost and delivery of agencies' training and development. In the more recent 2012 Guide to Human Resources Reporting, OPM outlines requirements for agencies to track training data and describes the requirement to use certain standard tracking forms, such as the standard "Authorization, Agreement, and Certification of Training" form (Standard Form-182). The guide also instructs agencies to submit all training information included in this form to OPM's Enterprise Human Resources Integration (EHRI) database.

Although OPM provides several guidance documents and assistance on tracking the cost and delivery of training, we found that this practice continues to be a challenge for many agencies to implement.
Agency officials that we met with reported that they could benefit from additional assistance from OPM in developing a common definition of what should be tracked as training, developing policies to strengthen the use of Standard Form-182 to document and report all training costs, and encouraging agencies, through guidance and technical assistance, to routinely report training cost data to agency learning management systems.

In addition to providing guidance on tracking data, OPM facilitates the collection of federal training data government-wide. Executive Order No. 11348 requires OPM to develop, install, and maintain a system to provide the training data needed to carry out its own functions and to provide staff assistance to the President. OPM's EHRI is the government-wide repository for these training data, and agencies have been required since 2006 to report training data to OPM monthly via this system. However, OPM officials consider the data to be unreliable because they are incomplete. Therefore, OPM officials have not used the data to inform their training guidance and assistance to agencies, to counsel heads of agencies and other agency officials regarding federal training needs or investments, or to assist agencies in developing sound programs and financial plans for training programs.

According to OPM officials and documents, OPM is to assess EHRI training data for technical compliance and data quality. Technical compliance is the testing and approval of agency systems for data quality (i.e., correct formatting and adherence to edit rules). Once systems are technically compliant, agencies are required to send monthly data feeds of completed training events to OPM. Once agencies are reporting these data for all major components, all employees, all types of training (e.g., conferences, online, classroom), and training cost data, OPM is to evaluate data quality to determine whether the data present an accurate picture of all training in the agency. However, OPM officials told us that they have not assessed the quality of the data or developed a report on their reliability because no agency is sending information on all training events. According to OPM officials, when agencies request assistance or when OPM finds that an agency has been grossly delinquent in providing data, OPM officials will inquire further and offer assistance to the agencies. However, they typically do not document reliability issues or the agreed-upon action plans to address the problems. The officials agreed that this is a problem, but stated that they would need more staff resources to provide this level of assistance and oversight.

We believe that the current reliability of agency training investment data is unknown because OPM officials have not internally assessed improvements in the completeness of the data over the last 3 years or the quality of the data in the 6 years that agencies have been required to submit them. The two internal reviews of training data that OPM conducted were in 2008 and September 2009. In the 2009 review, OPM reported that there was an increase from fiscal year 2008 in the amount of training data being reported by agencies, but that the quality of the data was still less than what was necessary to provide an accurate picture of federal training investments. According to the 2009 report, over half of all agencies were reporting data for the entire agency and 86 percent were reporting on a regular basis, but only 7 percent were reporting cost data.
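Statistics like those in the 2009 report rest on simple completeness checks over agency submissions. The sketch below, which uses hypothetical records and field names, shows the general form such a check might take; it is not OPM's actual EHRI validation logic.

    # Minimal sketch (hypothetical records and field names): a completeness
    # check of the kind that underlies figures such as "only 7 percent were
    # reporting cost data." This is not OPM's actual EHRI validation logic.

    submissions = [
        {"agency": "Agency A", "events_reported": 1200, "reports_costs": True},
        {"agency": "Agency B", "events_reported": 0, "reports_costs": False},
        {"agency": "Agency C", "events_reported": 350, "reports_costs": False},
    ]

    reporting = [s for s in submissions if s["events_reported"] > 0]
    with_costs = [s for s in reporting if s["reports_costs"]]

    print(f"{len(reporting)} of {len(submissions)} agencies reporting events")
    print(f"{len(with_costs)} of {len(submissions)} agencies reporting cost data")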
The 2009 report identified several of the same reasons that we previously described as limitations on agencies' reporting of training investment data. Although the report stated that OPM would continue to work with agencies to assess the quality and validity of training investment data and determine whether agencies are reporting all training events, OPM officials informed us that they have not assessed the quality of the data because the data are not 100 percent complete. While it is important to have complete data, we do not believe that having incomplete data necessarily prevents OPM from assessing the overall reliability of the data, provided the data meet standards for sufficiency. In our guidance on assessing the reliability of computer-based data, we have stated that agencies can assess data if they are sufficiently complete. Data are sufficiently reliable when testing and reviews of the existing information provide assurance that (1) the likelihood of significant errors or incompleteness is minimal and (2) the use of the data would not lead to an incorrect or unintentional message. Conversely, we consider data not to be sufficiently reliable when there are significant errors or incompleteness in some or all of the key data elements and when using the data would probably lead to an incorrect or unintentional message. Because OPM has not conducted an assessment of improvements in agency training data in 3 years, it is unknown whether the data are currently complete enough to test other aspects of their quality and reliability. According to the officials, although they have not conducted a formal review of the data, they can visually inspect the EHRI database and tell that the data are significantly more complete than in past years.

OPM also previously identified several steps that its officials would take to assist agencies in improving their data, but it has not yet implemented all of them. According to OPM's 2009 report assessing EHRI data, OPM planned to assist agencies in improving training investment data by (1) working with agencies to fully report all training investment data, including costs; (2) working with agencies to decrease errors in reporting; and (3) providing individual agencies with summary reports of the data that they submitted to OPM for their review and verification. We found that OPM has initiated some related efforts but has not fully addressed two of these issues. To decrease errors in reporting, OPM officials and EHRI reports show that OPM has worked with agencies to identify technical errors in their training data submissions. However, to improve reporting on cost data—which is currently a challenge for agencies—OPM held one focus group with agencies in 2007, which it used to update its guidance on tracking training data in 2008. OPM also has not followed through on plans to annually provide agencies with reports of their training data for verification and correction. According to OPM, the purpose of the training data report is to (1) inform agencies of the training data OPM has received, (2) offer them the opportunity to work closely with OPM in correcting any identified deficiencies, and (3) note the progress they have made in addressing OPM's training reporting requirement. OPM officials said that they have sent one report to agencies (in fiscal year 2010) summarizing their training data and requesting verification; this report was provided in response to expectations that the data would be posted on the government-wide website Data.gov.
In our review of examples of agency responses, we found that agencies identified important discrepancies in their data, including significant underestimates of the amounts spent on training, and reported that they would take steps to correct the data. However, OPM officials informed us that they do not have a process for documenting whether agencies have taken these steps. Although OPM has provided only one summary of EHRI data to agencies, agency officials that we met with stated that they could benefit from using this type of data summary to improve their training data. Further, OPM officials stated that using these summaries to improve EHRI data could help agencies measure the return on investment of their training and assist agencies' stakeholders in making more informed decisions on the best use of training dollars. During our review, OPM officials reported that they had begun developing a report to send to agencies with fiscal year 2011 training data.

The TED group provides guidance and assistance to agencies in evaluating training programs through three of its guides and in workshops. OPM's Training Evaluation Field Guide is the primary guide through which OPM advises agencies on how to evaluate training. The guide instructs agencies to define the results they want to achieve from training and to align training with the agency mission and goals. Further, the guide discusses useful models for evaluating training and describes return on expectations as the ultimate indicator for demonstrating training value to stakeholders. The guide also provides information on evaluation requirements outlined in laws and regulations and provides practical instruction by identifying common challenges and solutions related to identifying the most cost-effective methods to train and develop employees. In addition to this guidance, OPM's Training Policy Handbook instructs agencies to evaluate all training to determine whether or not it provides meaningful contributions to agency results. Similarly, in its Guide for Collection and Management of Training Information, OPM highlights the importance of collecting accurate, comprehensive training information and making it available to decision makers and others who have a vested interest in the training activities of the federal government. This guide discusses the two basic types of performance measures for measuring training and development program effectiveness: process indicators and outcome indicators.

In addition to its guides, OPM has made evaluation tools available to agencies on its website and has held workshops on training evaluation in order to help agencies identify and share best practices on evaluating training. For example, OPM's website contains a Training Evaluation Tool that describes the levels of training evaluation and provides agencies with the evaluation questions to be answered at each of the four levels and the types of information typically collected. As previously mentioned, some agency officials reported that it is difficult to conduct these reviews because their cost and time demands can be significant. As a result, some agencies conduct them only for the most critical training, and others reported that they do not have a formal process for conducting these reviews at all. While we agree that it is appropriate to target costly evaluations to the most important training, those who do not implement this practice at all could benefit from using OPM's comprehensive guidance and assistance on training evaluations.
OPM does not have a guidance document that advises agencies on how to compare training investment methods and outcomes with those of other agencies, but it provides some support to agencies in this area through technical assistance. For example, the TED group uses various web-based mechanisms, such as OPM's website, the OPM LISTSERV, the OPM Federal Training and Development website, and the OPM Federal Training and Development Wiki, to facilitate discussions between agencies on training investments. We observed the exchange and sharing of information among agencies through OPM's LISTSERV, which is used by 950 employees from various federal agencies to share training practices and advice. At times, agencies requested and shared information with each other on the most effective or efficient ways to implement specific training programs or requested models against which to compare their activities. Similarly, OPM's wiki page contains examples and models of training programs for others to use when developing their training programs. According to OPM officials, the TED group also provides best practice forums on topics when they believe agencies need additional assistance. For example, OPM officials reported that they have held forums with agencies on the Training Evaluation Field Guide to share best practices and tools among agencies, and OPM is working on a similar forum for developing supervisory training.

OPM does not have guidance and assistance for three leading training investment practices, two of which are areas in which agencies reported experiencing challenges. We examined the five guidance documents that OPM provides related to making training investment decisions and documentation on OPM's technical assistance and did not find support for the following three practices. OPM officials confirmed that the agency does not provide direct guidance or assistance in some of these areas.

In our review of OPM guidance and documentation on its technical assistance, we found that OPM provides some guidance to agencies regarding steps to identify training and development investment needs and related training strategies, but we did not find guidance on prioritizing these investments so that the most important training needs are addressed first. According to TED officials, each agency must assess its own needs as the primary driver for investment determinations. To that end, OPM officials provide guidance and tools for conducting training needs assessments in OPM's training policy guide and on OPM's website, and they also direct agencies to review benchmarks in the American Society for Training & Development's State of the Industry reports. OPM officials also reported that they use the Human Capital Assessment and Accountability Framework and related efforts to emphasize the importance of considering training as a solution for addressing mission-critical competencies and skill gaps, but they acknowledged that OPM does not provide specific guidance on prioritizing training investments through these processes. OPM officials identified guidance that they believe addresses the leading practice of prioritizing training investments; however, we found that the guidance does not address prioritization.
As we previously mentioned, it is a leading practice for agencies to prioritize their training investments using criteria such as expected demand for the training from internal sources, availability of resources to support the effort, potential for increased revenue, risk of unfavorable consequences if investments are not made, or potential to improve efficiency. TED officials identified OPM's 2000 Guide to Strategically Planning Training and Measuring Results as the source of guidance to agencies on this practice. However, we found that the guide does not provide guidance to agencies on prioritizing training investments. Instead, the guide advises agencies to build a business case for their training strategies. OPM defines a business case as a method for projecting and documenting the benefits to be gained as a result of investing resources in a training intervention. The guide encourages agencies to consider questions that are important to building a business case and provides an example of how to build a business case using this information (see appendix III for diagrams from the guide on building a business case for training). These steps are consistent with our identified leading practice. However, the guide does not take the additional step of advising agencies on how to prioritize the training investments selected from their business cases relative to each other.

We have previously reported that, when budgets are constrained, training is often one of the first investments that agencies reduce. Therefore, it is increasingly important for agencies to prioritize their selected training activities so that the most important training is identified. Moreover, they need to communicate those priorities agency-wide in order to identify common needs and potential areas for consolidated investments. As previously noted in our review, this is a practice that most CHCOs reported they do not implement, which, as illustrated by our case example agencies, has resulted in costly, duplicative, and inefficient training investments at some agencies.

Identify the most appropriate mix of centralized and decentralized approaches for training and development programs. In our review of OPM guidance and documentation on its technical assistance, we did not identify specific guidance or assistance to agencies on this practice. As we previously noted, while neither approach fits every situation, agencies need to consciously think about the advantages and disadvantages of using centralized and decentralized approaches, particularly for the design of training and development programs. Although OPM officials confirmed that they do not provide guidance and assistance to agencies in this area, they agreed with this leading practice. We found that most agencies included in our review reported that they already implement the practice, so additional guidance may not be necessary.

Have criteria for determining whether to design training and development programs in-house or obtain these services from a contractor or other external source. TED officials agreed that they do not provide guidance or assistance to agencies on this practice, and they stated that agencies need to incorporate this leading practice into their training investment decision-making processes. As we previously noted, once an agency has identified its training and development needs, it should make informed decisions about whether to design and develop training and development programs in-house or buy these services from a contractor or other external source.
Factors that they should consider include the capability of in-house staff to develop and implement the training; the prior experience, capability, and stability of possible providers in the marketplace; and agency limitations on cost, time, and resources. As previously mentioned, 12 of the 27 CHCOs reported that they do not implement this leading practice, and our discussion with DOI officials, one of the four agencies that we interviewed for this review, illustrates that even those agencies that reported implementing this practice may not be doing so for all training in the agency. Agency officials reported they could use more OPM assistance in leveraging federal training investments across the government. Part of OPM's role is to identify functional areas in which new or expanded interagency training activity is needed and either conduct such training or arrange for agencies having the substantive competence to do so, as well as to coordinate interagency training conducted by and for agencies. Members of the CLO Council emphasized that they could benefit from more OPM assistance in achieving greater interagency collaboration on training to reduce duplicative training investments. All four agencies that we interviewed reported concerns similar to the Council's. For example, DOI officials noted that OPM's knowledge and expertise could help agencies identify one basic approach to competency management (e.g., establishing levels of proficiency and competency validation processes) that can be used across government, rather than using multiple approaches at various agencies. At present, each agency individually identifies training needed for these competencies, which results in duplication and variation in the quality of training provided throughout the government. DOI officials stated that it could be more efficient if agencies used a standard set of knowledge, skills, and abilities to hire and to identify training and development investment priorities. The officials also suggested that OPM's HR University could be used to provide training for other mission-critical occupations. Further, VA officials stated that it would assist agencies if OPM established government-wide courses for mandatory training and cross-cutting areas. As an example, the officials stated that the federal government has 17 different versions of No Fear Act training. The officials suggested that OPM could establish one government-wide training course for such subjects, which would help agencies save federal time and money. Officials from DHS and DOE expressed similar views. In contrast, the shared training efforts being implemented by the Federal Healthcare Training Partnership collaborative, which consists of 14 federal agencies that provide clinical health care or related training to support their missions, illustrate the potential magnitude of savings that could be achieved by leveraging training across agencies. The Federal Healthcare Training Partnership was created by its members to share training programs and resources across the agencies to speed the provision of employee learning and reduce training costs. According to VA officials, who lead the effort, the agencies formed this group because they saw an unaddressed need and an opportunity to save costs in common training areas. Documentation provided by VA on the collaborative group states that in fiscal year 2011, Federal Healthcare Training Partnership partner organizations shared more than 2,300 programs, generating a total cost avoidance of more than $82 million.
They did so by utilizing the partner organizations' existing learning systems to share training that was originally developed for a single agency's internal use, making it available to all federal learners, and by coordinating the joint development or purchase of training needed by two or more partner agencies. VA officials stated that, while this has been a valuable effort to improve healthcare-related training investments for the agencies involved, all federal agencies would benefit from an expansion of leveraging training investments across the government. OPM officials agreed that increased coordination of mandatory and common training across the government could reduce duplication and improve the efficiency of federal training investments. The officials reported that OPM has already engaged in some efforts to partner with or support CLO and CHCO Council efforts to share specific training across agencies. For example, the officials worked with the Social Security Administration to share a plain-language writing course developed by that agency with other agencies, by placing it on OPM's Training and Development Wiki page. In addition, OPM officials stated that in 2010, the CHCO Council and OPM collaborated to establish HR University, which is aimed at addressing the competency and skill gaps within the HR community and achieving savings government-wide by identifying and sharing the best HR training with all agencies. While the system was initially designed to provide training to the HR community, it has also been used to provide some mandatory training and HR training to supervisors and managers outside that community. For example, officials reported that they recently added a mandated Uniformed Services Employment and Reemployment Rights Act training course to HR University. The CHCO Council and OPM have also developed a formula to calculate cost savings resulting from the shared courses, which agencies can use to track their savings and return on investment. According to the OPM Executive Director of the CHCO Council and HR University's website, in its first year HR University saved the government $14.5 million as a result of the shared training, and OPM officials expect that it could produce significantly more savings when other courses are added. According to OPM officials, while HR University primarily serves the needs of the HR community, OPM would support using the HR University model to centralize training in other occupations or functional areas. The federal government's efforts to build and retain a workforce of skilled and efficient employees are essential to addressing skill gaps in critical fields and to delivering services to the public effectively and efficiently. Training and development programs play a vital role in fulfilling these goals. However, agency leaders need to be as strategic about how they invest resources in this area as they are in other key areas of agency operations. Training investment decisions should be based on an assessment of the appropriate level of training investments and the prioritization of those investments, as well as an evaluation of the most cost-effective delivery mechanisms and the known costs and benefits of those investments. CHCOs and OPM each play a vital role in ensuring that these investment decisions are effectively made.
While CHCOs report that they are implementing leading practices that support the successful delivery of training, they could do more to ensure that these investments are cost-effective. Because many CHCOs do not have the information that they need from component or subagency leaders regarding the level of training investments and mechanisms for setting priorities agency-wide, their agencies are duplicating some internal training investments and missing opportunities to leverage economies of scale and share the most effective training across their agencies. Many CHCOs are also limiting their opportunities to make training more cost-effective and accessible because they are not comparing the merits of different training delivery mechanisms. OPM's guidance and assistance in these three areas are minimal or absent and could be strengthened to assist agencies in implementing these leading practices. In addition to these limitations, some CHCOs do not have a formal process to evaluate the impact of training on their mission. While not all training and development programs require, or are suitable for, higher levels of evaluation, agencies that do not implement this practice for any training are missing information that could help them make more effective investment decisions, and they could benefit from using OPM's existing guidance and assistance on conducting such evaluations. Federal agencies and OPM also need reliable information on how much agencies spend on training and for what purposes, in order to make effective training investment decisions. However, CHCOs do not completely and reliably track training costs agency-wide and, therefore, are unable to provide OPM with the reliable information that it needs to more effectively guide government-wide training policies. OPM has responsibility for providing regulations for the maintenance of agency training data, assessing the completeness and quality of those data when agencies submit them, and using them to target its assistance to agencies. But OPM does not know the extent of the reliability of federal training investment data because it has not compared improvements in the completeness of the data over the last 3 years, has not determined whether the data meet its standards of sufficiency for assessment, and has not assessed the quality of the data in the 6 years that agencies have been required to submit them. Given the fiscal challenges facing the nation, the federal government needs to take advantage of every opportunity to better leverage resources and investments across agencies. However, at present many agencies independently purchase or develop training for the same government-wide mandated courses. OPM has an opportunity to reduce duplicative and inefficient training investments by leveraging existing training resources government-wide. Agency leaders and OPM recognize that this has led to redundant and inefficient federal training investments. HR University—the one-stop-shop training platform administered by OPM for many courses mostly related to the HR community—provides a model that can result in cost savings and help standardize some mandatory training courses across government. To improve federal training investment decision-making processes, the Director of OPM should take the following five actions: 1.
Include in existing or new OPM guidance or technical assistance additional information in the following areas: steps agencies should take and factors they should consider when prioritizing federal training investments agency-wide, including developing a process to rank training using criteria such as expected demand for the investment from internal sources, availability of resources to support the effort, potential for increased revenue, and risk of unfavorable consequences if investments are not made; and steps agencies should take and factors they should consider for comparing the merits of different delivery mechanisms and determining the mix of mechanisms to use, in order to ensure efficient and cost-effective delivery of federal training. Such guidance could include requesting that agencies consistently utilize Standard Form-182 to document and report training costs associated with the different delivery mechanisms employed. 2. In line with statutory and regulatory provisions on the maintenance and reporting of training information, work with the CHCO Council to improve the reliability of agency training investment information by: ensuring that agencies are familiar with and follow guidance outlined in OPM's Guide for the Collection and Management of Training Information regarding which training events should be documented as training and reported to OPM; developing policies to strengthen the utilization of Standard Form-182 to document and report training costs; encouraging agencies, through guidance and technical assistance, to develop policies that require consistent reporting of training data to their learning management systems; and encouraging each agency to assess its existing training information system(s) and identify whether it is providing complete and reliable data and, if not, to develop approaches to improve the system(s) in order to do so. 3. Provide regular report summaries to agencies on EHRI training investment data and its reliability, in order to improve the transparency and reliability of federal training investment data. 4. Once federal training data reliability has been sufficiently improved, consistent with Executive Order No. 11348, use EHRI data to: (a) counsel heads of agencies and other agency officials on the improvement of training, and (b) assist agencies in developing sound programs and financial plans for training and provide advice, information, and assistance to agencies on planning and budgeting training programs. 5. In collaboration with the CHCO and CLO Councils, identify the best existing courses that fulfill government-wide training requirements, such as mandatory Equal Employment Opportunity training, or training in common federal occupations, such as basic training in financial management, and offer them to all agencies through HR University or another appropriate platform to reduce costly and duplicative federal training investments. We provided a draft of this report to OPM, DHS, DOE, DOI, and VA for review and comment. OPM commented on the five recommendations directed to it, concurring with one recommendation, partially concurring with three recommendations, and not concurring with a portion of one recommendation. OPM's official comments are reprinted in appendix IV. OPM, DOI, and VA provided technical comments, which we incorporated into our report, as appropriate. DOE and DHS had no comments.
OPM partially concurred with our first recommendation that it should provide, in existing or new guidance, information on prioritizing federal training investments agency-wide and factors agencies should consider for comparing the merits of different delivery mechanisms. OPM stated that its publications mentioned in our report already provide guidance on the necessary steps and specific factors agencies should consider when prioritizing training investments. However, none of the guides that we obtained or that OPM provided for our review contain a specific discussion about ranking training investments based on key factors that should be considered, such as expected demand for the investment from internal sources, availability of resources to support the effort, potential for increased revenue, risk of unfavorable consequences if investments are not made, or the potential to improve efficiency. OPM stated that, as part of its effort to revise the Human Capital Assessment and Accountability Framework resources that it provides to agencies, OPM plans to include tools and guidance on steps agencies can take to prioritize learning investments as part of their strategic human capital planning. We did not change our recommendation, which is based on OPM's current guidance and assistance. OPM's reported future plan to provide more specific guidance on prioritization has the potential to address our recommendation, when implemented. OPM also agreed to provide further guidance regarding what steps agencies should take and what factors they should consider in comparing the merits of different delivery mechanisms and determining the mix of mechanisms to use to ensure efficient and cost-effective delivery of federal training. OPM did not concur with the portion of our second recommendation regarding working with the CHCO Council to improve the reliability of agency training investment information by developing a common definition of what should be documented as training. OPM stated that the definition of training is clearly stated in 5 U.S.C. chapter 41 and that OPM's Draft Training Policy Handbook and Guide for the Collection and Management of Training Information outline which training events should be documented as training and reported to OPM. Consequently, OPM recommended that we delete this task for OPM. OPM's Guide for the Collection and Management of Training Information states that all courses, workshops, and conferences paid for by the government; all federally mandated training; and all agency-required training should be reported to OPM's EHRI system. It further states that agencies do not have to report training that occurs spontaneously or casually/incidentally (e.g., reading a book, having a discussion, webcasts, briefings, etc.); training that has no specified training goals; training for which there is no way to evaluate whether the training improved knowledge, skills, abilities, or competencies; and training that was not paid for by the government. We agree that this guidance should assist agencies in knowing which training to track and report, and we have therefore removed this task from the recommendation. However, given the concerns raised by officials in our case example agencies regarding inconsistencies in whether conferences and other training events are actually tracked, and recent events regarding spending at such training, we modified our recommendation to suggest that OPM work closely with CHCOs to ensure that this guidance is followed as it addresses the other actions we recommend to improve reliable reporting.
OPM concurred with the other actions identified in the recommendation, which included working with the CHCO Council to: develop policies to strengthen the utilization of Standard Form-182 to document and report training costs; encourage agencies, through guidance and technical assistance, to develop policies that require consistent reporting of training data to their learning management systems; and encourage each agency to assess its existing training information system(s) and identify whether it is providing complete and reliable data and, if not, to develop approaches to improve the system(s) in order to do so. OPM partially concurred with our third recommendation that it should provide regular report summaries to agencies on EHRI training investment data and its reliability, in order to improve the transparency and reliability of federal training investment data. OPM stated that it will provide regular summaries to agencies on the training investment data submitted to OPM to improve transparency. However, OPM stated that these summaries will not directly lead to improved reliability of the data because agencies must take action to improve the data in order to have an effect on data reliability. OPM also noted that agencies currently have the option of working with OPM to secure a subscription to Business Objects, a reporting tool that will allow agencies to run reports of the data they have provided to OPM and determine whether those data accurately reflect what is occurring in their agencies. OPM recommended that we revise our recommendation to read, "Provide regular report summaries to agencies on EHRI training investment data in order to improve the transparency of federal training investment data." We agree that agencies are ultimately responsible for making changes to their data to improve its reliability. However, OPM plays an important role in the first step of that process by reporting the current information that it has, so that agencies can make corrections. We believe that this recommendation, along with our prior recommendation on steps OPM and CHCOs can take to improve reliability, will contribute to improving the transparency and reliability of agency training data. Therefore, we did not make changes to this recommendation. OPM concurred with our fourth recommendation that it should counsel heads of agencies and other agency officials on the improvement of training; assist agencies in developing sound programs and financial plans for training; and provide advice, information, and assistance to agencies on planning and budgeting training programs using EHRI data, once federal training data reliability has been sufficiently improved. OPM stated that it will consult with agencies on possible improvements and assistance on planning training programs once federal training data are reliable. OPM partially concurred with our fifth recommendation that it should, in collaboration with the CHCO and CLO Councils, identify the best existing courses that fulfill government-wide training requirements, such as mandatory Equal Employment Opportunity training, or training in common federal occupations, such as basic training in financial management, and offer them to all agencies through HR University or another appropriate platform to reduce costly and duplicative federal training investments.
OPM stated that it agrees and is already collaborating with the CHCO and CLO Councils to identify, collect, and share existing mandatory courses that fulfill government-wide training requirements (e.g., Plain Writing, Telework, USERRA, Veterans Employment, Constitution Day) through HR University or on OPM's Federal Training and Development Wiki. Therefore, OPM recommended that we revise the recommendation to recognize that the expansion of mandatory training through HR University would be a continuation of efforts it has already started. We have revised the recommendation to reflect this comment. We are sending copies of this report to the Director of OPM. In addition, this report will be available at no charge on the GAO website at www.gao.gov. If you have any questions about this report, please contact me at 202-512-2717 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To better understand how federal training investment decisions are made and whether improvements are needed, you asked us to review the methods that agencies are using to establish their training investment strategies and the Office of Personnel Management's (OPM) training investment guidance to agencies. Accordingly, this review assesses the extent to which (1) chief human capital officers (CHCOs) of selected federal agencies have established processes to set and prioritize training investments that are aligned with leading practices; and (2) OPM's guidance and assistance for developing training investment strategies align with these leading practices. For the purposes of this review, we define the key terms "training," "development," and "agency-wide" in the following ways: Training is making available to employees planned and coordinated educational programs of instruction in professional, technical, or other fields that are or will be related to the employee's job responsibilities. Training can be accomplished through a variety of approaches, such as classroom training, e-learning, and professional conferences that are educational or instructional in nature. Development is generally considered to include training, structured on-the-job learning experiences, and education. Developmental programs can include experiences such as coaching, mentoring, or rotational assignments. Agency-wide includes all components, sub-agencies, or offices within a cabinet department or independent agency. For both objectives of the review, we compared OPM and CHCO practices against eight federal training investment leading practices, which are based on our prior studies, other expert studies, and statutory, regulatory, and executive order training requirements. (See table 1 at the beginning of this report.) OPM reviewed these criteria and agreed that they are practices that agencies should be implementing to support effective training investment decisions. OPM officials also informed us that, while some leading practices are related to training program requirements contained in statutory, regulatory, or executive order provisions, responses to our questions about the leading practices are not an indication of whether agencies are in compliance with these laws and regulations.
To obtain government-wide information on agency training investment practices, we administered a questionnaire on training investment practices and processes to the members of the 27 agencies represented on the CHCO Council and obtained high-level information from each. We provided a standard set of questions to each CHCO to ensure that we consistently captured their responses to our questions on their training investment practices. We then analyzed the results of the questionnaire to identify the main themes and develop summary findings. Two of our analysts conducted this analysis, placed CHCO responses into categories, and tallied the number of responses in each category. A third analyst traced the responses back to the original questionnaire and verified the appropriate categorization of CHCOs' responses. To characterize CHCOs' views throughout this report, we defined modifiers (e.g., "many") to quantify their views as follows: "nearly all" represents 23 to 27 CHCOs, "most" represents 18 to 22, "many" represents 13 to 17, "several" represents 8 to 12, "some" represents 3 to 7, and "few" represents 0 to 3. To obtain additional perspective and insights on the training investment practices identified in the questionnaire, we discussed the responses with the CHCO and Chief Learning Officer (CLO) Councils. In addition, based on the responses to the questionnaire and workforce size, we selected four agencies (the Department of Homeland Security, Department of Veterans Affairs, Department of the Interior, and Department of Energy) from which to obtain illustrative examples of how they implemented the training investment practices identified in the questionnaire. (See table 6 for selection traits.) As part of our review of agency practices, we also obtained information on the steps that agencies are taking to identify and prioritize investment allocations for training required to implement the GPRA Modernization Act of 2010 (GPRAMA). To identify and assess OPM's oversight and guidance to agencies on training investment strategies, we reviewed OPM training guidance; relevant documentation on forums, workshops, and other assistance; and oversight activities. In addition, we interviewed officials from OPM offices with primary responsibility for providing training policy guidance and technical assistance to agencies. We compared this information to the leading practices identified in table 1. We also identified and described the steps that OPM has taken to identify the skills and training needed to implement performance management improvements, such as those required by GPRAMA, as a foundation for future agency training investments. However, we did not assess the effectiveness of OPM's efforts to identify GPRAMA-related skills and actions to develop related training. Based on information obtained from agencies and OPM, we assessed which leading training investment practices were being implemented by agencies and addressed by OPM guidance and assistance. We also identified the challenges or limitations reported by agencies in implementing the practices, as well as opportunities for improvement in agency processes and related OPM guidance. We conducted this performance audit from December 2011 to September 2012, in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Exec. Order No. 11348, section 303(e), requires agency heads to establish priorities for needed training and provide for the use of funds and man-hours in accordance with these priorities. 5 C.F.R. § 410.201(c), in implementing the Executive Order, requires agency heads (or their designees) to establish priorities for training employees and allocate resources according to those priorities. There are no statutory, regulatory, or Executive Order requirements directly related to this practice. There are no statutory, regulatory, or Executive Order requirements directly related to this practice. There are no statutory, regulatory, or Executive Order requirements directly related to this practice. There are no statutory, regulatory, or Executive Order requirements directly related to this practice. 6. Agencies should track the cost and delivery of their training programs. 5 U.S.C. § 4118 authorizes OPM to prescribe regulations and, in doing so, to specifically provide for the maintenance of necessary information concerning the general conduct of the training activities of each agency, and such other information as is necessary "to enable the President and Congress to discharge effectively their respective duties for supervision, control and review of these training programs." The statute also provides for the submission of reports by agencies on the results and effects of training programs and plans, and economies resulting therefrom, including estimates of the costs of training. 5 C.F.R. § 410.601(a) requires agencies to maintain records of training plans, expenditures, and activities in such form and manner as necessary to submit to OPM. Subsection (b) provides that, beginning December 31, 2006, agencies are to report training data at such times and in such form as required for OPM's government-wide Electronic Data Collection System. Agencies should evaluate the benefits achieved through training and development programs, including improvements in individual and agency performance. 5 U.S.C. § 4103(c) requires the head of an agency to evaluate, on a regular basis, each program or plan established, operated, or maintained under subsection (a). There are no statutory, regulatory, or Executive Order requirements directly related to this practice. These key training investment practices are part of the framework outlined in GAO's guide, Human Capital: A Guide for Assessing Strategic Training and Development Efforts for the Federal Government, GAO-04-546G (Washington, D.C.: March 2004). The guide summarizes attributes of effective training and development programs and is based on GAO's analysis of prior work, other related expert studies, and federal training requirements. 5 U.S.C. § 4103(a) requires agency heads to establish, operate, maintain, and evaluate a program or programs, and a plan or plans thereunder, for the training of employees. In addition to the contact named above, William Doherty (Assistant Director), Latesha Love, Angela Leventis, and Karin Fangman made key contributions to this report. Also contributing to this report were Benjamin Crawford, Eric Gorman, and Natalie Maddox.
OPM and agency CHCOs play an important role in ensuring that federal training dollars are invested effectively. GAO was asked to review the extent to which: (1) CHCOs of selected federal agencies have established processes to set and prioritize training investments that are aligned with leading practices; and (2) OPM's guidance and assistance for developing training investment strategies align with these leading practices. GAO obtained information from 27 CHCOs on their training investment practices through a questionnaire, and selected four agencies—the Departments of Energy (DOE), Homeland Security (DHS), the Interior (DOI), and Veterans Affairs (VA)—to provide illustrative examples. GAO compared both CHCO and OPM practices to leading practices identified through past GAO and expert studies. Many Chief Human Capital Officers (CHCOs) reported that they are implementing several leading practices important to making strategic decisions about training delivery, such as determining the best mix of decentralized and centralized training and considering government-wide reform when planning training. However, many CHCOs reported that they are not implementing some practices that support making more cost-effective training investment decisions, such as prioritizing training so that the most important needs are met first and evaluating the benefits of training. In addition, many CHCOs do not have information from component or sub-agency leaders regarding their level of investments and priorities. Consequently, some agencies are duplicating internal training investments across their agencies. Federal agencies also need reliable information on how much they spend on training and for what purposes. However, several CHCOs reported that they do not completely and reliably track training costs agency-wide. The Office of Personnel Management (OPM) provides guidance and assistance to agencies on a number of the leading practices, such as evaluating the benefits of training, in three of its guides and in workshops. In some practice areas that are challenges for agencies, such as prioritization of investments and determining whether to design training and development programs in-house or obtain these services from a contractor, guidance is minimal or absent. OPM also requires agencies to submit training investment data and provides guidance on how to do so, but considers the data to be unreliable because they are incomplete. However, OPM officials have not internally assessed improvements in the completeness of the data over the last 3 years or the quality of the data in the 6 years that agencies have been required to submit them, and have only provided agencies with one summary of their data for correction. Agencies and OPM reported that there are also opportunities for OPM to help agencies reduce duplicative investments across agencies. For example, agencies currently purchase or develop training independently for the same mandated or common occupational training. Agency leaders and OPM recognize that this has led to redundant and inefficient federal training investments. According to OPM officials, HR University—a website currently administered by OPM to provide training for the HR community—has already resulted in cost savings of $14.5 million as a result of sharing the best HR training government-wide.
Several agency and OPM officials reported that HR University could be expanded to provide mandatory training and serve as a model for centralizing training in other occupations or functional areas, which could save millions more and help standardize training. GAO recommends, among other things, that OPM improve guidance and assistance to agencies in establishing a process for setting and prioritizing training investments; improve the reliability of agency training investment information; and identify the best existing courses that fulfill government-wide training requirements and offer them to all agencies through HR University or other appropriate platforms. OPM fully or partially concurred with four recommendations and did not concur with a portion of another. OPM, DOI, and VA provided technical comments, which GAO incorporated, as appropriate, into the report. DOE and DHS had no comments.
As part of our undercover investigation, we produced counterfeit documents before sending our two teams of investigators out to the field. We found examples of the two NRC documents we needed by searching the Internet. We subsequently used commercial, off-the-shelf computer software to produce two counterfeit NRC documents authorizing an individual to receive, acquire, possess, and transfer radioactive sources. To support our investigators' purported reason for having radioactive sources in their possession when making their simultaneous border crossings, a GAO graphic artist designed a logo for our fictitious company and produced a bill of lading using computer software. Our two teams of investigators each transported an amount of radioactive sources sufficient to manufacture a dirty bomb when making their recent, simultaneous border crossings. In support of our earlier work, we had obtained an NRC document and had purchased radioactive sources as well as two containers to store and transport the material. For the purposes of our current undercover investigation, we purchased a small amount of radioactive sources and one container for storing and transporting the material from a commercial source over the telephone. One of our investigators, posing as an employee of a fictitious company, stated that the purpose of his purchase was to use the radioactive sources to calibrate personal radiation detectors. Suppliers are not required to exercise any due diligence in determining whether the buyer has a legitimate use for the radioactive sources, nor are suppliers required to ask the buyer to produce an NRC document for purchases in small quantities. The amount of radioactive sources our investigator sought to purchase did not require an NRC document. The company mailed the radioactive sources to an address in Washington, D.C. On December 14, 2005, our investigators placed two containers of radioactive sources into the trunk of their rental vehicle. Our investigators, acting in an undercover capacity, drove to an official port of entry between Canada and the United States. They also had in their possession a counterfeit bill of lading in the name of a fictitious company and a counterfeit NRC document. At the primary checkpoint, our investigators were signaled to drive through the radiation portal monitors and to meet the CBP inspector at the booth for their primary inspection. As our investigators drove past the radiation portal monitors and approached the primary checkpoint booth, they observed the CBP inspector look down and reach to the right side of his booth. Our investigators assumed that the radiation portal monitors had activated and signaled the presence of radioactive sources. The CBP inspector asked our investigators for identification and asked them where they lived. One of our investigators on the two-man undercover team handed the CBP inspector both of their passports and told him that he lived in Maryland, while the second investigator told the CBP inspector that he lived in Virginia. The CBP inspector also asked our investigators to identify what they were transporting in their vehicle. One of our investigators told the CBP inspector that they were transporting specialized equipment back to the United States. A second CBP inspector, who had come over to assist the first inspector, asked what else our investigators were transporting. One of our investigators told the CBP inspectors that they were transporting radioactive sources for the specialized equipment.
The CBP inspector in the primary checkpoint booth appeared to be writing down the information. Our investigators were then directed to park in a secondary inspection zone, where the CBP inspector conducted further inspections of the vehicle. During the secondary inspection, our investigators told the CBP inspector that they had an NRC document and a bill of lading for the radioactive sources. The CBP inspector asked if he could make copies of our investigators' counterfeit bill of lading on letterhead stationery as well as their counterfeit NRC document. Although the CBP inspector took the documents to the copier, our investigators did not observe him retrieve any copies from the copier. Our investigators watched the CBP inspector use a handheld Radiation Isotope Identifier Device (RIID), which he said is used to identify the type of radioactive source present, to examine the investigators' vehicle. He told our investigators that he had to perform additional inspections. After determining that the investigators were not transporting additional sources of radiation, the CBP inspector made copies of our investigators' drivers' licenses and returned the licenses to them, and our investigators were then allowed to enter the United States. At no time did the CBP inspector question the validity of the counterfeit bill of lading or the counterfeit NRC document. On December 14, 2005, our investigators placed two containers of radioactive sources into the trunk of their vehicle. Our investigators drove to an official port of entry at the southern border. They also had in their possession a counterfeit bill of lading in the name of a fictitious company and a counterfeit NRC document. At the primary checkpoint, our two-person undercover team was signaled by a traffic light to drive through the radiation portal monitors and stopped at the primary checkpoint for their primary inspection. As our investigators drove past the portal monitors and approached the primary checkpoint, they observed that the CBP inspector remained in the primary checkpoint booth for several moments before approaching our investigators' vehicle. Our investigators assumed that the radiation portal monitors had activated and signaled the presence of radioactive sources. The CBP inspector asked our investigators for identification and asked them if they were American citizens. Our investigators told the CBP inspector that they were both American citizens and handed him their state-issued drivers' licenses. The CBP inspector also asked our investigators about the purpose of their trip to Mexico and whether they were bringing anything into the United States from Mexico. Our investigators told the CBP inspector that they were returning from a business trip in Mexico and were not bringing anything into the United States from Mexico. While our investigators remained inside their vehicle, the CBP inspector used what appeared to be a RIID to scan the outside of the vehicle. One of our investigators told him that they were transporting specialized equipment. The CBP inspector asked one of our investigators to open the trunk of the rental vehicle and to show him the specialized equipment. Our investigator told the CBP inspector that they were transporting radioactive sources in addition to the specialized equipment. The primary CBP inspector then directed our investigators to park in a secondary inspection zone for further inspection.
During the secondary inspection, the CBP inspector said he needed to verify the type of material our investigators were transporting, and another CBP inspector approached with what appeared to be a RIID to scan the cardboard boxes in which the radioactive sources were placed. The instrumentation confirmed the presence of radioactive sources. When asked again about the purpose of their visit to Mexico, one of our investigators told the CBP inspector that they had used the radioactive sources in a demonstration designed to secure additional business for their company. The CBP inspector asked for paperwork authorizing them to transport the equipment to Mexico. One of our investigators provided the counterfeit bill of lading on letterhead stationery, as well as their counterfeit NRC document. The CBP inspector took the paperwork provided by our investigators and walked into the CBP station. He returned several minutes later and handed the paperwork back. At no time did the CBP inspector question the validity of the counterfeit bill of lading or the counterfeit NRC document. We conducted corrective action briefings with CBP and NRC officials shortly after completing our undercover operations. On December 21, 2005, we briefed CBP officials about the results of our border crossing tests. CBP officials agreed to work with the NRC and CBP's Laboratories and Scientific Services to develop a way to verify the authenticity of NRC materials documents. We conducted two corrective action briefings with NRC officials, on January 12 and January 24, 2006, about the results of our border crossing tests. NRC officials disagreed with the amount of radioactive material we determined was needed to produce a dirty bomb, noting that NRC's "concern threshold" is significantly higher. We continue to believe that our purchase of radioactive sources and our ability to counterfeit an NRC document are matters that NRC should address. We could have purchased all of the radioactive sources used in our two undercover border crossings by making multiple purchases from different suppliers, using similarly convincing cover stories, using false identities, and having all of the radioactive sources conveniently shipped to our nation's capital. Further, we believe that the amount of radioactive sources that we were able to transport into the United States during our operation would be sufficient to produce two dirty bombs, which could be used as weapons of mass disruption. Finally, NRC officials told us that they are aware of the potential problems of counterfeit documents and that they are working to resolve these issues. Mr. Chairman and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-7455 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To address the threat of dirty bombs and other nuclear material, the federal government has programs in place to regulate the transportation of radioactive sources and to prevent the illegal transport of radioactive sources across our nation's borders. The Department of Homeland Security, through U.S. Customs and Border Protection (CBP), uses radiation detection equipment at ports of entry to prevent the illicit entry of radioactive sources. The goal of CBP's inspection program is to "...thwart the operations of terrorist organizations by detecting, disrupting, and preventing the cross-border travel of terrorists, terrorist funding, and terrorist implements, including Weapons of Mass Destruction and their precursors." Deploying radiation detection equipment is part of CBP's strategy for thwarting radiological terrorism, and CBP is using a range of such equipment to meet its goal of screening all cargo, vehicles, and individuals coming into the United States. Most travelers enter the United States through the nation's 154 land border ports of entry. CBP inspectors at ports of entry are responsible for the primary inspection of travelers to determine their admissibility into the United States and to enforce laws related to preventing the entry of contraband, such as drugs and weapons of mass destruction. Our investigation was conducted at congressional request as a result of widespread congressional and public interest in the security of our nation's borders, given today's unprecedented terrorism threat environment. Our investigation was conducted under the premise that, given today's security environment, our nation's borders must be protected from the smuggling of radioactive sources by terrorists. For the purposes of this undercover investigation, we purchased a small amount of radioactive sources and one container used to store and transport the material from a commercial source over the telephone. One of our investigators, posing as an employee of a fictitious company located in Washington, D.C., stated that the purpose of his purchase was to use the radioactive sources to calibrate personal radiation detection pagers. The purchase was not challenged because suppliers are not required to determine whether a buyer has a legitimate use for the radioactive sources, nor are suppliers required to ask the buyer to produce an NRC document for purchases in small quantities. The radiation portal monitors properly signaled the presence of radioactive material when our two teams of investigators conducted simultaneous border crossings. Our investigators' vehicles were inspected largely in accordance with CBP policy at both the northern and southern borders. However, our investigators were able to enter the United States with enough radioactive sources to make two dirty bombs using counterfeit documents. Specifically, they were able to successfully represent themselves as employees of a fictitious company and present a counterfeit bill of lading and a counterfeit NRC document during the secondary inspections at both locations. The CBP inspectors never questioned the authenticity of the investigators' counterfeit bill of lading or the counterfeit NRC document authorizing them to receive, acquire, possess, and transfer radioactive sources.
The United States is engaged in a comprehensive effort to protect and defend the homeland and defeat terrorism. Using all instruments of national power, the United States and its partners are attacking terrorists both at home and abroad, denying terrorists sanctuary and sponsorship, disrupting the financing of terror, and building and maintaining a united global front against terrorism. After the terrorist attacks of September 11, 2001, military operations began with Operation Noble Eagle, which is aimed at defending the U.S. homeland from terrorist attacks, and Operation Enduring Freedom, which takes place principally in and around Afghanistan but also covers additional operations in the Horn of Africa, the Philippines, and elsewhere. In 2003, DOD began Operation Iraqi Freedom, which takes place in and around Iraq. DOD and the military services are responsible for carrying out these operations. Recently, DOD reported that about 132,000 U.S. military personnel are deployed to Iraq and about 15,000 are deployed to Afghanistan. Diplomatic efforts are also underway to rebuild areas in and around Iraq and Afghanistan, as well as to assist these countries in rebuilding their governments and creating secure nations. State is responsible for all U.S. activities in Iraq except security and military operations. Other U.S. government agencies also play significant roles in this reconstruction effort, including USAID and the U.S. Army Corps of Engineers. The Multi-National Security Transition Command-Iraq, which operates under the Multi-National Force-Iraq, leads coalition efforts to train, equip, and organize Iraqi security forces. In Afghanistan, USAID manages the majority of reconstruction programs and operations. Other U.S. agencies provide additional assistance, including DOD. Members of the North Atlantic Treaty Organization also play a key role in training and equipping Afghan forces. Since 2001, DOD has prepared reports on the costs of its involvement in GWOT. The costs of military contingency operations are referred to as "incremental costs," which are costs that are directly attributable to the operation and would not otherwise have been incurred were it not for the operation. Specifically, the costs are above and beyond baseline training, operations, and personnel costs. Incremental costs include the pay of mobilized reservists as well as the special pays and allowances for deployed personnel, such as imminent danger pay and foreign duty pay for personnel serving in Operation Iraqi Freedom and Operation Enduring Freedom; the cost of transporting personnel and materiel to the theater of operation and supporting them upon arrival; and the operating cost of equipment, such as vehicles and aircraft, among many other costs. Costs that are incurred regardless of whether there is a contingency operation, such as the base pay of active duty military personnel, are not considered incremental. DOD tracks the obligations incurred to support GWOT and produces a monthly cost report, which is distributed throughout the department and used by senior DOD leadership, along with other information, in discussing the cost of the war. It is also used in formulating future budget requests to fund GWOT. The report identifies the monthly and cumulative incremental GWOT obligations. DOD reports the costs by service, defense agency, contingency operation, and appropriation.
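Because incremental costs are defined as the amounts above and beyond baseline training, operations, and personnel costs, the figure reported for an operation is, in principle, a subtraction of baseline spending from total spending. The following is a minimal sketch of that arithmetic; the dollar figures and category names are hypothetical and are ours for illustration only, not DOD data.

```python
# Hypothetical monthly costs for a deployed unit, in millions of dollars.
# Baseline costs (e.g., active duty base pay) are incurred whether or not a
# contingency operation takes place; wartime totals add operation-driven items
# such as imminent danger pay, transport to theater, and equipment operating costs.
baseline = {"military_personnel": 40.0, "operation_and_maintenance": 25.0}
wartime_total = {"military_personnel": 52.0, "operation_and_maintenance": 61.0}

# Incremental cost: only the portion directly attributable to the operation,
# i.e., the amount by which each category exceeds its baseline.
incremental = {
    category: wartime_total[category] - baseline[category]
    for category in baseline
}

print(incremental)                # {'military_personnel': 12.0, 'operation_and_maintenance': 36.0}
print(sum(incremental.values()))  # 48.0 -> the amount reported for the operation
```

Under this definition, a cost report grows only with operation-driven spending; a change in base pay rates, for example, would move both the baseline and the wartime total and leave the reported incremental amount unchanged.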
On October 1, 1998, DOD implemented a standard contingency cost breakdown structure consisting of 55 cost categories to improve the consistency of contingency cost reporting across its multiple services and agencies. Examples of cost categories include facilities/base support and airlift. This cost breakdown structure was also intended to facilitate future efforts to understand and interpret differences between estimated and actual costs. DOD Financial Management Regulation 7000.14-R, volume 12, chapter 23, generally establishes financial policy and procedures related to DOD contingency operations. The regulation incorporates the common cost categories and multiple subcategories, established in 1998 and updated in September 2005, that are used to report DOD's monthly GWOT costs. Obligations are the foundation of all GWOT cost reporting. For example, operation and maintenance obligations in support of GWOT represent tens of thousands, if not hundreds of thousands, of individual transactions ranging in value from one penny to millions of dollars. When obligations are incurred, the military services enter them into their accounting systems using accounting codes. Using the Army as an example, an Army budget activity, such as an installation or unit, initially obligates funds for acquired goods and services by using the Standard Army Accounting Classification Code. An obligation entry includes information on the funding source; the operational mission, such as Operation Iraqi Freedom; and the category of cost. The cost categories are established by the services. Since 2001, Congress has appropriated about $430 billion to DOD and other U.S. government agencies for military operations and reconstruction and stabilization activities supporting GWOT. Much of the funding has come in the form of supplemental appropriations. Some funding has also come through the normal baseline budgets appropriated to the departments. For example, in fiscal years 2005 and 2006, DOD was provided so-called "bridge" funding—$25 billion and $50 billion, respectively—through its regular appropriation, which was intended to fund operations from the beginning of the fiscal year until a supplemental appropriation could be enacted. Also, some funds are appropriated to the various appropriations accounts for each department without being specifically designated for operations in Iraq or Afghanistan. Since September 2001, DOD has received about $386 billion to fund military operations supporting GWOT. In addition, about $44 billion has been made available to U.S. agencies—including DOD, USAID, and State—for reconstruction and stabilization efforts in Iraq ($34.5 billion) and Afghanistan ($9 billion), with an additional $400 million for use in Iraq and Afghanistan through the Commander's Emergency Response Program. These efforts include training and equipping security forces and repairing critical infrastructure. (See table 1.) For fiscal year 2007, DOD has requested another $50 billion in bridge funding for military operations, and other U.S. government agencies have requested $771 million for reconstruction and stabilization activities. The $386 billion DOD has received to fund military operations supporting GWOT also includes funding for homeland defense under Operation Noble Eagle. This operation was funded through supplemental appropriations for DOD until fiscal year 2005, when it was moved into DOD's baseline budget.
This move is consistent with our prior suggestion that, once an operation reaches a known level of effort and costs are more predictable, more funding should be built into the baseline budget. The $386 billion also includes funding for DOD's intelligence programs, as well as other DOD initiatives, such as the Army's efforts to transform its traditional division-based force into a more rapidly deployable modular force that is better able to conduct joint and expeditionary operations. Beginning in fiscal year 2007, the Army's modular transformation will be included in DOD's regular baseline appropriation. Prior to passage of the most recent supplemental appropriation, military service officials told us that they had already spent the $50 billion in bridge funding that was included in the fiscal year 2006 defense appropriations act and had started to use baseline appropriations for GWOT activities, as well as to undertake cost-cutting measures, until the supplemental was enacted. DOD has requested another $50 billion in bridge funding as part of its fiscal year 2007 budget. In addition to the funding provided to support military operations, Congress has appropriated about $44 billion to DOD and other U.S. government agencies to support important reconstruction and stabilization activities in Iraq and Afghanistan since 2001. These activities support GWOT objectives because they help train and equip local security forces and help establish the foundations of a sound economy with the capacity to deliver essential services, such as clean water and reliable electricity. A growing economy also provides employment opportunities as an alternative to recruitment efforts made by insurgents. Since 2003, about $34.5 billion has been provided to support these types of activities in Iraq and, since September 2001, about $9 billion to support these activities in Afghanistan. The recent supplemental appropriation also provided an additional $400 million for the Commander's Emergency Response Program for use in Iraq and Afghanistan. This brings total reconstruction and stabilization funding to over $44 billion. Funding for reconstruction and stabilization efforts has supported the following activities, among others: Training and equipping of Iraqi security forces. Since fiscal year 2003, about $11.7 billion has been made available for U.S. security and justice programs in Iraq, including funds to train and equip the Iraqi security forces. Over the past several months, the Secretaries of State and Defense have cited progress in developing Iraqi security forces and reported that the numbers of operational army personnel and trained and equipped police have increased from about 142,000 in March 2005 to about 266,000 in June 2006. However, as we have previously reported, the number of trained and equipped forces does not provide reliable information on their capabilities. In addition, the administration received $3.0 billion in the recent fiscal year 2006 supplemental appropriation to continue moving the Iraqi security forces toward stand-alone operational capacity. Restoring Iraq's essential services. Since fiscal year 2003, about $10.5 billion has been made available for restoring essential services in Iraq, specifically activities in the oil, water, health, and electricity sectors. U.S. reconstruction efforts have helped increase electricity generation capacity, restart crude oil production, and restore some water treatment plants.
However, key reconstruction goals in the oil, electricity, and water sectors have yet to be achieved due to security, management, and sustainment challenges in U.S.-funded projects. The administration received an additional $1.5 billion in the recent supplemental appropriation for reconstruction assistance to Iraq, including $50 million for USAID's Iraq Community Action Program and $50 million for democracy, rule of law, and reconciliation programs. Training and equipping the Afghan national army. The United States led the international effort to train and equip the Afghan national army, which is crucial to both long-term security and U.S. counterterrorism efforts. About 26,500 troops have been trained and equipped, and the defense force is projected to reach up to 70,000 military and civilian personnel, according to State reporting. The administration has received $1.9 billion in the fiscal year 2006 supplemental appropriation to further prepare Afghan security forces to operate without U.S. support. The U.S. government also funds a variety of other programs that indirectly support GWOT. For example, Congress provides funding for security assistance on a worldwide basis to help train or equip foreign security forces (military and police). In fiscal year 2006, Congress provided an estimated $4.5 billion for two security assistance programs, the International Military Education and Training program and the Foreign Military Financing program. In addition, the U.S. government reported costs of $1.2 billion on worldwide public diplomacy programs in fiscal year 2005. Since GWOT began in 2001, U.S. government agencies have reported hundreds of billions of dollars in costs for overseas military and reconstruction operations; however, as we have previously reported, data reliability and reporting concerns make it difficult to know DOD's total GWOT costs. Since 2001, DOD has reported costs of about $273 billion for overseas GWOT military operations through the end of April 2006. The department's reported costs have grown steadily, from about $105 million reported in fiscal year 2001 to begin preparations for operations in Afghanistan, to over $81.5 billion in fiscal year 2005. U.S. government agencies have reported costs of about $23 billion for Iraqi reconstruction and stabilization. However, U.S. government agencies, other than DOD, do not formally track all GWOT costs. This, along with DOD's cost reliability and reporting problems, makes it difficult for decision makers to know reliably how much the war is costing, to determine how appropriated funds are being spent, and to use historical data to predict future trends. Since the attacks of September 11, 2001, DOD has reported cumulative incremental costs of about $273 billion, through the end of April 2006, for military operations overseas in support of GWOT. This amount includes almost $215 billion for operations in Iraq and almost $58 billion for operations in Afghanistan, the Horn of Africa, the Philippines, and elsewhere. This does not include obligations for intelligence activities and the Army's modular force transformation. The difference between the amount appropriated and DOD's reported costs through April 2006 can generally be attributed to these unreported obligations for intelligence and Army modular force transformation, as well as to funding for procurement, military construction, and research, development, test, and evaluation, which can be obligated over multiple years and has not yet been obligated.
In addition to the costs for overseas operations, DOD has also reported obligations of $27.7 billion through April 2006 for operations in defense of the U.S. homeland under Operation Noble Eagle. To date, the largest reported costs for the overseas GWOT operations have typically been associated with two of DOD's appropriations accounts—operation and maintenance and military personnel. Operation and maintenance expenses cover items such as operational support for housing, food, and services; transportation to move people, supplies, and equipment into the theaters; and the repair of equipment. Military personnel expenses include military pay and allowances for mobilized reservists, as well as the special payments or allowances, such as imminent danger pay and the family separation allowance, that all qualifying military personnel receive. While operation and maintenance and military personnel expenses have tended to be among the highest, DOD has also reported incurring costs for procurement of equipment and other items. As we have reported in the past, we have significant concerns about the overall reliability of DOD's reported cost data. As a result, neither DOD nor Congress can reliably know how much the war is costing. As we reported in September 2005, we found numerous problems with DOD's processes for recording and reporting costs for GWOT. Factors affecting the reliability of DOD's reported costs include long-standing deficiencies in DOD's financial management systems and business processes, the use of estimates instead of actual costs, and the lack of supporting documentation. In at least one case, our work showed that some reported costs may have been materially overstated. Specifically, reported obligations for mobilized Army reservists in fiscal year 2004 were based primarily on estimated rather than actual information and differed from related payroll information by as much as $2.1 billion, or 30 percent of the amount DOD reported in its cost report. In addition, we found inadvertent double counting in a portion of DOD's reported costs amounting to almost $1.8 billion from November 2004 through April 2005. In our September 2005 report, we made several recommendations to the Secretary of Defense to (1) undertake a series of steps to ensure that the services' reported GWOT costs are accurate and reliable; (2) direct the Office of the Under Secretary of Defense (Comptroller) to oversee the services' efforts and to develop a systematic process to review and test the reliability of the overall GWOT reports; (3) expand the department's financial management regulation for contingency operations to include contingencies as large as GWOT; and (4) establish guidelines to control costs and require the services to keep the Comptroller's office informed of their efforts in this area. Since our report, DOD has taken some measures in response to our recommendations intended to improve the reliability and accuracy of its cost reports, such as requiring the military services to identify variances in reported costs from month to month and determine their causes. However, our initial review suggests that DOD and the services have yet to take sufficient action to fully implement these measures and that certain weaknesses in cost reporting continue to exist. Without aggressive action on the part of DOD and the services, the reliability of cost reports will remain in question.
Also, there has been an ongoing issue with the timeliness of DOD's cost reporting. For example, the cost reports for October through December 2005 were not issued until March 2006. To address this long-standing problem, Congress, in the National Defense Authorization Act for Fiscal Year 2006, directed that DOD provide the cost reports to GAO no later than 45 days after the end of the month being reported. DOD has now provided GAO with the March and April cost reports on schedule. DOD's reported costs for GWOT operations have grown steadily in each fiscal year through fiscal year 2005—from about $105 million in fiscal year 2001 to about $81.5 billion in fiscal year 2005. For fiscal year 2006, as of April 2006, DOD has reported obligations of about $49 billion: about $41.9 billion for Iraq and about $7.5 billion for Afghanistan. These amounts include about $22.5 billion in operating support, which pays for transportation, fuel, maintenance, housing, food, services, and other items; $5.3 billion for procurement of equipment and other items; $9.9 billion in military personnel costs, including special pays and allowances for deployed military personnel; $2.9 billion for personnel support, including clothing and medical care; $4.0 billion for transportation, including airlift and sealift; $2.6 billion in support for the Iraq Security Forces; $813 million for the Afghanistan Security Forces; and $474 million in support of coalition forces. Costs for the remainder of fiscal year 2006 are expected to be higher than DOD anticipated, in part because the Army has been unable to follow through with plans to close a number of forward operating bases and consolidate troops at some of the larger locations. Also, savings resulting from a United States and coalition hand-off of operations in Afghanistan to troops from North Atlantic Treaty Organization nations are not expected to be realized until after fiscal year 2007. Furthermore, the rising cost of fuel is likely to push costs even higher than envisioned at the start of the fiscal year. In examining the historical growth of reported overseas GWOT costs, the largest increase has been in operation and maintenance expenses. For example, between fiscal year 2002 and fiscal year 2005, DOD reported increases in these expenses from about $7.6 billion to about $48.7 billion. According to DOD, some of this increase is attributable to higher fuel costs and increased operational support costs for contracts to provide housing, food, and services for the military locations in Iraq and Afghanistan. Over the same period, reported obligations for military personnel increased from about $3.4 billion to about $14.9 billion, and reported procurement obligations increased from $0 to about $16.5 billion. With the steady growth in reported GWOT costs, we believe there is a need to ensure that all commands seek to control costs to the extent possible. As we reported in September 2005, individual commands have taken steps to control costs, and DOD policy generally advises its officials of their financial management responsibilities to ensure the prudent use of contingency funding. However, DOD has not established guidelines that would require commands to take steps to control costs and keep DOD informed of these steps, as we recommended in our prior report. In the absence of such guidelines and reports, DOD cannot be sure that enough is being done to control costs.
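As a quick cross-check of the fiscal year 2006 component figures reported above, the amounts can be summed and compared with the roughly $49 billion total. The snippet below is purely illustrative; the small gap between the sum and the total reflects rounding in the reported figures.

# Illustrative cross-check: FY2006 component obligations, in billions of
# dollars, taken from the figures reported in this statement.
fy2006_components_billions = {
    "Operating support": 22.5,
    "Procurement": 5.3,
    "Military personnel": 9.9,
    "Personnel support": 2.9,
    "Transportation": 4.0,
    "Iraq Security Forces": 2.6,
    "Afghanistan Security Forces": 0.813,
    "Coalition support": 0.474,
}
total = sum(fy2006_components_billions.values())
print(f"Sum of reported components: ${total:.3f} billion")  # about $48.5 billion, versus the roughly $49 billion total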
U.S. government agencies have reported obligating $23 billion for Iraqi reconstruction and stabilization, as of January 2006. However, U.S. government agencies, other than DOD, do not formally track all GWOT costs. Among other uses, these funds have been used for infrastructure repair of the electricity, oil, water, and health sectors; training and equipping of Iraqi security forces (military and police); and administrative expenses. State reports that the remaining funds will not be used for large reconstruction projects but rather to sustain the projects that have already been built and to build greater capacity at the national, provincial, and municipal levels for better and more responsive governments. It appears the United States' military and diplomatic commitments in Iraq and Afghanistan will continue for the foreseeable future and are likely to cost hundreds of billions of dollars. However, costs are difficult to predict because they depend on several direct and indirect cost variables. DOD's future costs will likely be affected by the pace and duration of military operations, the types of facilities needed to support troops overseas, force redeployment plans, and the amount of damaged or destroyed equipment that will need to be repaired or replaced. Other future costs to the U.S. government include nation-building and reconstruction efforts and treating injured GWOT veterans. These costs will require administration decision makers and Congress to consider difficult trade-offs as the nation faces increasing fiscal challenges in the years ahead. The future costs associated with DOD's commitments to GWOT depend on several variables, including (1) the extent and duration of operations, (2) the types of facilities that will be required to support troops overseas, and (3) the amount of equipment that will need to be restored or replaced. As DOD has done with Operation Noble Eagle, we would encourage the department to consider moving other GWOT costs into the baseline budget. This is consistent with our prior suggestion that, once an operation reaches a known level of effort and costs are more predictable, more funding should be built into the baseline budget. Doing so will assist decision makers in determining investment priorities and making trade-offs among funding needs. It is uncertain how long DOD will be engaged in a high pace of military operations associated with GWOT, making it difficult to predict costs associated with future troop levels and mission requirements. There has also been some discussion of reducing the number of troops in Iraq and Afghanistan. However, DOD officials have not announced information about the level or timing of any troop reduction or redeployment plans. While it would appear that reducing the number of troops in theater could lower costs for these operations, we have seen from previous operations in Bosnia and Kosovo that costs may rise due to the increased use of contractors to replace military personnel. Bases still have to be maintained, even with fewer military members in them. Also, if the pace of operations remains high because of security concerns or hostilities, costs for force protection, fuel, and other items could remain high. The United States does not currently have any basing agreements with Iraq. DOD has constructed facilities in Iraq and neighboring countries supporting missions in both Iraq and Afghanistan.
Examples of facilities funded by military construction funds and other appropriations include force protection, airfield and road improvements, fuel handling, power and water distribution, and support facilities. The Secretary of Defense recently testified that some 30 U.S. military bases have already been returned to Iraqi control or closed altogether. If the United States decides to enter into agreements with the new governments in Iraq and Afghanistan to have an enduring presence in these countries, the costs to convert temporary bases into more permanent facilities could be significant. Sustained GWOT operations have taken, and will continue to take, a toll on the condition and readiness of military equipment. The United States faces short- and long-term costs to maintain and restore equipment in theater, as well as to reequip units as these missions end and the units return to their home stations. The uncertainty of how long ongoing operations will continue makes it difficult to estimate the future costs of maintaining and replacing this large amount of equipment. The Army and Marine Corps will have the largest reset costs of the services. DOD has reported that Army equipment usage rates have averaged two to eight times peacetime rates, while senior Marine Corps officials testified that the ground equipment used by the Corps in ongoing operations has experienced usage rates four to nine times peacetime rates. We recently testified that the services are currently funding their reset programs through the use of supplemental appropriations and plan to rely on supplemental appropriations for reset funding through at least fiscal year 2007. According to recent testimony, the Army requirement for reset in fiscal year 2007 is $17.1 billion. The Army expects the requirement beyond fiscal year 2007 to be $12 billion to $13 billion per year through the end of the conflict and for a minimum of two to three years thereafter. In recent testimony, the Marine Corps stated that it needs $11.9 billion to manage the equipment in Operation Iraqi Freedom and Operation Enduring Freedom for fiscal year 2007, with an estimated additional requirement of $5.3 billion for each year that the conflict continues. The uncertainties of the pace and duration of ongoing operations, as well as the overall condition of major equipment items, make it difficult to estimate future equipment reset costs. Equipment used in operations in Iraq will eventually require more intensive repair and overhaul than is typically expected in peacetime. While the services are working to refine overall requirements, the total requirements and costs are unclear. In addition to resetting a large number of major equipment items, the Army and Marine Corps must also plan to replace active, National Guard, and Reserve equipment left in theater to support ongoing operations. As we previously testified, in late 2003 the Army began to direct redeploying National Guard and Reserve units to leave their equipment in theater for use by deploying forces. DOD policy requires the Army to replace equipment transferred to it from the Reserve Component, including temporary withdrawals or loans in excess of 90 days. Yet at the time of our report in October 2005, the Army had neither created a mechanism in the early phases of the war to track Guard equipment left in theater nor prepared replacement plans for this equipment, because the practice of leaving equipment behind was intended to be a short-term measure.
As of June 2006, only 3 replacement plans have been endorsed by the Secretary of Defense, all to replace National Guard equipment, while 22 plans are in various stages of approval. While the exact dollar estimate for these replacements will not be known until operations in Iraq cease, they will likely cost billions of dollars. Future cost variables for other U.S. government agencies include the efforts to help form national and provincial governments and build management capacity in both Afghanistan and Iraq and to build capable and loyal Iraqi and Afghan security forces. Also, there will be further need for funding to restore, sustain, and protect infrastructure. The new Iraqi government will need significant help in building the procurement, financial management, and accountability systems needed to govern and provide basic services to millions of its citizens. In addition, the 18 provincial governments will also require assistance in building management capacity and delivering results to the Iraqi people that make a difference in their daily lives. The costs of sustaining an Iraqi force of 266,000 personnel may require the Iraqi government to spend more money on personnel, maintenance, and equipment than originally anticipated. In addition, the new Iraqi security forces will have recurring training needs and will need additional assistance in replacing lost or stolen equipment and in developing improved logistical and sustainment capabilities. While most of the reconstruction money for Iraq has been obligated, additional funds will be needed to finance remaining reconstruction needs and to restore, sustain, and protect the infrastructure that has been built to date. For example, Iraqi needs are greater than originally anticipated. In the next several years, Iraq will need an estimated $30 billion to reach and sustain oil capacity of 5 million barrels per day, according to industry experts and U.S. officials. For electricity, Iraq will need $20 billion through 2010, according to the Department of Energy's Energy Information Administration. Iraqi budget constraints and limited government managerial capacity limit the country's ability to contribute to future rebuilding efforts. There is widespread corruption in Iraq. Reconstruction efforts have not taken the risk of corruption into account when assessing the costs of achieving U.S. objectives in Iraq. Officials from the International Monetary Fund, the World Bank, Japan, and the European Union cite corruption in the oil sector as a special problem. In addition, according to State officials and reporting documents, about 10 percent of refined fuels are diverted to the black market, and about 30 percent of imported fuels are smuggled out of Iraq and sold for a profit. Future international contributions for Iraq may be limited. Most of the U.S. funds have been obligated, and about 70 percent of the $13.6 billion in international pledges are in the form of loans. The new U.S. embassy will be costly. The embassy is projected to cost about $592 million, but the full cost of establishing a diplomatic presence across Iraq is still unknown. Additional funds are needed to train and equip Afghan security forces. The United States, other donors, and the new Afghan government face significant challenges to establishing viable Afghan army and police forces. Although DOD and State have not yet prepared official cost estimates, the army and police programs could cost up to $7.2 billion to complete and about $600 million annually to sustain.
Moreover, slow progress in resolving other Afghan security problems—the lack of an effective judiciary, the substantial illicit narcotics industry, and the continued presence of armed militias—threatens to undermine overall progress made toward providing nationwide security and ensuring the stability of the Afghan government. Lastly, one of the variables that can influence how much these efforts will cost the United States is the long-term cost of caring for our veterans. Both improvements in medical care in the field and in body armor have increased the survival rate of those who are seriously injured in combat. However, seriously injured survivors will likely require substantial long-term medical care from VA and may require extensive inpatient and home and community-based support services to assist those with traumatic brain injury, spinal cord injury, and other severely disabling conditions. We also know that many servicemembers have been exposed to intense and prolonged combat, which research has shown to be strongly associated with the risk of developing post-traumatic stress disorder. This disorder can occur after experiencing or witnessing a life-threatening event and is the most prevalent mental health disorder resulting from combat. Mental health experts predict that 15 percent or more of the servicemembers returning from operations in Iraq and Afghanistan will develop post-traumatic stress disorder. In addition to an influx of more severely injured patients, the VA health care system will be required to serve large numbers of returning veterans with shorter-term, more routine health care needs. VA has estimated that a little more than 100,000 veterans from operations in Iraq and Afghanistan are currently using VA health care services. VA originally underestimated by 77,000 the number of returning veterans who would use its health care, which in part led VA to request additional appropriations in both fiscal years 2005 and 2006. Long-term estimates of how many returning veterans will use VA health care and the costs of that care are imprecise for a variety of reasons, including uncertainty about the duration of operations in the theaters as discussed above. But current levels of usage by returning servicemembers indicate a growing VA health care workload and costs. Furthermore, while we have no clear idea of the magnitude, there will undoubtedly be long-term financial commitments associated with payments to veterans with long-term disabilities. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you and the subcommittee members may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
After the terrorist attacks of September 11, 2001, the President announced a Global War on Terrorism (GWOT), requiring the collective instruments of the entire federal government to counter the threat of terrorism. Ongoing military and diplomatic operations overseas, especially in Iraq and Afghanistan, constitute a key part of GWOT. These operations involve a wide variety of activities such as combating insurgents, civil affairs, capacity building, infrastructure reconstruction, and training military forces of other nations. The United States has reported substantial costs to date for GWOT-related activities and can expect to incur significant costs for an unspecified time in the future, requiring decision makers to consider difficult trade-offs as the nation faces increasing long-range fiscal challenges. GAO has issued several reports on current and future financial commitments required to support GWOT military operations, as well as diplomatic efforts to stabilize and rebuild Iraq. This testimony discusses (1) the funding Congress has appropriated to the Department of Defense (DOD) and other U.S. government agencies for GWOT-related military operations and reconstruction activities since 2001; (2) costs reported for these operations and activities and the reliability of DOD's reported costs; and (3) issues with estimating future U.S. financial commitments associated with continued involvement in GWOT. Since 2001, Congress has appropriated about $430 billion to DOD and other government agencies for military and diplomatic efforts in support of GWOT. This funding has been provided through regular appropriations as well as supplemental appropriations, which are provided outside of the normal budget process. Since September 2001, DOD has received about $386 billion for GWOT military operations. In addition, agencies including the Department of State, DOD, and the Agency for International Development have received about $44 billion since 2001 to fund reconstruction and stabilization programs in Iraq ($34.5 billion) and Afghanistan ($9 billion) and an additional $400 million to be used in both Iraq and Afghanistan. Since 2001, U.S. government agencies have reported significant costs associated with GWOT, but GAO has concerns with the reliability of DOD's reported cost data. Through April 2006, DOD has reported about $273 billion in incremental costs for GWOT-related operations overseas—costs that would not otherwise have been incurred. DOD's reported GWOT costs and appropriated amounts differ generally because DOD's cost reporting does not capture some items such as intelligence and Army modular force transformation. Also, DOD has not yet used funding made available for multiple years, such as procurement and military construction. GAO's prior work found numerous problems with DOD's processes for recording and reporting GWOT costs, including long-standing deficiencies in DOD's financial management systems and business processes, the use of estimates instead of actual cost data, and the lack of adequate supporting documentation. As a result, neither DOD nor Congress reliably knows how much the war is costing or how appropriated funds are being used, or has historical data useful in considering future funding needs. GAO made several recommendations to improve the reliability and reporting of GWOT costs. In addition to reported costs for military operations, U.S. agencies have obligated about $23 billion of the $30 billion received for Iraqi reconstruction and stabilization, as of January 2006.
U.S. commitments to GWOT will likely involve the continued investment of significant resources, requiring decision makers to consider difficult trade-offs as the nation faces increasing fiscal challenges in the years ahead; however, predicting future costs is difficult because they depend on several direct and indirect cost variables. For DOD, these include the extent and duration of military operations, force redeployment plans, and the amount of damaged or destroyed equipment that will need to be repaired or replaced. Future cost variables for other U.S. government agencies include efforts to help form governments and build capable and loyal security forces in Afghanistan and Iraq, and to meet the health care needs of veterans, including providing future disability payments and medical services.
Select agent regulations do not mandate that specific perimeter security controls be present at BSL-4 labs, resulting in significant differences in perimeter security among the nation's five labs. According to the regulations, each lab must implement a security plan that is sufficient to safeguard select agents against unauthorized access, theft, loss, or release. However, there are no specific perimeter security controls that must be in place at every BSL-4 lab. For our assessment, we identified 15 key security controls; although BSL-4 labs may have different levels of inherent risk, we determined that these controls (discussed in more detail in app. I) represent a baseline for strong perimeter security. While three labs had all or nearly all of the key security controls we assessed, our September 2008 report demonstrated that two labs (Labs C and E) had a significant lack of these controls. See table 1 below. Lab C: Lab C had in place only 3 of the 15 key security controls we assessed. The lab was in an urban environment and publicly accessible, with only limited perimeter barriers. During our assessment, we saw a pedestrian access the building housing the lab through the unguarded loading dock entrance. In addition to lacking any perimeter barriers to prevent unauthorized individuals from approaching the lab, Lab C also lacked an active integrated security system. Without a command and control center or an integrated security system with real-time camera monitoring, the possibility that security officers could detect an intruder entering the perimeter and respond to such an intrusion is greatly reduced. Lab E: Lab E was one of the weakest labs we assessed, with 4 of the 15 key controls in place. It had only limited camera coverage of the outer perimeter of the facility, and the only vehicular barrier consisted of an arm gate that swung across the road. Although the guard houses controlling access to the facility were manned, they appeared antiquated and thus did not portray a strong, professional security infrastructure. The security force charged with protecting the lab was unarmed. Of all the BSL-4 labs we assessed, this was the only lab with an exterior window that could provide direct access to the lab. In lieu of a command and control center, Lab E contracts with an outside company to monitor its alarms at an off-site facility. This adds an unnecessary layer that would not exist with a command and control center and potentially slows the response of emergency responders. Since the contracted company is not physically present at the facility, it is not able to ascertain the nature of an alarm activation. Furthermore, the alarms and cameras are not interfaced, and the cameras are not monitored in real time. Although the presence of the controls we assessed does not automatically ensure a secure perimeter, having most of these controls in place and operating effectively reduces the likelihood of intrusion. As such, we recommended in the September 2008 report that the Director of CDC take action to implement specific perimeter controls for all BSL-4 labs to provide assurance that each lab has a strong perimeter security system in place. As part of this recommendation, we stated that CDC should work with USDA to coordinate its efforts, given that both agencies have the authority to regulate select agents. In its response to the September 2008 report, HHS agreed that perimeter security is an important deterrent against theft of select agents.
HHS indicated that the difference in perimeter security at the five labs was the result of risk-based planning; however, it did not comment on the specific vulnerabilities we identified and whether these should be addressed. In regard to requiring specific perimeter controls for all BSL-4 labs, HHS stated that it would perform further study and outreach to determine whether additional federal regulations are needed. Significant perimeter security differences continue to exist among the nation's five BSL-4 labs operational at the time of our most recent assessment. In our July 2009 report, we stated that CDC has taken limited steps to address our recommendation that it should take action to implement specific perimeter security controls for all BSL-4 labs. CDC stated that the following actions have been taken as of May 2009: In late 2007, CDC, along with other federal agencies, established a U.S. Government Trans-Federal Task Force on Optimizing Biosafety and Biocontainment Oversight. The task force was formed to assess the current framework for local and federal oversight of high-containment laboratory research activities and facilities, including identifying and assessing pertinent laws, regulations, policies, and guidelines and examining the current state of biosafety oversight systems. The task force held a public consultation meeting in December 2008. According to CDC, the task force will communicate specific recommendations about the nation's lab safety and security issues to the Secretaries of both HHS and USDA. CDC and USDA hosted a workshop series in Greenbelt, Maryland, in December 2008 for all of their registered entities and partners. CDC stated that the series included several safety and security topics, including discussion of physical security and operational security. In January 2009, in response to Executive Order 13486, a federal working group (WG) was convened to review current laws, regulations, and guidelines in place to prevent theft, misuse, or diversion to unlawful activity of select agents and toxins. The WG is chaired by HHS and the Department of Defense (DOD) and includes representatives from several federal agencies, as well as a subgroup focused on the physical and facility security of biolabs. The WG is expected to issue its final report to the President. Although CDC has taken some modest steps toward studying how to improve perimeter security controls for all BSL-4 labs, CDC has not established a detailed plan to implement our recommendation. Without a detailed plan from CDC specifying corrective actions, it is impossible to monitor the agency's progress in implementing our recommendation to improve perimeter security controls for all BSL-4 labs. The ability to monitor progress openly and transparently is especially important because a sixth BSL-4 lab recently became operational, as mentioned above, and CDC expects more BSL-4 labs to be operational in the future. Although CDC has taken limited action to address our findings from our September 2008 report, the two deficient BSL-4 labs have made progress on their own. In our July 2009 report, we stated that one BSL-4 lab made a significant number of improvements to increase perimeter security, thus reducing the likelihood of intrusion. The second one made three changes and formed a committee to consider and prioritize other changes. We confirmed the following improvements at Lab C: Visitors are screened by security guards and issued visitor badges. A command and control center was established.
Camera coverage includes all exterior lab entrances. Closed-circuit television (CCTV) is monitored by the command and control center. The cameras currently cover the exterior of the building. Guards can control the cameras by panning, zooming, or tilting. One visible guard is present at the main entrance to the lab, but the guard is not armed. A guard mans the entrance 24 hours a day, 7 days a week. Although the guard is unarmed, this improvement does partially address the requirement for guard presence at lab public entrances. Lab officials described installing armed guards as cost-prohibitive. While the loading dock is still located inside the footprint of the main building, Lab C improved its loading dock security by building a loading dock vehicle gate. Moreover, a pedestrian gate with a sign forbidding entry was built to prevent pedestrians from entering the building through the loading dock; pedestrians were previously allowed to enter the building through the loading dock as a way of taking a shortcut into the building. These new gates prevent individuals from walking into the building, or vehicles from driving up to it, unchallenged. Lab officials said additional enhancements would be completed by fall 2009. These include an active intrusion detection system that is integrated with CCTV and the addition of 14 new interior cameras with pan, tilt, and zoom capabilities. The new cameras will enhance the interior perimeter security of the lab. The command and control center also will have access to and control of these new cameras. After these improvements are finished, the lab will have 8 of the 15 controls we tested in place, plus 2 others that were partially addressed. We verified three improvements were made at Lab E—heavy concrete planters were added as a vehicle barricade along the roadside adjacent to the building; the window was frosted to block sight lines into the lab from nearby rooftops; and a vehicle barricade is being constructed to block unauthorized access to the parking lot adjacent to the lab, thereby increasing the blast stand-off area. The lab also formed a committee to consider additional perimeter security measures such as widening buffer zones and increasing lighting at the perimeter fence. In all, the lab now has 6 of the 15 controls we assessed in place. Although lab officials made three improvements and are considering others, the lab's head of research operations at the facility objected to the findings of our September 2008 report and has challenged the 15 controls we deemed critical to strong perimeter security. He said that lab officials were not afforded an opportunity to respond to the report and correct "inaccuracies." Specifically, he made the following comments on our previous findings: He questioned the basis for our selection of the specific 15 controls we identified as critical to perimeter security and noted that CDC also expressed similar concerns in its comments on our September 2008 report. The lab windows do not provide direct access to the lab. He maintained that a number of features prohibited entry through these windows: the lowermost edge of the windows is more than 7 feet 8 inches above ground level, the windows are certified bulletproof glass and are equipped with inside bars, and breaching the integrity of the outer bulletproof glass triggers alarms for the local guard force.
Furthermore, he said that having such a window was deemed programmatically important when the laboratory was designed in order to provide light-dark orientation for laboratory workers. Finally, he represented that a group of nationally recognized security experts has opined that the windows are not a security threat, but he did not provide evidence of these experts' assessment. Armed guards are present on the campus. He stated that a table in our September 2008 report indicates that armed guards are not present on the campus, although a footnote on a subsequent page acknowledges that an armed security supervisor patrols the facility. A vehicle barrier does surround the perimeter of that portion of the laboratory building housing select agents, including the BSL-4 laboratory. He said it was recommended and approved by the Federal Bureau of Investigation during consultations on the safety of the building and installed in 1999, prior to initiation of research in this facility. We continue to believe that our assessment of perimeter controls at Lab E is accurate. Specifically, we disagree with Lab E's position as follows: As stated in the September 2008 report, we developed the 15 security controls based on our expertise in performing security assessments and our research of commonly accepted physical security principles. Although we acknowledge that the 15 security controls we selected are not the only measures that can be in place to provide effective perimeter security, we determined that these controls (discussed in more detail in app. I) represent a baseline for BSL-4 lab perimeter physical security and contribute to a strong perimeter security system. Having a baseline provides a fair representation of which key perimeter security controls do or do not exist at these facilities. The controls represent commonly accepted physical security principles. A lack of such controls represents a potential security vulnerability. For example, as mentioned above, at the time of our original assessment Lab E had only limited camera coverage of the outer perimeter of the facility. Camera coverage of a building's exterior provides a means to detect and quickly identify potential intruders. As mentioned above, Lab E was the only lab with an exterior window that could provide direct access to the lab. This window allowed for direct "visual" access into the lab area from an adjacent rooftop. Lab E in essence acknowledged this when it informed us in a letter that it "Frosted the BSL-4 laboratory windows to block sight lines from adjacent rooftops." While we credited Lab E for obscuring visual access to the lab by frosting this window, the window continues to pose a security vulnerability because it is not blast-proof. Armed guards are not present on the campus. As mentioned above, Lab E's head of research operations pointed out that our September 2008 report acknowledged that an armed security supervisor patrols the facility. However, employing one armed security supervisor does not constitute the presence of armed "guards." The supervisor also is not generally at the entrances to the facility. He normally responds to incidents and would not generally be in a position to confront an intruder at the point of attack. Furthermore, placing armed guards at entrances also functions as a deterrent. The vehicle barrier did not surround the full perimeter of the BSL-4 lab building, as it adjoined another lab building at the time of our original assessment.
The facility has since placed additional barriers, as noted in this testimony, to give full coverage, thus validating our original assessment. Furthermore, part of the barrier in the area between a small parking lot and the BSL-4 lab building did not provide an adequate blast stand-off area. The lab, as noted in the July 2009 report, has since erected barriers to this parking lot to allow only deliveries into the area. The following table summarizes the progress the two labs have made on 9 of the 15 controls we initially assessed. In our July 2009 report, we made two additional observations that concern perimeter security differences among the nation's five BSL-4 labs that were operational at the time of our assessment: All five BSL-4 labs operating in 2008 had a security plan in place when we assessed them. Yet significant perimeter security differences exist among these high-containment labs. One reason for the discrepancies is that the three labs with strong perimeter security controls in place had to follow additional federal security requirements beyond the select agent regulations. For example, Lab B is a military facility subject to far stricter DOD physical security requirements. It had a perimeter security fence and roving patrol guards visible inside and outside this fence. Labs A and D also must meet additional mandates from the federal agencies that oversee them. A lack of minimum perimeter security requirements contributes to sharp differences among BSL-4 labs as well. CDC inspection officials stated their training and experience had been mainly in the area of safety. They also noted that their philosophy is a layered approach to security and safety. According to CDC officials, they are developing a comprehensive strategy for safety and security of biosafety labs and will adjust the training and inspection process accordingly to match this comprehensive strategy. We made no new recommendations in our July 2009 report. In responding to our report, CDC stated that multiple groups are assessing the issue of laboratory security and developing related recommendations. CDC stated that it will consider our prior recommendation and the reports from the multiple groups together before developing a detailed plan to address security at select agent laboratories. CDC also stated that it is in the process of hiring a Security Officer to provide continued focus on laboratory security. Labs C and E commented on relevant sections of our report, indicating that they have taken or plan to take various actions to improve perimeter security. Mr. Chairman and Members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. To perform our perimeter security assessment of biosafety level 4 (BSL-4) labs, we identified 15 key perimeter security controls. We based their selection on our expertise and research of commonly accepted physical security principles that contribute to a strong perimeter security system. A strong perimeter security system uses layers of security to deter, detect, delay, and deny intruders: Deter. Physical security controls that deter an intruder are intended to reduce the intruder's perception that an attack will be successful—an armed guard posted in front of a lab, for example. Detect. Controls that detect an intruder could include video cameras and alarm systems. They could also include roving guard patrols. Delay.
Controls that delay an intruder increase the opportunity for a successful security response. These controls include barriers such as perimeter fences. Deny. Controls that can deny an intruder include visitor screening that only permits authorized individuals to access the building housing the lab. Furthermore, a lack of windows or other obvious means of accessing a lab is an effective denial mechanism. Some security controls serve multiple purposes. For example, a perimeter fence is a basic security feature that can deter, delay, and deny intruders. However, a perimeter fence on its own will not stop a determined intruder. This is why, in practice, layers of security must be integrated in order to provide the strongest protection. Thus, a perimeter fence should be combined with an intrusion detection system that would alert security officials if the perimeter has been breached. A strong system would then tie the intrusion detection alarm to the closed-circuit television (CCTV) network, allowing security officers to immediately identify intruders. A central command center is a key element for an integrated, active system. It allows security officers to monitor alarm and camera activity—and plan the security response—from a single location. Table 3 shows 15 physical security controls we focused on during our assessment work. In addition to the contact named above, the following individuals made contributions to this testimony: Andy O’Connell, Assistant Director; Matt Valenta, Assistant Director; Christopher W. Backley; Randall Cole; John Cooney; Craig Fischer; Vicki McClure; Anthony Paras; and Verginie Tarpinian. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Biosafety laboratories are primarily regulated by either the Department of Health and Human Services (HHS) or the U.S. Department of Agriculture (USDA), depending on whether the substances they handle pose a threat to the health of humans or to plants, animals, and related products, respectively. Currently, all operational biosafety level 4 (BSL-4) labs are overseen by HHS's Centers for Disease Control and Prevention (CDC). BSL-4 labs handle the world's most dangerous agents and toxins that cause incurable and deadly diseases. This testimony summarizes GAO's reports on perimeter security at the nation's BSL-4 laboratories, issued in September 2008 (GAO-08-1092) and July 2009 (GAO-09-851). Specifically, this testimony describes (1) the findings and recommendation on key perimeter security controls at five of the nation's operational BSL-4 labs, (2) CDC efforts to address the recommendation, (3) improvements that have been made to the perimeter security controls at the two labs found to be deficient, and (4) other observations about the BSL-4 labs GAO assessed. Significant perimeter security differences continue to exist among the nation's five BSL-4 laboratories operational at the time of GAO's assessment. In September 2008, GAO reported that three of the five labs had all or nearly all of the 15 key controls GAO evaluated. Two labs, however, demonstrated a significant lack of these controls, such as camera coverage for all exterior lab entrances and vehicle screening. As a result, GAO recommended that CDC work with USDA to require specific perimeter security controls at high-containment facilities. However, as GAO reported in July 2009, CDC has taken limited action on the recommendation. In July 2009, GAO reported that the two deficient labs made progress on their own despite CDC's limited action. One made a significant number of improvements, thus reducing the likelihood of intrusion. The second made a few changes and formed a committee to consider and prioritize other improvements. Two additional observations about BSL-4 labs concern the significant perimeter security differences among the five labs GAO originally assessed for its September 2008 report. First, labs with stronger perimeter controls had additional security requirements mandated by other federal agencies. For example, one lab is a military facility subject to far stricter Department of Defense physical security requirements. Second, CDC inspection officials stated their training and experience have been focused on safety. CDC officials said they are developing a comprehensive strategy for safety and security of labs and will adjust the training and inspection process to match this strategy.
From 1996 through 2000, NASA was one of the few agencies to be judged by its independent auditor at that time, Arthur Andersen, as meeting all of the federal financial reporting requirements. That is, NASA was one of the few agencies to receive an unqualified, or "clean," opinion on its financial statements, with no material internal control weaknesses noted and no indications that its financial management systems were not in substantial compliance with the requirements of FFMIA. FFMIA reflects the need for agencies to have systems that produce reliable, timely, and accurate financial information needed for day-to-day decision making by requiring agencies to implement and maintain financial management systems that substantially comply with (1) federal financial management systems requirements, (2) the U.S. Government Standard General Ledger (SGL) at the transaction level, and (3) applicable federal accounting standards. Thus, the auditor's report implied that NASA could not only generate reliable information once a year for external financial reporting purposes but could also provide the kind of information needed for day-to-day management decision making. However, as we and others have reported, the independent auditor's reports did not provide an accurate picture of NASA's financial management systems and, instead, failed to disclose pervasive financial management problems that existed at NASA. For example, we have identified NASA's contract management function as an area of high risk since 1990 because of NASA's inability to (1) oversee its contractors and their financial and program performance, and (2) implement a modern, integrated financial management system, which is integral to producing accurate and reliable financial information needed to support contract management. Also, in February 2002, NASA's new independent auditor, PricewaterhouseCoopers, further confirmed NASA's financial management difficulties and disclaimed an opinion on the agency's fiscal year 2001 financial statements. The audit report also identified a number of material internal control weaknesses—primarily regarding PP&E and materials—and stated that, contrary to previous financial audit reports, NASA's financial management systems did not substantially comply with FFMIA. While NASA received an unqualified opinion for its fiscal year 2002 financial statements, these results were achieved only through heroic efforts on the part of NASA and its auditor, and again the audit report identified a number of material internal control weaknesses and stated that NASA's financial management systems did not substantially comply with FFMIA. To its credit, in April 2000, NASA began an effort known as IFMP. IFMP implementation was originally scheduled for completion in fiscal year 2008, but after NASA's new Administrator came on board in fiscal year 2002, the timeline was accelerated to fiscal year 2006, with the core financial module to be completed in fiscal year 2003. NASA's IFMP includes nine module projects supporting a range of financial, administrative, and functional areas. According to NASA officials, of the nine module projects, five are in operation, one is currently in implementation, and three are future modules.
The five modules in operation are resume management, position description management, travel management, executive financial management information (called Erasmus), and core financial; the one project in implementation is budget formulation; and the three future module projects are human resources, asset management, and contract administration. The core financial module, which utilizes the SAP R/3 system, is considered the backbone of IFMP and has become NASA's standard, integrated accounting system used agencywide. The other IFMP module projects will be integrated or interfaced with the core financial module, where applicable. The Joint Financial Management Improvement Program (JFMIP) defines a core financial system (or module) as the backbone of an agency's integrated financial management system: It should provide common processing routines, support common data for critical financial management functions affecting the entire agency, and maintain the required financial data integrity control over financial transactions, resource balances, and other financial systems. A core financial system should support an agency's general ledger, funds management, payment, receivable, and cost management functions. Also, the system should receive data from other financial-related systems, such as inventory and property systems, and from direct user input, and it should provide data for financial statement preparation and for financial performance measurement and analysis. The scope of NASA's core financial module includes the general ledger, budget execution, purchasing, accounts receivable, accounts payable, and cost management. NASA completed implementation of the core financial module at all 10 NASA centers in June 2003. The pilot for the core financial module—conducted at Marshall Space Flight Center—was implemented in October 2002. NASA then deployed the core financial module at the other nine NASA centers in three "waves," the last of which was completed in June 2003. In April 2003, we issued our first report on IFMP in response to your request. At that time, we reported that NASA was not following key best practices for acquiring and implementing the system, which may affect the agency's ability to fully benefit from the new system's capabilities. Specifically, we reported that NASA (1) did not analyze the relationships among selected and proposed IFMP components, (2) had deferred addressing the needs of key system stakeholders, including program managers and cost estimators, and (3) did not properly manage and test its system requirements prior to implementation of the core financial module. As a result, we reported that NASA had increased its risk of implementing a system that would not optimize mission performance and would cost more and take longer to implement than necessary; that the core financial module was not being designed to integrate the cost and schedule data that program managers need to oversee the work of NASA's contractors; and that costly rework would likely be required to fix requirement defects not identified prior to implementation. Although NASA has met the core financial management module's implementation schedule, the system as implemented in June 2003 has limited external financial reporting capabilities.
When NASA announced in June 2003 that the core financial management module was complete, NASA officials acknowledged that additional work remained, including the need to develop and configure a cost-allocation structure within the system so that it would accumulate the full cost of NASA’s programs and projects for external financial reporting purposes. However, we also found that, to meet its implementation schedule, NASA (1) deferred requirements that would require significant business process reengineering or extensive software configuration and (2) continues to rely on manual procedures for many transactions that should be automated in the new system. Consequently, only about one-third of the transaction types that NASA uses in its business processes are currently implemented and fully automated in the core financial module. As part of its implementation strategy, NASA delayed conversion to full-cost accounting until the core financial module was implemented at all centers. After completing implementation of the module in June 2003, NASA began designing the agency’s new cost-allocation structure and expected that full-cost accounting capabilities needed to provide the full cost of its programs and projects for external financial reporting purposes would be available through the core financial module by October 1, 2003. Properly designing, configuring, and testing the cost-allocation structure is key to capturing the full costs of all direct and indirect resources and allocating them to NASA’s programs and activities. However, on May 30, 2003, NASA’s Inspector General reported that NASA had not yet determined how to allocate space shuttle program costs to programs that benefit from space shuttle services or how to allocate civil service personnel costs to benefiting programs and projects. Once these issues were resolved, NASA would then have to configure the core financial module software to accommodate the new allocation structure and properly test the new configuration. Consequently, NASA’s Inspector General expressed concerns about NASA’s ability to meet its October 1, 2003, target date. In early October, we inquired about the status of full-cost accounting within the core financial module, and IFMP officials told us that this capability would be fully implemented on October 26, 2003. However, because of the timing of this report, we did not verify whether this implementation date was met. If NASA is successful in implementing full-cost accounting, the new system should link all of NASA’s direct and indirect costs to specific programs and projects and, for the first time, shed light on the full cost of these programs for external financial reporting purposes. As explained later, managerial cost accounting goes beyond providing the full cost of programs and projects and producing external financial reports; it is also critical for producing the type of cost information needed to effectively manage and oversee NASA’s programs. NASA did not adequately test key requirements or configure the core financial module software to satisfy these requirements prior to implementing the module. Adequately testing and configuring a system prior to implementation helps assure the integrity and effectiveness of transactions that will be processed through the system, thereby reducing the likelihood of rejected transactions, labor-intensive manual workarounds, and inaccurate data.
However, prior to implementation, NASA tested only 120, or 53 percent, of the 225 unique financial events or transaction types identified by NASA as critical for carrying out day-to-day operations and producing external financial reports. NASA deferred implementation of the remaining 105 transaction types until after June 23, 2003, when the system would be implemented at all centers. Ideally, all transactions should be thoroughly tested prior to implementing a system. However, to meet the agency’s implementation schedule, NASA identified and deferred implementation of transactions that it determined would not have a significant or immediate impact on operations. For example, 29 of the deferred transactions were related to year-end closing procedures that would not be needed until September 30, 2003. However, other deferred transactions do have a significant and immediate impact on NASA’s operations throughout the year. For example, 40 transaction types were related to upward and downward adjustments to prior year data, many of which affected NASA’s ability to properly capture adjustments to obligations. Because NASA deferred implementing this capability, the agency has continued to rely on ad hoc, manual processes and “workarounds.” As discussed later, these are the same cumbersome manual processes that resulted in a $644 million error in NASA’s fiscal year 1999 financial statements. NASA hoped to implement most of these deferred transactions by October 2003. In mid-October, NASA officials told us that 75 of the 105 deferred transaction types had been implemented and that the remaining 30 transaction types would be implemented later in fiscal year 2004. Until the remaining transaction types are implemented, however, NASA must continue to process them outside of the module using manual procedures. In addition to the 105 deferred transaction types, NASA uses manual accounting entries to record 43, or 36 percent, of the 120 unique transaction types it considers implemented. NASA considers these 43 transaction types implemented because it has no current plans to automate them in the core financial module. Although manual accounting entries are sometimes necessary to record unusual or infrequent events, many of NASA’s manual entries are made to record routine events that should be processed electronically. For example, NASA uses summary-level manual processes to record all transactions occurring throughout the year related to its reported $37 billion of property. Such a large proportion of manual procedures runs contrary to the purpose of an automated system and makes the agency more vulnerable to processing errors and delays. In fact, prior to implementation, NASA’s consultant responsible for performing an independent compliance review of the core financial module raised concerns about the excessive number of transactions processed with manual journal voucher entries. Despite these concerns, NASA did not alter its implementation plan for the module. The core financial module may provide some improvements to NASA’s current accounting system environment by reducing the extensive amount of time and resources currently required to consolidate NASA’s 10 different reporting entities and close the books each accounting period. However, NASA neither thoroughly tested nor implemented key requirements before deploying the module and has not used the new system as an opportunity to drive needed changes in its management practices and business processes.
Therefore, the core financial module, as implemented in June 2003, does not (1) properly capture, record, and account for PP&E and materials balances or (2) satisfy key system requirements needed to prepare the agency’s Statement of Budgetary Resources. The core financial module, as implemented in June 2003, does not appropriately capture and record PP&E and material in the module’s general ledger at the transaction level. According to SGL requirements and NASA’s own accounting policy, recording PP&E and material in the general ledger at the transaction or item level provides independent control over these assets. However, NASA currently updates the core financial module’s general ledger using periodic summary-level manual entries. Although NASA plans to implement an integrated asset management module in 2005, this alone will not ensure that transaction-level detail is used to update the core financial module. NASA’s PP&E and materials are physically located throughout the world—at NASA centers, contractor facilities, and other private or government-run facilities, and in space. NASA’s most significant property accounting challenge stems from property located at contractor facilities, which accounts for almost $11 billion, or about one-third, of NASA’s reported $37 billion of PP&E and materials and consists primarily of equipment being constructed for NASA or items built or purchased for use in the construction process. However, NASA has not reengineered its processes for capturing contract costs associated with PP&E and material and therefore does not record these property costs in the general ledger at the transaction level. Instead, according to NASA officials, the agency plans to continue to (1) record the cost of PP&E and materials as expenses when initially incurred, (2) periodically determine which of those costs should have been capitalized, and (3) manually correct these records at a summary level. To illustrate, NASA’s contractors provide NASA with monthly contractor cost reports, which contain accrued cost information for any work performed during the month. However, these reports do not contain enough information for NASA to determine what portion of the reported cost pertains to the construction or acquisition of property; therefore, NASA initially records all costs reported by its contractors as expenses. Then, on a quarterly or annual basis, NASA receives a property report from its contractors that provides summary-level information on the amount of property constructed or purchased and currently in the contractor’s possession. Based on these reports, NASA records the cost of contractor-held assets in its general ledger and reverses the expense previously recorded from the contractor cost reports. The problem with NASA’s current process for capturing, recording, and accounting for property in the possession of contractors is that it provides no way for NASA to ensure that the money it spends on the construction of its property is actually recorded as discrete property items. Although NASA plans to implement an integrated asset management module in 2005, the new system will not change the way NASA captures, records, and accounts for property in the possession of contractors.
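The expense-first, correct-later flow described above can be made concrete with a minimal sketch, shown below in Python. This is an illustration only, not NASA’s actual system logic; the account names and dollar amounts are invented.

```python
# Hypothetical illustration of the property accounting flow described
# above: monthly contractor cost reports are expensed in full, and a
# later summary-level property report triggers a manual correcting
# entry. All names and figures are invented for illustration.

ledger = {"expenses": 0.0, "contractor_held_property": 0.0}

def post_monthly_cost_report(accrued_cost):
    # The cost report does not say what portion is capital, so the
    # full amount is initially recorded as an expense.
    ledger["expenses"] += accrued_cost

def post_periodic_property_report(property_cost):
    # Summary-level correction: move capitalizable cost out of expense
    # and into the property account. No item-level traceability
    # survives this entry.
    ledger["expenses"] -= property_cost
    ledger["contractor_held_property"] += property_cost

post_monthly_cost_report(10_000_000)
post_monthly_cost_report(12_000_000)
post_periodic_property_report(15_000_000)  # summary figure from contractor
print(ledger)  # expenses: 7,000,000; property: 15,000,000 (summary only)
```

The second entry is where independent control breaks down: the general ledger balance rests on the contractor’s summary report rather than on discrete, traceable property transactions.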
As noted above, because this problem stems from NASA’s inability to link accrued costs reported by its contractors with specific equipment items being constructed, the problem will not be alleviated when physical custody of the equipment is ultimately transferred to NASA and recorded in NASA’s property records. The core financial module does not capture and report certain key budgetary information needed to prepare the agency’s Statement of Budgetary Resources. Although the software that NASA purchased for the core financial module was certified by JFMIP as meeting all mandatory system requirements, NASA may have relied too heavily on the JFMIP certification. JFMIP has made it clear that its certification, by itself, does not automatically ensure compliance with the goals of FFMIA. Other important factors that affect compliance with Federal Financial Management System Requirements (FFMSR) include how well the software has been configured to work in the agency’s environment and the quality of transaction data in the agency’s feeder systems. When NASA later tested specific requirements related to adjustments to prior year obligations, the core financial module failed the test. Consequently, NASA deferred implementation of those requirements and opted to rely on manual compilations, system queries, or other workarounds to compensate for the system’s inadequacies. These workarounds are known to have caused reporting problems in the past. According to FFMSR, an agency’s core financial module should automatically classify and record upward and downward adjustments of prior year obligations to the appropriate general ledger accounts. However, NASA’s core financial module, as implemented in June 2003, does not provide this capability. For example, if an upward adjustment is required because an invoice includes costs not previously included on the purchase order, such as shipping costs, the system erroneously posts the upward adjustment to a prior year obligation instead of a current year obligation. Because the system does not properly capture and report these adjustments, NASA must rely on manual compilations and system queries to extract the data needed to prepare the agency’s Statement of Budgetary Resources—just as it did using its legacy general ledger systems. As we reported in March 2001, this cumbersome, labor-intensive effort to gather the information needed at the end of each fiscal year was the underlying cause of a $644 million misstatement in NASA’s fiscal year 1999 Statement of Budgetary Resources. During its initial test of system requirements but prior to implementation at Marshall Space Flight Center and Glenn Research Center in October 2002, NASA became aware of the software’s limitations regarding upward and downward adjustments to prior year obligations. In order to meet its schedule, NASA IFMP officials deferred further system modifications to meet these requirements and opted to rely on a manual workaround to satisfy the federal requirement for upward and downward adjustments. NASA’s consultant responsible for performing an independent compliance review of the core financial module raised concerns about this approach. Despite these concerns, NASA went forward with its plans. At the time, NASA had hoped that a “patch” release or future software upgrade would remedy the problem and then NASA could incorporate the fix into the phased agency rollout of the core financial module. 
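The shipping-cost example above can be reduced to a simple classification rule. The sketch below is a hypothetical illustration of the posting logic the requirement calls for, not the vendor’s software; the parameter and account names are invented.

```python
# Hypothetical sketch of the posting rule at issue. A cost that was not
# on the original purchase order (e.g., shipping added at invoicing)
# should obligate current-year funds; only a change to the original
# order itself is an upward adjustment of the prior-year obligation.
# Account names are illustrative, not NASA's chart of accounts.

def classify_invoice_increase(on_original_order, order_fiscal_year,
                              current_fiscal_year):
    if not on_original_order:
        # New cost: record as a current-year obligation.
        return "obligation_current_year"
    if order_fiscal_year < current_fiscal_year:
        # In-scope increase on a prior-year order: upward adjustment.
        return "upward_adjustment_prior_year_obligation"
    return "obligation_current_year"

# The reported defect: the module posted even new costs, such as
# shipping, to the prior-year adjustment account.
assert classify_invoice_increase(False, 2002, 2003) == "obligation_current_year"
```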
However, the upgrades incorporated after the initial implementation at Marshall and Glenn did not resolve all of the issues related to upward and downward adjustments. As a result, NASA continued to face significant problems in this area. According to NASA officials, the agency continued to work with the software vendor to reconfigure the software as necessary to accommodate adjustments to prior year obligations. NASA expected a new software patch to resolve any remaining problems by October 1, 2003. However, in mid-October, NASA officials acknowledged that it might be some time before this issue would be resolved completely. Until then, NASA will continue to rely on manual workarounds. NASA’s implementation of the core financial module has also created new reporting issues. Specifically, the core financial module does not appropriately capture accrued costs and record the corresponding liabilities as accounts payable. In addition, the core financial module records obligations to the general ledger before the obligations are legally binding. Although NASA knew about these problems prior to implementation, the agency went forward with its implementation plans. The core financial module, as implemented in June 2003, does not appropriately capture and record accrued contract costs and accounts payable information in accordance with federal accounting standards and NASA’s own financial management manual. Specifically, the core financial module does not capture accrued costs or record accounts payable if cumulative costs are in excess of obligations for a given contract. As of June 30, 2003, NASA had neither processed approximately $245 million in costs that exceeded obligations nor recorded the corresponding accounts payable, even though this amount represented a legitimate liability for NASA. Instead, these transactions are held outside of the general ledger in suspense until additional funds can be obligated. Thus, any report containing information on NASA’s costs or liabilities would likely be understated by the amount of costs held in suspense at the time of the report. Federal accounting standards and NASA’s own financial management manual require costs to be accrued in the period in which they are incurred and any corresponding liability to be recorded as an account payable, regardless of amounts obligated. Further, federal standards require agencies to disclose unfunded accrued costs—that is, costs in excess of obligations. However, NASA has designed the core financial module such that it will not post costs to the general ledger if they exceed the amount obligated. According to NASA officials, this is intended to be a “red flag” or internal control that alerts agency managers to potential cost overruns. While we agree that NASA could benefit from information that provides an early warning sign of possible cost or schedule problems, we disagree with NASA’s approach. Appropriately posting costs and accounts payable to the general ledger does not preclude NASA from monitoring unfunded accrued costs. Further, as we reported in April 2003, to adequately oversee NASA’s contracts, program managers need reliable contract cost data—both budgeted and actual—and the ability to integrate these data with contract schedule information to monitor progress on the contract. However, because program managers were not involved in defining system requirements or reengineering business processes, the core financial module is not being designed to integrate the cost and schedule data needed by program managers.
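A minimal sketch can contrast the accrual treatment the standards require with the suspense treatment described above. It assumes a single contract, and the account names and amounts are hypothetical.

```python
# Hypothetical sketch of the required treatment: accrue the cost and
# record the payable when incurred, regardless of amounts obligated,
# and disclose any unfunded portion. Figures are invented.

ledger = {"accrued_costs": 0, "accounts_payable": 0,
          "unfunded_accrued_costs": 0}

def accrue_cost(cost, obligated_balance):
    ledger["accrued_costs"] += cost
    ledger["accounts_payable"] += cost
    # Disclosing the unfunded portion preserves the "red flag" NASA
    # wanted without suppressing the liability from the ledger.
    ledger["unfunded_accrued_costs"] = max(
        0, ledger["accrued_costs"] - obligated_balance)

accrue_cost(cost=300, obligated_balance=250)
print(ledger)  # full 300 accrued and payable; 50 disclosed as unfunded
```

Under the suspense approach described above, such transactions would instead sit outside the general ledger entirely, which is what understates reported costs and liabilities.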
The core financial module was intended to streamline many of NASA’s processes and eliminate the need for many paper documents. However, in some areas, the new system has actually increased NASA’s workload. Specifically, because the core financial software allows obligations to be posted to the general ledger before a binding agreement exists, NASA must process purchase orders and contract documents outside the system until they are signed or otherwise legally binding. At that point, NASA initiates the procurement action in the system, repeating steps previously performed manually outside the system. Federal law requires that no amount be recorded as an obligation unless it is supported by documentary evidence of, among other things, a binding agreement. However, the procedures embedded in the core financial module for handling purchase orders and contract documents do not accommodate this requirement. To illustrate, authorized users create electronic purchase requests in the system and release or forward the request to the appropriate approving official for electronic signature. Once signed, the purchase request is forwarded electronically to the purchasing department, where purchasing staff create an electronic purchase order, secure a vendor, and place the order. Under federal appropriations law, a purchase order constitutes an obligation when the order is placed and all relevant parties have signed the purchase order. However, if a purchase order is entered into the system before it is finalized, the module automatically records the obligation. Similarly, if a contract or contract modification is entered into the module before it is signed and legally binding, the module automatically records the obligation. According to NASA officials, they are working with the software vendor to develop a solution and expect that the new software upgrade to be released on October 1, 2004, will alleviate this problem. In the meantime, they will manually process documents outside the system and monitor any documents that have been recorded without signatures to ensure that obligations are not overstated at month-end. The system limitations discussed previously related to full-cost accounting, property accounting, budgetary accounting, accrued costs, and accounts payable—combined with the findings from our April 2003 report—indicate that NASA’s new core financial module and related systems, as implemented in June 2003, do not substantially comply with the requirements of FFMIA. This act provides agencies with a blueprint for building fully integrated financial management systems that routinely provide decision makers with timely, reliable, and useful financial information. FFMIA requires agencies to implement and maintain financial management systems that substantially comply with (1) FFMSR, (2) the SGL at the transaction level, and (3) applicable federal accounting standards. Although NASA has made progress in addressing some of its financial management system weaknesses, the agency’s core financial module does not yet provide all the building blocks needed to achieve the ultimate goal of FFMIA. The core financial module, as implemented in June 2003, does not comply substantially with FFMSR. To ensure that automated federal financial management systems comply with this standard and provide the critical information needed for decision making, JFMIP issued specific functional requirements that core financial systems must meet in order to substantially comply with FFMIA.
Compliance with this standard, at a minimum, means the core financial module must be configured to (1) ensure consistent and accurate processing, reporting, and tracking of program expenditures and budgetary resources and (2) ensure that transactions are processed and recorded in accordance with laws, regulations, and federal accounting standards. However, the core financial module—although it uses software certified by JFMIP—does not perform all mandatory functions. Specifically, the module does not capture and record upward and downward adjustments of obligations incurred in prior fiscal years, and it posts obligations to the general ledger prior to approval. Among other things, FFMSR requires federal financial management systems to produce accurate and reliable information for budgetary reports, including the Statement of Budgetary Resources and the Report on Budget Execution and Budgetary Resources (Standard Form 133). As previously discussed, the core financial module does not capture and record upward and downward adjustments of obligations incurred in prior fiscal years, which is essential for producing both the Statement of Budgetary Resources and Standard Form 133 reports. In addition, FFMSR requires federal financial management systems to process transactions in accordance with federal appropriations law, which states that no amount may be recorded as an obligation unless it has been approved and is supported by documentary evidence. As a result of the system limitations we have discussed, the core financial module erroneously posts obligations to the general ledger prior to approval. The core financial module, as implemented in June 2003, does not substantially comply with the SGL at the transaction level. The SGL requirements ensure consistency in financial transaction processing and external reporting. Compliance with this standard, at a minimum, means that the core financial module must be configured such that (1) reports produced by the systems containing financial information can be traced directly to general ledger accounts, (2) transaction details supporting general ledger account balances are available and can be directly traced to specific general ledger accounts, and (3) the criteria (e.g., timing, processing rules/conditions) for recording financial events are consistent with accounting transaction definitions and processing rules defined in the SGL. As discussed previously, the core financial module does not accumulate transaction-based support for adjustments to prior year obligations, which is essential for producing the Statement of Budgetary Resources and Standard Form 133 reports. Instead, NASA must rely on estimates, manual compilations, and system queries to extract the data needed to prepare these required budgetary reports. As a result, key budgetary information reported on the Statement of Budgetary Resources and Standard Form 133 cannot be traced directly to NASA’s general ledger accounts. NASA also does not properly record PP&E and materials as assets when they are first acquired. Instead, NASA initially records these items as expenses and then later corrects these records using manual procedures. Although this manual process provides NASA a vehicle for reporting PP&E and material costs for financial statement reporting, it is not sufficient for compliance with the SGL. Finally, NASA does not maintain transaction-level detail for its contractor-held property.
Instead, it relies solely on its contractors to maintain such records and to periodically report summary-level information on these assets to NASA. This situation has resulted in material weaknesses in internal control over this property, as previously reported by NASA’s current independent auditor. The core financial module and related systems, as implemented in June 2003, do not substantially comply with federal accounting standards. Compliance with these standards is essential to providing useful and reliable financial information to external and internal users. Federal accounting standards are the authoritative requirements that guide agencies in developing financial management systems, as well as in preparing financial statements. However, as discussed previously, the core financial module did not, as of June 2003, process and report financial information in accordance with federal accounting standards. The major reasons for the module’s noncompliance with federal accounting standards are as follows. The core financial module does not comply with SFFAS No. 1, Accounting for Selected Assets and Liabilities. This standard states that a liability should be recognized and recorded as an account payable when contractors construct facilities or equipment for the government. The liability should be based on an estimate of work completed. However, the core financial module does not capture accrued costs or record accounts payable when the cumulative costs for a given contract exceed obligations. Instead, these transactions are held outside the general ledger, in suspense, until additional funds are obligated, thus understating NASA’s reported program costs and liabilities. The core financial module does not yet provide full-cost accounting capabilities in accordance with SFFAS No. 4, Managerial Cost Accounting Standards. This standard requires agencies to report the full cost of their programs in their general-purpose financial reports. However, as previously discussed, NASA, as of June 2003, had not defined, configured, or tested the appropriate cost pools and cost-allocation structure, which are critical to implementing full-cost accounting. The core financial module does not comply with the broader objective of SFFAS No. 4, Managerial Cost Accounting Standards. The concepts and standards included in SFFAS No. 4 are aimed at achieving three general objectives: (1) providing program managers with relevant and reliable information relating costs to program outputs, (2) providing relevant and reliable cost information to assist the Congress and executives in making decisions about allocating federal resources and evaluating program performance, and (3) ensuring consistency between costs reported in general-purpose financial reports and costs reported to program managers. However, as we reported in April 2003, the core financial module does not provide program managers, cost estimators, or the Congress with the managerially relevant cost information they need to effectively manage and oversee NASA’s contracts and programs. As a result, NASA’s continuing inability to provide its managers with timely, relevant data on the cost, schedule, and performance of its programs is a key reason that GAO continues to report NASA’s contract management as an area of high risk. Because this information is not available through the core financial module, program managers will continue to rely on hard copy reports, electronic spreadsheets, or other means to monitor contractor performance.
Consequently, NASA risks operating with two sets of books—one that is used to report information in the agency’s general-purpose financial reports and another that is used by program managers to run NASA’s projects and programs. Compliance with federal accounting standards goes far beyond receiving a “clean” opinion on financial statements. A key indicator that an agency’s financial management systems do not substantially comply with federal accounting standards is the existence of material weaknesses in the agency’s internal controls. As noted earlier, NASA has not addressed material weaknesses in its internal controls and processes over PP&E and materials, which make up nearly 85 percent, or $37 billion, of NASA’s assets. Instead, NASA plans to rely on existing legacy systems and processes—including the extensive use of manual accounting entries—that the agency’s independent auditor has found to be inadequate for property accounting. As a result, NASA faces serious challenges in complying with these standards. Although NASA plans to implement an integrated asset management module in 2005, most of NASA’s issues related to property accounting have little to do with the lack of an integrated system. Instead, NASA faces two key challenges with respect to property accounting: (1) reengineering its processes for capturing and recording transaction-level detail in the core financial module’s general ledger and (2) addressing material weaknesses in its internal controls over property previously identified by NASA’s independent auditors. To date, NASA has yet to define specific requirements for its asset management module or to determine how it plans to overcome the previously identified material weaknesses in its internal controls over PP&E and material. If NASA continues on its current track, the core financial module and IFMP will fail to achieve the agency’s stated objective of providing reliable, timely financial information for both internal management decision-making and external reporting purposes. Thus far, NASA has focused on deploying the system on its established schedule rather than on ensuring that it satisfies the agency’s internal management and external reporting requirements. To meet its schedule, NASA has put off addressing user requirements that would necessitate significant business process reengineering or extensive software configuration. While NASA is meeting its implementation milestones, it is able to do so only because the agency has deferred critical system capabilities, such as the ability to properly capture, record, and account for its PP&E and material; process budgetary accounting entries; and provide managerially relevant cost information. Unless and until the agency deals with these issues, NASA risks making a substantial investment in a system that will fall far short of its stated goal of providing meaningful information for both internal management and external reporting purposes. Based on the findings from this review, in conjunction with our April 2003 report, we reiterate our April 2003 recommendations that NASA engage stakeholders—including program managers, cost estimators, and the Congress—in developing a complete and correct set of user requirements and reengineer its acquisition management processes, particularly with respect to the consistency and detail of budgeted and actual cost and schedule data provided by contractors.
We also recommend that the NASA Administrator direct the Program Executive Officer for IFMP, in coordination with NASA’s Chief Financial Officer, to implement a corrective action plan that will produce financial management systems that comply substantially with the requirements of FFMIA, including capabilities to produce timely, reliable, and useful financial information related to (1) property, plant, and equipment and materials; (2) budgetary information, including adjustments to prior year obligations; (3) accounts payable and accrued costs; and (4) the full cost of programs for financial reporting purposes. This plan should include time frames and details on how any changes will be monitored, tested, and documented. In written comments, reprinted in appendix II, NASA disagreed with all of our conclusions and recommendations, in part because we reviewed the status of the core financial module as of June 23, 2003, instead of September 30, 2003—the date used for FFMIA reporting. Although NASA takes issue with the date of our review, it is important to note that we selected June 2003 because NASA represented that the core financial module was fully operational at all of its centers at that time. In making that representation, NASA officials acknowledged that, as part of their implementation strategy, they had not yet converted the system to support full-cost accounting. However, they did not disclose any other deferred capabilities. Moreover, NASA’s comments assert that, for PP&E and budgetary reporting, the manual processes or workarounds it has developed to produce year-end balances for the agency’s annual financial statements also satisfy the requirements of FFMIA. We disagree with this assertion. The development of significant manual workarounds in these areas masks the fact that NASA’s core financial module is not designed to, and cannot, produce timely and reliable PP&E and budgetary data with traceability to transaction-based support. The ability to produce reliable numbers once a year for financial reporting purposes does not by itself constitute FFMIA compliance. In its written comments, NASA indicated that it has made changes to the module since June and that the core financial module as implemented in October 2003 has many of the capabilities that were lacking in the June 2003 module. Although we requested status updates between June and October to track NASA’s progress, we did not reassess the module’s capabilities as of October 2003. However, with the possible exception of full-cost accounting, which was planned for October 1, 2003, the changes NASA has cited still involve manual workarounds for producing year-end numbers. FFMIA goes beyond producing auditable financial statements once a year and requires financial systems that ensure accountability on an ongoing basis throughout the year. In response to our April 2003 recommendation, restated in this report, that NASA reengineer its acquisition management processes, particularly with respect to the consistency and detail of budgeted and actual cost and schedule data provided by contractors, NASA indicated that it is in the process of addressing a number of our concerns. Specifically, NASA stated that it (1) has extended the data structure embedded in the core financial module to capture more detailed cost data, (2) is currently assessing its contractor reporting requirements, and (3) is evaluating the possibility of accommodating contract cost and schedule data in an integrated environment.
While it is too early to assess the significance or impact of NASA’s current effort, we are encouraged that NASA is considering the possibility of reengineering its acquisition management processes. This would be an important first step toward ensuring that NASA’s contractors provide the appropriate level and type of cost data needed for both internal management and external reporting purposes and that the core financial module is properly configured to support the agency’s information needs. However, we continue to believe it would have been more effective and efficient if NASA had conducted its assessment of contractor reporting requirements as part of a larger reengineering effort prior to configuration of the core financial module. Further, any effort that falls short of end-to-end business process reengineering will likely not result in a system that substantially improves the data available for contract oversight or ensures consistency between costs reported in general-purpose financial reports and costs reported to program managers. In its written comments, NASA also emphasized that the core financial module alone cannot meet all of the functional requirements needed to manage a program or to prepare cost estimates and asserted that applications such as Erasmus, an executive-level program performance reporting tool, will enable NASA to meet the full depth and breadth of user requirements. We agree that the core financial module alone cannot meet all of NASA’s information needs and that an executive-level reporting tool such as Erasmus may provide NASA executives with greater visibility over program performance. However, Erasmus does little to help program managers oversee contractor performance and, like the core financial module, may contain cost data that are not consistent or reconcilable with the cost data used by program managers to manage contracts. The underlying problem, as we reported in April 2003, is that NASA uses one set of contractor-reported cost data to update the core financial module, while program managers use a separate set of contractor-reported cost data that resides outside the system to monitor contractor performance. Consequently, the cost data maintained in the core financial module and reported in NASA’s external financial reports are not consistent or reconcilable with the cost data used by program managers to manage contracts. Finally, NASA stated that the asset management module, scheduled for implementation in 2005, will make a significant contribution to its program management and cost estimating activities. This module is primarily intended to maintain detailed property records for NASA-held property. Thus, we do not believe an asset management module would have any impact on the cost, schedule, and performance data needed for program management and cost estimating. NASA disagreed with our recommendation related to IFMP’s ability to produce timely, reliable, and useful information for PP&E and materials in accordance with FFMIA requirements. NASA represented that its current processes for capturing and recording property for financial statement reporting purposes also meet the requirements of FFMIA because it has begun requiring more frequent and detailed property reporting by its 55 largest contractors. We disagree with NASA’s assertion.
Because NASA’s current contractor cost-reporting processes do not provide the information needed to distinguish between capital and non-capital expenditures, NASA currently records as expenses all contractor costs as they are incurred and then manually adjusts previous entries to record assets based on periodic summary-level contractor property reports. While this process may satisfy NASA financial statement reporting needs, the development of significant manual workarounds in this area masks the fact that NASA’s core module is not designed to and cannot produce timely and reliable PP&E data with traceability to transaction-based support. The ability to produce reliable numbers once a year for financial reporting purposes does not equate to FFMIA compliance. In accordance with FFMSR, federal accounting standards, and the SGL, when an agency incurs costs for the purchase or construction of PP&E and material, those costs should be recorded in both the agency’s asset management system and its core financial management systems’ general ledger. The only difference for contractor-held property is that the asset management system belongs to the contractor. The asset management system, whether NASA’s or its contractors’, would maintain the agency’s detailed logistical property records for PP&E and materials—including information related to asset location, date of purchase, useful life, quantity, cost, and condition—and the core financial module’s general ledger would maintain a cumulative balance of all purchased or constructed property based on the cost incurred for individual items. The ability to reconcile detailed transactions in the asset management system with amounts recorded in the general ledger provides an efficient way to maintain independent general ledger control over these assets. As mentioned above, NASA first expenses all PP&E in the core financial module, and then later, makes adjustments to record the costs of PP&E as assets at a summary level. There is currently no traceability from the core financial module general ledger to the detailed logistical property records of PP&E and materials. NASA also stated that one of the objectives of the asset management module, now in formulation, is to significantly improve reporting for contractor-held property. While it is our understanding that NASA’s new asset management module, as planned, will maintain detailed property records for NASA-held property and be integrated with other IFMP modules, including the core financial module, we know of no plans to add contractor-held property to this system. In fact, the Federal Acquisition Regulation requires contractors to maintain the logistical property records for government property in their possession and prohibits government agencies from maintaining duplicate property records. Under these circumstances, as part of an overall effort to reengineer its acquisition management process, we believe that NASA must capture the cost and other information it needs from its contractors and develop traceability to contractor logistical records to ensure accountability over its contractor- held property on an ongoing basis. NASA disagreed with our recommendation regarding its ability to produce reliable, timely, and useful budgetary information, including adjustments to prior year obligations. 
NASA stated that although it identified certain transactional reporting limitations in its initial deployment of the core financial module, it developed alternative or “workaround” procedures to ensure the accurate and timely reporting of the identified transactions. However, as stated previously, we do not believe that the manual processes or workarounds NASA uses to produce year-end balances for the agency’s annual financial statements satisfy the requirements of FFMIA. While NASA’s written comments indicate that many of these deferred capabilities were largely enabled by September 30, 2003, they also indicate that more time will be required before the module can process adjustments to prior year obligations. As a result, NASA must use manual workarounds to process these transactions related to fiscal year 2003 activity. We note that these are the same manual procedures used to compensate for deficiencies in NASA’s legacy systems that resulted in the $644 million error in NASA’s fiscal year 1999 Statement of Budgetary Resources. NASA disagreed with our conclusion that its overall financial management system does not properly capture and report all accrued costs and accounts payable. However, we did not report that the information was not contained within the system; rather, we reported that it was not posted to the general ledger. We recognize that NASA records costs that exceed current obligations in the IFMP business warehouse until additional funds can be obligated, in order to highlight potential program cost overruns. While we encourage NASA’s effort to monitor costs in excess of obligations, we do not believe its method for doing so is appropriate. We continue to believe that these costs should be properly recorded in the general ledger in the period in which they are incurred. The risk in NASA’s method is that when costs and liabilities are not properly recorded in the general ledger, these balances are likely to be understated in any financial reports produced during the year, as well as at year-end. It is also important to note that comparing costs with obligations will not necessarily detect a cost overrun. For example, this strategy would not have alerted NASA to its largest cost overrun in recent years—the $5 billion cost growth in the International Space Station program reported in 2001. This overrun was not the result of incurring more costs than the funds obligated. Instead, it was due to cost growth projected to occur in the future—that is, growth in the estimated costs to complete the program. This cost overrun went undetected for a long period of time because of NASA’s deeply rooted culture of managing programs based on current year budgets rather than total costs. As we reported in 2002, for NASA to manage its program costs properly, it needs to focus on the total costs of a program rather than just annual budgets. Thus, NASA’s plan to hold costs in suspense when they exceed obligations will not make such cost overruns any easier to detect or manage. Instead, as we reported in April 2003, to adequately oversee NASA’s contracts, program managers need reliable contract cost data—both budgeted and actual—and the ability to integrate these data with contract schedule information to monitor progress on the contract. However, because program managers were not involved in defining system requirements or reengineering business processes, the core financial module was not designed to integrate the cost and schedule data needed by program managers.
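The distinction can be illustrated with two simple checks, one based on obligations and one based on total estimated cost. The sketch below is a generic, earned-value-style comparison with invented figures, not NASA’s methodology.

```python
# Hypothetical sketch of why an obligations test misses the kind of
# overrun described above. A cost-to-date check passes as long as
# incurred costs stay within obligations; a total-cost check compares
# the estimate at completion against the program baseline.

def obligation_check(cost_to_date, obligations):
    # The "red flag" test: only trips when spending exceeds obligations.
    return cost_to_date <= obligations

def total_cost_check(cost_to_date, estimate_to_complete, baseline):
    # A total-cost view: trips when projected completion cost grows
    # beyond the approved baseline.
    return (cost_to_date + estimate_to_complete) <= baseline

# A program can pass the first test while its projected cost to
# complete has grown far beyond the baseline.
print(obligation_check(cost_to_date=800, obligations=1_000))              # True
print(total_cost_check(800, estimate_to_complete=5_200, baseline=4_000))  # False
```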
NASA also disagreed with our recommendation concerning its system’s ability to account for the full cost of its programs and asserted that it completed implementation of its full-cost accounting capability within IFMP as of October 1, 2003. However, IFMP management told us in early October that this capability would not become operational until October 26, 2003, after NASA completed its year-end closing procedures. Because of our reporting time frame, we did not conduct the detailed procedures that would have been necessary to determine whether this function had begun operating. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report further until 30 days from its date. At that time, we will send copies to interested congressional committees, the NASA Administrator, and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-9505 or [email protected], Keith Rhodes at (202) 512-6412 or [email protected], or Diane Handley at (404) 679-1986 or [email protected]. Key contributors to this report are acknowledged in appendix III. The objective of this report was to assess whether the National Aeronautics and Space Administration (NASA) Integrated Financial Management Program’s (IFMP) core financial module, as implemented in June 2003, would satisfy NASA’s external reporting requirements, such as reliable and auditable financial statements, congressional information needs, and other reporting requirements. Specifically, we assessed whether the core financial module (1) accurately accounts for property, plant, and equipment (PP&E) and materials and supplies, (2) properly accounts for the full cost of NASA’s projects and programs, (3) captures and reports certain key budgetary information, (4) accurately records accounts payable, and (5) complies substantially with the requirements of the Federal Financial Management Improvement Act (FFMIA) of 1996. We did not assess other aspects of the core financial module’s capabilities. We interviewed officials from NASA’s financial management division and the NASA Office of Inspector General to identify various reporting requirements and weaknesses in meeting these requirements and to determine how the core financial module will provide the data needed to meet these requirements. We evaluated fiscal year 2002 internal control weaknesses reported by PricewaterhouseCoopers, NASA’s independent auditors, related to PP&E, materials and supplies, and financial reporting. However, for the purposes of this report, we did not review the auditors’ underlying work paper support. We also reviewed NASA’s process for preparing the Statement of Budgetary Resources and reporting accounts payable, and any related issues identified by auditors. We reviewed applicable Treasury, Office of Management and Budget, and NASA guidance and related federal accounting standards, as well as federal financial management system requirements promulgated by the Joint Financial Management Improvement Program. At two NASA centers, we observed how transactions are recorded in the general ledger within the core financial module and discussed these processes with users of the system.
We reviewed nonrepresentative selections of PP&E, materials, accounts payable, and budgetary transactions. We traced selected transactions to their source documents and also traced selected source documents to the general ledger. We assessed whether transactions were recorded consistently with the Treasury Financial Manual. We also observed and discussed how information on contractor cost reports is recorded in the core financial module. We interviewed various officials from IFMP and its core financial project design and implementation teams, including the IFMP Deputy Program Director, the Core Financial Project Manager, and the Core Financial Deputy Project Manager, to clarify our understanding of the core financial module’s functions and obtain the most recent information on the status of various implementation issues as of June 2003. We also reviewed relevant audit reports from the NASA Office of Inspector General and the results of an independent compliance review of the core financial module performed by NASA’s consultant. We performed our work primarily at NASA headquarters in Washington, D.C., and at the two NASA centers—Marshall Space Flight Center in Huntsville, Alabama, and Glenn Research Center in Cleveland, Ohio—where the core financial module was implemented first. Our work was performed from April 2003 through September 2003 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the NASA Administrator or his designee. Written comments from the NASA Deputy Administrator are presented and evaluated in the “Agency Comments and Our Evaluation” section of this report and are reprinted in appendix II. Staff members who made key contributions to this report were Shawkat Ahmed, Fannie Bivins, Kristi Karls, Chris Martin, and Maria Storts.
In April 2000, the National Aeronautics and Space Administration (NASA) began its Integrated Financial Management Program (IFMP), its third attempt at modernizing its financial management processes and systems. In April 2003, GAO reported that NASA's acquisition strategy had increased the risk that the agency would implement a system that would cost more and do less than planned. This report is one of a series of reviews of NASA's acquisition and implementation of IFMP and focuses on the core financial module's ability to provide the information necessary for external financial reporting. The core financial module of IFMP provides NASA its first agencywide accounting system, a significant improvement over the 10 disparate systems previously used. However, to meet IFMP's aggressive implementation schedule, NASA deferred testing and implementation of many key requirements of the core financial module. Consequently, when NASA announced, in June 2003, that this module was fully operational at each of its 10 centers, about two-thirds of the financial events or transaction types needed to carry out day-to-day operations and produce external financial reports had not been implemented or fully automated in the module. NASA officials acknowledged that, as part of their implementation strategy, they had not yet converted the module to support full-cost accounting. In addition, we found that NASA also deferred implementation of other key core financial module capabilities. Because NASA did not use disciplined processes for defining, managing, and testing key system requirements, or substantially reengineer its business processes prior to implementation, the core financial module, as implemented in June 2003, does not address several long-standing external reporting issues and has created some new problems. Long-standing external financial reporting issues have not been addressed. NASA has not used its implementation of the core financial module as an opportunity to drive needed changes in its management practices and business processes. Therefore, the system does little to address NASA's ability to properly account for $37 billion of reported property or certain aspects of the agency's $15 billion annual budget. New financial reporting problems have emerged. NASA went forward with its aggressive implementation plans even though agency managers knew of problems with the module's ability to properly process and record certain transactions. As a result, the module does not appropriately capture critical information on the cost of NASA's operations, such as certain accrued costs, accounts payable, and obligation transactions. In April 2003, GAO reported that the core financial module did not address key internal management information requirements. Now, GAO has found that the module cannot reliably provide key financial data needed for external financial reporting. Although NASA intends to address many of these issues, its implementation approach raises concerns over its ability to do so. These deferred external reporting capabilities, combined with the findings from our April 2003 report, indicate that NASA's June 2003 core financial module and related systems do not substantially comply with the requirements of the Federal Financial Management Improvement Act (FFMIA). FFMIA addresses the need for agencies' financial systems to provide value to those who use financial data.
NASA must address these issues if the core financial module and IFMP are to achieve the objective of providing reliable, timely financial information for both internal management decision-making and external reporting purposes.
The National School Lunch Program, established in 1946, is intended to safeguard the health and well-being of the nation’s children. The program provides nutritionally balanced, low-cost or free lunches to about 31 million children in participating schools each month. At the federal level, USDA’s Food and Nutrition Service oversees the program, which is administered by states and local school food authorities (SFAs). In fiscal year 2012, the federal government spent over $11 billion on the National School Lunch Program. Specifically, USDA provides reimbursement in the form of cash subsidies and donated commodities based on the number of lunches served that meet certain federal requirements. Although federal requirements for the content of school lunches have existed since the program’s inception, as research has documented changes in the diets of Americans and the increasing incidence of overweight and obesity in the U.S., federal lunch requirements have become increasingly focused on improving the nutritional content of lunches. The Healthy, Hunger-Free Kids Act of 2010, which most recently reauthorized the National School Lunch Program, required changes to the federal lunch requirements with the intention of reducing childhood obesity and improving children’s diets. Since 1994, federal law has required SFAs to serve school lunches that are consistent with the Dietary Guidelines for Americans, and in 2004, federal law required USDA to issue federal rules providing SFAs with specific recommendations for lunches consistent with the most recently published version of the Guidelines. As a result of that requirement, USDA asked the Institute of Medicine to review the food and nutritional needs of school-aged children in the United States using the 2005 Dietary Guidelines for Americans and provide recommended revisions to meal requirements for the National School Lunch Program. The Institute published its final report in 2010, and in that same year the Healthy, Hunger-Free Kids Act required USDA to update the lunch requirements based on these recommendations. The Institute’s report recommended changes to the lunch component and nutrition requirements in place at the time. Regarding the lunch components—fruits, vegetables, grains, meats, and milk—the Institute recommended offering both fruits and vegetables daily, increasing whole grain-rich foods, offering only fat-free and low-fat milk, and limiting the amount of grains and meats/meat alternates served each week. Regarding the nutrition requirements, the Institute recommended including both minimum and maximum calorie levels for lunches, increasing the emphasis on limiting saturated fat and minimizing trans fat, and reducing sodium content. USDA issued a proposed rule on the new lunch requirements in January 2011 and a final rule in January 2012. The final rule required implementation of many of the new lunch requirements beginning in school year 2012-2013. Since the final rule was issued, USDA has provided extensive guidance, as well as technical assistance and training, to states and SFAs to assist with implementation of the new requirements. Because regulations issued by USDA in January 2012 placed limits on the amounts of meats/meat alternates and grains that can be included in a school lunch, all eight SFAs we visited modified or eliminated some popular menu items, leading to negative student reactions in some districts.
USDA’s new regulations specify the minimum and maximum weekly number of ounces of meats, cheese, or other meat alternates and the minimum and maximum weekly number of ounces of grains to be served with lunch, which differ by grade level. In comparison, the previous regulations only specified the minimum number of ounces of meats and grains required to be served with lunch each week. (See table 1.) Officials in one of the districts we visited told us that, in response to the new limits, cheeseburgers were removed from the elementary and middle school lunch menus because adding cheese to the district’s burger patties would have made it difficult to stay within the weekly meat maximums. In another district, the SFA reported that it switched from using shredded cheese on the chili dog to processed cheese sauce because the sauce does not count as a meat alternate. A similar type of switch occurred in one of the districts we visited because of the grain maximums. That SFA reported that it changed from serving a whole grain chip to a potato chip because the potato chip did not count as a grain. The grain maximums also affected popular lunch items, such as sandwiches. For example, four districts we visited reduced certain grain options used for sandwiches, such as the sub roll and the tortilla wrap, and two districts stopped serving peanut butter and jelly sandwiches as a daily option in elementary schools because the weekly grain maximum did not allow for a sandwich to be served every day. SFAs in four of the districts we visited noted that student reactions to these menu item changes were generally negative, and some said the changes had impacts on participation, that is, the number of students purchasing school lunches. For example, the tortilla wrap size change in one district was followed by a significant decrease in the number of students selecting their lunches from the previously popular deli sandwich line in the high schools, as well as a decrease in the overall percentage of students purchasing school lunches in those schools. Another district’s change to its sub roll contributed to a middle and high school student boycott of school lunch that lasted for 3 weeks.

To comply with both the meat and grain maximums and the required calorie minimums for lunches, some districts added foods that generally did not improve the nutritional value of lunches. In the new requirements, USDA specified daily minimum and maximum calorie levels for school lunches by grade group (K-5, 6-8, and 9-12), which lunch menus must meet on average over the school week. However, because the entrée, typically consisting of meat and grain, generally provides the majority of the calories in the meal, the weekly meat and grain maximums that limit the size of entrées in effect also limit the calories of the lunches. As a result, five SFAs we visited reported that the meat and grain maximums made it difficult to plan menus that met the minimum calorie requirement for grade 9-12 lunches—750 calories. To comply, some SFAs added foods to the menus that, while allowable, generally do not improve the nutritional value of lunches. For example, in three of the districts we visited, the SFAs reported adding pudding to certain high school menus to bring the menus into compliance with the calorie minimum. Some SFAs also added gelatin, ice cream, or condiments such as butter, jelly, ranch dressing, or cheese sauce to become compliant, according to the districts we visited and the SFA and industry groups we spoke with.
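The compliance arithmetic the SFAs described can be illustrated with a short sketch. This is a simplified, hypothetical check rather than USDA's actual certification logic: the ounce limits below are placeholder values (the real weekly minimums and maximums differ by grade group and are set out in the regulations summarized in table 1), and only the grades 9-12 average daily calorie range of 750-850 cited in this testimony is used.

```python
# Simplified sketch of the weekly menu checks described above.
# The ounce limits are hypothetical placeholders; the actual weekly
# minimums/maximums vary by grade group (see table 1). The 750-850
# average daily calorie range for grades 9-12 is from this testimony.

WEEKLY_LIMITS_9_12 = {
    "meats_oz": (10, 12),   # hypothetical (min, max) ounce equivalents
    "grains_oz": (10, 12),  # hypothetical (min, max) ounce equivalents
}
CALORIE_RANGE_9_12 = (750, 850)  # average daily calories over the week

def check_week(menu_days):
    """menu_days: one dict per serving day with 'meats_oz', 'grains_oz',
    and 'calories' for the planned lunch."""
    issues = []
    for key, (lo, hi) in WEEKLY_LIMITS_9_12.items():
        total = sum(day[key] for day in menu_days)
        if not lo <= total <= hi:
            issues.append(f"weekly {key} total of {total} outside {lo}-{hi}")
    avg = sum(day["calories"] for day in menu_days) / len(menu_days)
    lo, hi = CALORIE_RANGE_9_12
    if not lo <= avg <= hi:
        issues.append(f"average daily calories of {avg:.0f} outside {lo}-{hi}")
    return issues

# Shrinking entrees to stay under the meat and grain maximums can push the
# weekly calorie average below the 750 minimum -- the bind that led some
# districts to add pudding, gelatin, or condiments.
```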
While these additional menu items provided needed calories to lunches, they also likely increased the amount of sugar, sodium, or fat in the meal, potentially undercutting the federal law’s goal of improving the nutritional quality of lunches.

Some SFAs noted that obtaining meat and grain products from food vendors that complied with the new requirements was a continual and evolving process during school year 2012-2013 because vendors were continually modifying products throughout the year. For example, four SFAs we visited said they met regularly with vendors during school year 2012-2013 as vendors worked to bring their products into compliance. One of those SFAs reported working closely with food manufacturers and vendors throughout the summer of 2012 to find appropriate products, including a 1.5 ounce burger patty—which is less than half the size of a ¼ pound burger—that allowed the district to continue to serve cheeseburgers to all students. Representatives from a group of food manufacturers and other relevant industries we spoke with indicated that the meat and grain maximums were challenging to respond to in part because the grain maximums had unexpectedly changed between the proposed and final rules, and the time between issuance of the final regulations and required implementation was short. Some noted that while they were eventually able to reformulate their products to comply with the new requirements, the process took longer than the 6 months available between issuance of the final rule and the required implementation date.

In response to feedback from states and SFAs regarding operational challenges caused by the meat and grain maximums, USDA lifted the maximums temporarily. First, in December 2012, USDA issued guidance allowing states to consider SFAs to be in compliance with the requirements for school year 2012-2013 if their menus exceeded the weekly meat and grain maximums. A few months later, in February 2013, USDA provided the same flexibility for school year 2013-2014, acknowledging that SFAs needed guidance to help with meal planning and food procurement for the coming school year, as SFAs often plan menus and order or contract for food beginning in the winter of the previous school year. The February guidance also stated that USDA understands the need for longer term guidance on this issue and is considering options for addressing the meat and grain maximums beyond school year 2013-2014. In May 2013, USDA officials told us that the Department wanted to be responsive to the challenges they had heard about, and they did not see a problem making the temporary change to help with implementation because the meat and grain maximums and the calorie maximums both accomplish the goal of addressing portion size, making them somewhat redundant. Although this implies that USDA may permanently remove the meat and grain maximums, USDA officials told us that the Department is still considering options for a long-term solution to the meat and grain maximums and has not yet made a permanent decision.

None of the eight SFAs we visited made substantial changes to their menus in response to USDA’s temporary removal of the weekly meat and grain maximums. Reasons that SFAs cited for this decision included the following: the flexibility was temporary, districts had already modified their menus to comply with the new requirements, products were already ordered for those menus, staff were already trained, and students had been educated about the new requirements.
Instead, those SFAs that made some modifications after the flexibility was allowed focused on marginal changes that would ease menu planning and improve student acceptance of lunches. For example, in the district in which students reacted strongly to the decreased size of the tortilla wrap for sandwiches, the SFA brought in a larger wrap, though it was still smaller than the wrap used previously. Further, in the district that experienced a student boycott of lunch in part because of the change to the sub roll, the sub roll used in prior school years returned to the high school lunch menus. Another district, which had decreased the number of mini corn dogs provided to each elementary school student because of the maximums, added mini corn dogs back to each student’s portion.

SFA directors, food manufacturers, and other relevant industry representatives indicated the need for a timely and permanent federal decision on these maximums. Specifically, some SFA directors we visited told us that it is difficult to know how to proceed with menu planning under the new requirements when the flexibility provided over the maximums continues to be temporary. The School Nutrition Association, which represents SFAs across the country, has indicated that it supports the permanent elimination of the meat and grain maximums, because their removal will give cafeterias more flexibility to design healthy menus that meet nutrition standards and student tastes. Although the flexibility exists for school year 2013-2014, USDA has given SFAs mixed messages regarding the Department’s future plans for the meat and grain maximums, leaving SFAs guessing about the outcome and making it difficult to plan future budgets and food orders. Several industry representatives said that because some SFAs are planning menus that comply with the maximums, while others are planning menus that include larger meat and grain portion sizes, industry is experiencing difficulties forecasting demand, which leads to food production, inventory, and storage challenges. This situation will soon become more complicated because of the impending federal changes to the content of meals served through the School Breakfast Program and other foods sold in schools.

Because the required calorie ranges for grades 6-8 and 9-12 do not overlap, schools with students in both these grade groups faced challenges complying with the calorie requirements. While the grades K-5 and 6-8 average daily calorie ranges for school lunches overlap at 550-650 and 600-700, the grades 6-8 and 9-12 ranges, which are 600-700 and 750-850, do not. This creates a challenge for schools that include students from both grade groups, including schools in two of the districts we visited. One SFA director, whose district includes schools serving 7th through 12th graders, noted that complying with both of the calorie range requirements is particularly difficult when students in different grades use the same serving lines and share a lunch period. The director noted that cashiers at the point-of-sale may not know each student’s grade level, which complicates the accurate identification of a meal that complies with the requirements. In addition, if certain food items are offered to some students and not to others depending on their grade, students may react negatively to the differential treatment.
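The non-overlap can be verified with simple interval arithmetic. The sketch below uses only the calorie ranges cited above; the check itself is an illustration, and the 725-calorie example anticipates the district workaround discussed next.

```python
# Average daily calorie ranges for lunches, by grade group, as described
# in this testimony.
CALORIE_RANGES = {"K-5": (550, 650), "6-8": (600, 700), "9-12": (750, 850)}

def overlap(a, b):
    """True if two (low, high) ranges share at least one value."""
    return a[0] <= b[1] and b[0] <= a[1]

print(overlap(CALORIE_RANGES["K-5"], CALORIE_RANGES["6-8"]))    # True
print(overlap(CALORIE_RANGES["6-8"], CALORIE_RANGES["9-12"]))   # False

def compliant_groups(avg_calories):
    """Grade groups whose range a weekly calorie average satisfies."""
    return [g for g, (lo, hi) in CALORIE_RANGES.items()
            if lo <= avg_calories <= hi]

# No single menu average can satisfy both grades 6-8 and 9-12: a
# 725-calorie average, for example, exceeds the 6-8 maximum of 700 and
# falls short of the 9-12 minimum of 750.
print(compliant_groups(725))  # []
```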
Because of these implementation issues, this district planned its menus to generally provide 725-calorie lunches for all students in these schools; these lunches are not in compliance with either of the required ranges and could potentially result in fiscal action against the SFA in the future. USDA’s response to this issue, provided in part through the Department’s guidance on menu planning under the new lunch requirements, has been limited. In the proposed rule on the new lunch requirements, USDA indicated that the new requirements are expected to bring about positive outcomes, including simplification of school lunch administration and operations. However, in comments on the proposed rule, some school districts expressed concerns that the lack of overlap in the calorie ranges may lead to increased costs and administrative burden. Although USDA did not change the ranges in the final rule, in its guidance on the new requirements, the Department acknowledges that the lack of overlap in the calorie ranges for these grade groups can be challenging. Because of this, USDA’s guidance suggests that districts serve a menu appropriate for the lower grade level and add a few additional foods for students in the upper grade level. This differs from the previous requirements, which allowed schools to comply with meal requirements for the predominant grade group in schools that included students from two different groups. USDA’s guidance also differs to some extent from the approach recommended by the Institute of Medicine in its report on which the federal requirements are based. The report’s authors suggested that, for schools serving students from multiple grade groups on the same serving line, the SFA should work with the state agency to find a solution that ensures the basic elements of the standards for menu planning will be maintained, including moderate calorie values.

While all eight SFAs we visited expressed support for the goal of improving the nutritional quality of lunches and felt the new requirements were moving in that direction, all eight experienced various challenges related to student acceptance of some of the foods served to comply with the requirements. Under the new requirements, lunches must include whole grain-rich products and vegetables from 5 sub-groups each week, and districts we visited noted that obtaining student acceptance of some whole grain-rich products and vegetables in the beans and peas (legumes) and red-orange sub-groups has been challenging. For example, six districts mentioned student acceptance of whole grain breads or pasta as being a challenge. Regarding vegetable sub-groups, five districts we visited said that they have had difficulty obtaining student acceptance of the beans and peas (legumes) sub-group, and two districts expressed difficulty with sweet potatoes, in the red-orange sub-group. Some noted that they have continued to try new recipes throughout the year to address these challenges, but acceptance has been limited. Challenges with student acceptance of these foods were foreseen by the Institute of Medicine in its report recommending they be required components of school lunch, as national data showed that few students reported eating these types of foods. The researchers noted that implementation of effective educational, marketing, and food preparation strategies, as well as the increased availability of suitable and appetizing products, may improve student acceptance of these foods.
Some districts reported that, if the past is an indicator, student acceptance of these foods may improve over time, and student comments regarding other healthy foods they like suggest this as well. In four of the districts we visited, SFA directors noted that they had begun adding whole grains into their menus before the current school year, and they have seen student acceptance of whole grain products improve over time. In addition, one district’s SFA director also noted that acceptance of foods in the beans and peas (legumes) sub-group has improved over time. When we talked to students in the schools we visited and asked them about lunch foods they do not like, these specific foods were mentioned by some students in four of the eight districts, but most students focused their comments on other vegetables or specific entrees. Further, most of the students we talked to indicated that they like to eat healthy and nutritious foods, and they think that school lunches generally provide such foods. Although school year 2012-2013 was the first year that students were required to take a fruit or a vegetable with school lunch nationwide, when we asked students what they like about school lunch this year, students in 13 of the 17 schools we visited to observe lunch reported liking certain fruit and vegetable options.

Food waste is also an indicator of lack of student acceptance of the new lunch requirements. Students may take the food components they are required to take as part of the school lunch, but they may then choose not to consume them. Although none of the districts we visited had fully analyzed food waste over the past few years to determine if it changed during school year 2012-2013, six of the SFAs we visited told us they believe food waste has increased because of the new lunch requirements. In particular, SFAs said that the fruits and vegetables students are now required to take sometimes end up thrown away, and in our lunch period observations in 7 of 17 schools, we saw many students throw some or all of their fruits and vegetables away. However, at the same time, we observed other students take and consume sizable quantities of fruits and vegetables and the other lunch components in the remaining 10 schools in which we observed lunch, resulting in minimal food waste. Four of the SFAs we visited talked about food waste being more of an issue with the youngest elementary school students, possibly because of the amount of food served with the lunch and the amount of time they have to consume it. The Institute of Medicine report acknowledged differences in food intake among elementary students, noting that the amounts of food offered under the new recommendations may be too large for some of the younger elementary school children because they are more likely to have lower energy needs than the older children in the same grade group. In USDA’s final rule, the Department discussed the offer versus serve policy, which has been required for senior high schools and optional for all other schools since 1975, as a way to minimize food waste. Under the current regulations, this policy allows students to decline two of the five meal components offered with the lunch, rather than requiring students to be served all five components.
However, the SFA director in one of the districts we visited noted that the district has chosen not to implement the offer versus serve policy for the youngest students because they have difficulty making choices, which extends the time spent in the serving line and decreases the time students have to consume their lunch. Student participation in lunch has decreased to some extent in school year 2012-2013, which is another indicator that student acceptance of school lunches may have declined since the changes. Most of the SFAs we visited reported that they experienced decreases in lunch participation in school year 2012-2013 in part because of the new lunch requirements and other factors. USDA’s national data, which do not account for adjustments related to changes in monthly serving days or student enrollment across years, also generally show that student lunch participation was lower in school year 2012-2013 than it was the year before. Later this year, when we complete our study of the school lunch changes, we plan to provide additional information on lunch participation trends.

SFAs also faced concerns in school year 2012-2013 that the new lunch requirements were leaving some students hungry—an issue raised in five of the districts we visited. For example, in one district, a high school principal told us that during school year 2012-2013, athletic coaches expressed concerns that student athletes were hungrier after school than they were in previous years, and staff reported that more students were distracted during the final period of the school day than in previous years. In the district we visited in which middle and high school students boycotted school lunch at the beginning of the year, the boycott was led by two student athletes in part because they indicated that the lunches were leaving them hungry. These concerns were likely related to decreased entrée sizes. During our visits to schools, students in six schools mentioned that they have been hungry this year after eating school lunch for various reasons. For example, students in three schools attributed this to the smaller entrees, and students in one of those schools also noted that it may be related to the timing of their lunch periods, as their school’s first lunch period began around 10:30 a.m. and the school day ended at about 2:30 p.m. In another school, students acknowledged that they had not taken or eaten all of the items offered with the lunch, which we observed resulted in a smaller lunch. (See figure 1.) In contrast, when students served themselves all of the lunch components in the districts that we visited, their lunches were substantially larger in size, primarily because of the large amounts of fruits and vegetables they selected. (See figure 2.)

School lunches generally provide fewer calories under the new requirements than in past years, likely because of smaller entrée sizes. Specifically, the new required lunch calorie maximums for each grade group are either lower than or comparable to the calorie minimums previously required. As a result, school lunches generally provided more calories in the past, according to national data, than they are allowed to in school year 2012-2013, particularly for younger students. Although the previous nutrition standards were developed to align school lunches with the Dietary Guidelines for Americans, they were developed in the mid-1990s.
Since then, the percentage of children who are overweight and obese has increased, and research has shown that excess food consumption, poor food choices, and decreased physical activity contribute to these trends. The Institute of Medicine’s 2010 recommendations for the lunch pattern were developed using a data-based approach, which assessed data on healthy weights and heights, physical activity, and the distribution of calories among meals, and the authors indicate that the recommended lunches are appropriate for the level of physical activity of most children.

SFAs also expressed concerns about the impact of compliance with the new lunch requirements on food costs and their budgets. All eight SFAs we visited reported that they have incurred increases in fruit and vegetable costs this year because of the requirement that students take at least one fruit or vegetable with lunch. Further, most indicated that overall costs for school lunch were greater in school year 2012-2013 than in the past, and three expressed concerns about the impact of these changes on their overall financial stability. Because we conducted our visits before the end of the school year, we have not yet obtained data from these SFAs on how they ended the year financially, though we plan to provide information on those results in our final report. All eight SFAs we visited also discussed other challenges implementing the lunch changes during school year 2012-2013, such as additional menu planning issues, food procurement, new requirements related to the price of lunches, the pace of implementation, and USDA’s assistance with the changes. When we complete our study of the lunch changes later this year, we will provide additional information about implementation challenges and USDA’s assistance to states and SFAs with implementation.

In addition to the school lunch changes, the Healthy, Hunger-Free Kids Act of 2010 required that USDA specify and require nutrition standards for all foods and beverages sold outside the school meals programs on the school campus during the school day, which are commonly referred to as competitive foods because they compete with school meal programs. Competitive foods are often sold through vending machines, school stores, and fundraisers, and also include SFA sales of a la carte items in the cafeteria. In school year 2009-2010, competitive foods were sold in an estimated 93 percent of schools nationwide, according to a recent USDA study. The proposed rule containing these standards was published by USDA in February 2013, and during our visits to SFAs, many expressed concerns that certain aspects of the proposed rule would be challenging to implement, if finalized. Specifically, seven of the eight SFAs we visited expressed concerns about what they viewed as a lack of clarity in the proposed rule regarding how the nutrition standards for competitive food sales administered by entities other than the SFA will be enforced. In our 2005 report on competitive foods, we found that many different people made decisions about competitive food sales, but no one person commonly had responsibility for all sales in a school. At that time, in a majority of schools nationwide, district officials made competitive food policies, while SFA directors and principals made decisions about specific sales. Other groups, such as student clubs and booster groups, also made competitive food decisions through their direct involvement in sales.
The number and variety of groups involved in these sales typically increased as the school level increased. For example, an estimated 48 percent of middle schools nationwide had three or more groups involved in these sales compared to an estimated 83 percent of high schools. Although a 2004 law required districts to implement wellness policies in school year 2006-2007 that addressed nutritional guidelines for all foods available in schools during the school day, some of the SFAs we recently visited told us that these policies have generally not been enforced, in part because no one person was granted enforcement responsibility over all such sales.

SFAs we visited also expressed concern that the proposed rule’s inclusion of differing nutrition standards based on the type of competitive food sale will put the SFA at a competitive disadvantage relative to other food sales within a school. For example, five SFA directors expressed concerns about the proposed rule’s provision allowing states discretion to make decisions about fundraisers that are exempt from the federal nutrition standards for competitive foods. Some SFA directors expressed concerns that this would potentially result in inconsistent treatment, whereby SFAs’ competitive food sales would be required to follow the nutrition standards and fundraisers would not. Similarly, some SFAs expressed concerns about the proposed rule’s inclusion of different standards for beverages sold in food service areas during meal periods—which are typically sold through SFA a la carte sales—and beverages sold outside of meal service areas—such as those through vending machines. Specifically, although the proposed rule allows the sale of milk, water, and juice through any competitive food venue at any time, the rule also allows the sale of other beverages, except in food service areas during meal periods. However, this restriction is somewhat similar to the current federal requirements on competitive food sales.

Across the country, more nutritious school lunches likely were provided to students during school year 2012-2013. All eight SFAs we visited expressed support for the goal of improving the nutritional quality of lunches and felt the new federal requirements were moving in that direction. Many students’ positive comments on healthy foods, their views that school lunches generally provide such foods, and their consumption of sizeable quantities of fruits and vegetables in the majority of schools we visited suggest that acceptance of the new lunch requirements will improve over time. However, as the first year of implementation of the new requirements for the content of school lunches has unfolded, the SFAs we visited also faced a variety of challenges. While some of the challenges SFAs faced this year have been addressed and others may become less difficult as time elapses, those caused by the weekly maximum amounts of meats and grains permitted in lunches and the lack of overlap in the allowable calorie ranges for grades 6-8 and 9-12 likely will not. Because of the meat and grain maximums, some districts made menu decisions that are inconsistent with the goal of improving children’s diets, as they added desserts and condiments that increased the amount of sugar, salt, or fat in lunches in order to comply with the required calorie minimums.
Acknowledging that the meat and grain maximums created challenges for SFAs, USDA lifted them through school year 2013-2014 and indicated that the maximums may not be needed to accomplish the nutrition goals of the new requirements. However, although USDA has acknowledged the need for a permanent decision on the maximums, it has yet to provide one, hindering the ability of school districts to plan menus, food purchases, budgets, staff training, and student education because they do not know whether the meat and grain restrictions will be reinstated in the future. In addition, the requirement that lunches served to students in grades 6-8 provide different amounts of calories than lunches served to students in grades 9-12—even in schools that serve students in both grade groups—is inconsistent with past practices, expert recommendations, and USDA’s intent of simplifying the administration and operations of the school lunch program. Most significantly, the inflexibility of these calorie requirements substantially hinders certain SFAs’ ability to comply, which may potentially result in fiscal action against SFAs in future years. Absent a permanent USDA decision to remove the meat and grain maximums and increase flexibility for schools that serve meals to students in both the 6-8 and 9-12 grade groupings, SFAs will continue to face challenges implementing the regulations, potentially impeding their efforts to meet their key goals—healthier foods in school for healthier students.

To improve SFAs’ ability to design menus that comply with the new lunch requirements, we recommend that the Secretary of Agriculture (1) permanently remove the weekly meat/meat alternate and grain maximums for school lunch defined in federal regulations, and (2) modify federal regulations or guidance to allow school districts flexibility in complying with the defined calorie ranges for schools with students in both the grades 6-8 and 9-12 groups.

We provided a draft of this testimony to USDA for review and comment. In oral comments, USDA officials indicated that they generally agreed with our recommendation regarding meats and grains, and they are currently developing an approach for permanently lifting the meat and grain maximums. Officials added that while they recognize the need to address the challenges posed by lack of overlap in the calorie ranges for grades 6-8 and 9-12, it is important to identify a solution to this issue that ensures calorie ranges remain appropriately targeted to students based on their ages—a point emphasized by the Institute of Medicine. USDA officials also said that they have been collecting information on implementation of the new lunch requirements throughout the year from many school districts and have heard about implementation challenges. However, according to USDA officials, official reporting by states indicates that a majority of districts have been able to comply with the new requirements. USDA also expressed concern that the findings in the testimony did not reflect a nationally representative sample of school districts. We continue to believe that our site visits to eight school districts and our interviews with eight SFA directors from across the country, state officials, and industry representatives enabled us to identify some of the challenges school districts are facing in implementing the new nutrition standards. Our final report will provide additional information and data to inform these issues.
Chairman Rokita and Members of the Subcommittee, this concludes my statement. I would be pleased to respond to questions you may have. For further questions on this testimony, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Jessica Botsford, Robert Campbell, Rachel Frisk, Kathy Larin, Jean McSween, Dan Meyer, and Zachary Sivo. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National School Lunch Program served 31.6 million children in fiscal year 2012, in part through $11.6 billion in federal support. The most recent reauthorization of the program, the Healthy, Hunger-Free Kids Act of 2010, required that nutrition standards for school lunches be updated. As a result, USDA issued final regulations aimed at providing lunches high in nutrients and low in calories that better meet the dietary needs of school children and required that they be implemented beginning in school year 2012-2013. The new rules provide detailed requirements for meal components—fruits, vegetables, grains, meats, and milk; update requirements for calories, sodium, and fats; and require that each student's lunch contain a fruit or vegetable. To provide information on challenges that school districts have faced, this testimony draws on work GAO conducted as part of its ongoing study of implementation of the changes. Specifically, GAO reviewed relevant federal laws, as well as USDA regulations, guidance, and studies; interviewed USDA officials and groups of food service officials and relevant industry representatives; and visited eight school districts. The districts varied by geographic location, size, and certain student and food services characteristics.

School districts faced several challenges implementing the new lunch requirements in school year 2012-2013, according to the eight districts GAO visited and food service and industry officials GAO interviewed from across the country, and the U.S. Department of Agriculture's (USDA) response to some of these challenges has been limited. For example, because USDA regulations restrict the amounts of meats and grains that can be served in school lunches each week, all eight districts GAO visited needed to modify or eliminate popular menu items. These changes sometimes led to negative student reactions. The meat and grain restrictions also led to smaller lunch entrees, making it difficult for some schools to meet minimum calorie requirements for lunches without adding items, such as gelatin, that generally do not improve the nutritional quality of lunches. In response to feedback from states and districts regarding operational challenges caused by the meat and grain restrictions, USDA lifted the limits temporarily, first for the remainder of school year 2012-2013 and then for school year 2013-2014. USDA officials said they did not see a problem making the temporary changes to help with implementation because the limits on meats and grains and the limits on the calories in lunches are somewhat redundant, as both address portion size. However, because the change was seen as temporary, the eight districts GAO visited made only marginal changes to their menus. Rather, several district food services officials, as well as relevant industry representatives, indicated the need for a permanent federal decision on these restrictions, which USDA has also acknowledged. The calorie range requirements for lunches also challenged some districts, particularly those with schools that include students from both grades 6-8 and 9-12. Because the required lunch calorie ranges for these two grade groups do not overlap, districts with such schools face difficulties planning menus and serving lunches that comply with both requirements.
For example, one food services official, whose district includes schools serving 7th through 12th graders, developed menus with calorie counts between the grades 6-8 maximum and the grades 9-12 minimum, leaving the lunches out of compliance with both sets of restrictions. Although USDA has acknowledged that menu planning in such schools can be challenging, USDA's current guidance does not provide these districts flexibility to assist their efforts to comply. Rather, guidance suggests that students from different grades be provided with different lunches, a solution that may be impractical in schools in which students of different grades share lunch periods and serving lines. Although the eight districts GAO visited expressed support for the improvements to the nutritional quality of school lunch, they reported additional challenges meeting the new requirements, such as student acceptance, food waste, costs, and participation. For example, USDA requires that meals include whole grain-rich products and certain vegetables, but most districts noted that obtaining student acceptance of foods like whole grain pasta and beans has been challenging. If students do not accept these items, the result may be increased food waste or decreased participation in the lunch program, which were concerns in most districts GAO visited. However, student acceptance of the changes will likely improve over time, as indicated by students' positive comments about healthy food and consumption of fruits and vegetables in most districts GAO visited.

GAO recommends that USDA permanently remove the meat and grain maximum requirements and allow flexibility to help districts comply with the calorie ranges for grades 6-8 and 9-12 lunches, which do not overlap. USDA generally agreed with GAO's recommendations.
As of December 31, 1996, a total of 281 FBOs based in 59 countries had banking operations in the United States that were subject to the procedural requirements of the FBO program. FBOs operate in the United States through a number of types of offices with differing powers and oversight. The most common of these types of entities are described in table 1. As shown in table 2, branches and agencies are the most common types of FBO banking offices in the United States, and they account for about 51 percent of the total foreign bank assets in the United States as of December 31, 1996. An individual FBO may have a variety of these types of offices operating in the United States, and each individual office may be supervised by a different federal or state regulator, with FRS having overall authority. Figure 1 shows the organizational structure of the U.S. operations of a hypothetical FBO. It also shows the U.S. supervisor for each office.

To address the objectives of this report, we reviewed examination manuals, relevant laws, and guidance issued by the Board of Governors of FRS (Federal Reserve Board). We interviewed officials from the Federal Reserve Board and the Federal Reserve Banks of Atlanta, Chicago, New York, and San Francisco. We also interviewed state bank supervisors from California, Florida, Illinois, and New York, and officials from FDIC, OCC, and the Institute of International Bankers—an association of foreign banking organizations with U.S. operations. The Federal Reserve Banks and state bank supervisors we interviewed are responsible for overseeing most FBO operations in the United States. In addition to our interviews, we developed a data collection instrument (DCI) to help us systematically collect information from each of the FBO products: the country reports, SOSAs, examinations, overall assessments of U.S. offices, and the comprehensive exam plans. The type of information we collected included basic financial information on the FBO and its U.S. operations, results of past examinations of U.S. operations, and information on the supervisory and financial system of the foreign country, among other things. This DCI was designed to help us compare the content of these reports and determine the extent of use of information from SOSAs and country reports in comprehensive exam plans. We used the DCI to review the FBO products from 18 different countries. We chose countries located in Europe, Asia, and North and South America to obtain variation in geographic location and levels of financial development. For each country, we chose two FBOs, if two existed, and reviewed their SOSAs, comprehensive exam plans, and overall U.S. assessment, if available. We chose the FBOs included in our judgmental sample to obtain variation in size, SOSA ranking, and types of offices they had operating in the United States. We obtained written comments on a draft of this report from the Federal Reserve Board. These comments are discussed at the end of this letter and are reprinted in appendix I. We did our work in Washington, D.C.; New York; California; Illinois; and Florida in accordance with generally accepted government auditing standards from September 1996 to January 1997.

The FBO Program was designed to provide the U.S. banking supervisory agencies with a collective mechanism for supervising the U.S. operations of FBOs in a highly coordinated, thorough, and efficient manner, according to the Federal Reserve Board.
FRS began to implement the FBO Program in March 1995, when it issued its initial guidance on the program. Federal Reserve Board officials told us that the program was scheduled to be implemented over a 3- to 5-year period, but that they hoped to have it fully operational within 3 years. The interagency program—which consists of a number of supervisory steps and assessments that each have their individual requirements regarding content, procedures, and timing—calls for the development and distribution of six supervisory products. The six supervisory products of the FBO Program are to provide information about the home countries of the FBOs, the FBOs themselves, and the FBOs’ operations in the United States. The six products are Review of Home Country Financial System, Review of Significant Home Country Accounting Policies and Practices, Strength-of-Support Assessment, Comprehensive Examination Schedule, Comprehensive Examination Plan, and Summary of Condition and Combined Rating. We refer to the first two products, which focus on a country’s financial system and accounting policies and practices, as “the country reports.” The contents of the six supervisory products are summarized in table 3. The Federal Reserve has assigned responsibility for preparing the FBO products to the various Reserve Banks that have offices of foreign banks in their districts. Responsibility for the products is generally assigned according to the location of the FBO offices in the United States. Given the preponderance of FBO offices in New York, the Federal Reserve Bank of New York was preparing the majority of products. Draft country and SOSA reports are to be circulated to other relevant U.S. supervisors for comment. Final versions of the reports are also to be provided to the relevant U.S. supervisors. Based on FRS guidance issued in August 1996, SOSA rankings are to be considered final only when they have been formally reviewed and approved by a committee headed by officials of the Federal Reserve Board’s international supervision function.

One of the principal goals of the SOSA is to identify FBOs that may pose risks to their U.S. operations or to U.S. financial markets due to financial, operational, or other concerns at the FBO as a whole. As table 3 shows, the SOSA utilizes a two-component assessment ranking system for financial and managerial support. Financial support is summarized by A to E rankings, with A representing the lowest level of supervisory concern and E the highest. An asterisk is to be placed beside the letter assessment on an as-needed basis to identify whether there are any factors that raise questions about the ability of the FBO to maintain adequate internal controls and compliance procedures at its U.S. offices, irrespective of the overall financial condition of the FBO. The SOSA—which is supported by the two country reports—is to provide information to the U.S. bank supervisory agencies that they can take into account in reaching decisions regarding the scope and frequency of examinations and whether other supervisory initiatives may be appropriate. The SOSA assessment serves to categorize all FBOs with U.S. banking operations by levels of supervisory concern, highlighting those whose U.S. operations are thought to warrant higher levels of supervisory attention. An FBO’s SOSA, along with other information, is to be taken into consideration in setting the examination plan for the FBO’s U.S. operations. For example, the U.S.
operations of FBOs whose assessments are marked by an asterisk, denoting potential internal controls or compliance risks, may receive examinations in which supervisors investigate those risks. The FBO’s SOSA analysis and ranking are to be considered in implementing supervisory follow-up action for the U.S. operations, although specific SOSA rankings are not linked to mandatory supervisory actions. According to procedural guidance for the program, an assessment of C or lower is expected to imply a level of concern that would subject the FBO’s U.S. offices to at least periodic monitoring of their net due to/due from positions. Any additional supervisory step, such as imposing an asset pledge or asset maintenance requirement, is to be implemented largely based on the condition and nature of the U.S. operations. If an FBO is accorded an assessment of D or lower, this is generally expected to indicate a higher level of supervisory concern, with some presumption of asset maintenance regardless of the condition of the FBO’s U.S. operations.

As part of the FBO Program, FRS is to maintain a database containing information on the financial system and on significant accounting policies and practices of each country with bank representation in the United States. The information in the database is to be provided by FRS and other supervisory agencies, and FRS is to make the information available to all of the supervisory agencies.

The comprehensive examination plan and the overall assessment of an FBO’s U.S. operations—that is, the Summary of Condition and the Combined Rating—are designed to help coordinate agencies’ efforts in supervising FBO offices in the United States. To ensure coordination of supervisory efforts and avoid duplication, the FBO Program calls for U.S. banking supervisory agencies to increase interagency communications regarding their examination plans, examination results, and any proposed supervisory follow-up actions. Also, to fulfill its responsibilities for the overall U.S. operations of individual FBOs, FRS is to prepare annually an overall assessment of the combined U.S. operations of each FBO, based largely on input from and discussions with the examining agencies. As noted in figure 2, the comprehensive examination plan is to cover all U.S. operations of an FBO with the exception of commercial banks, which are to be treated as domestic institutions for the purpose of examination planning during the initial implementation of the FBO Program. The FBO Program is to provide for the coordination of examination schedules through the development of an annual comprehensive examination plan for each FBO with banking offices licensed by more than one supervisory agency and/or with significant U.S. nonbanking activities. Other U.S. supervisors of FBO offices in the United States are to provide responsible Federal Reserve Banks with a copy of their preliminary examination schedules. FRS is to use these, in conjunction with the preliminary examination schedules of the Reserve Banks, to derive a draft comprehensive examination schedule for all U.S. operations of individual FBOs. This draft schedule, to be provided to all the supervisory agencies, is designed to permit each agency to coordinate its own schedule with those of other agencies. FRS is to provide the final comprehensive examination schedule to all the supervisory agencies. Likewise, the various supervisors are to provide individual examination plans to be used by FRS in drafting a comprehensive examination plan.
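The SOSA follow-up presumptions described earlier in this section lend themselves to a compact summary. The sketch below is ours, with hypothetical field names; as noted above, the guidance links rankings to presumptions rather than to mandatory actions, and any additional step depends on the condition and nature of the U.S. operations.

```python
# Sketch of the SOSA follow-up presumptions described earlier. The data
# structure and function are illustrative only; program guidance links
# rankings to presumptions, not mandatory actions.

from dataclasses import dataclass

@dataclass
class Sosa:
    ranking: str            # "A" (lowest supervisory concern) to "E" (highest)
    asterisk: bool = False  # internal-control/compliance questions flagged

def presumed_follow_up(sosa):
    steps = []
    if sosa.ranking >= "C":  # "C" or lower on the A-to-E scale
        steps.append("periodic monitoring of net due to/due from positions")
    if sosa.ranking >= "D":  # higher concern, regardless of U.S. condition
        steps.append("presumption of asset maintenance")
    if sosa.asterisk:
        steps.append("examination focus on internal controls and compliance")
    return steps

print(presumed_follow_up(Sosa("D", asterisk=True)))
# ['periodic monitoring of net due to/due from positions',
#  'presumption of asset maintenance',
#  'examination focus on internal controls and compliance']
```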
According to FRS officials, FBOs that operate in the United States through multiple offices often will have all offices examined using the same “as of” financial statement date; this will provide the supervisory agencies with increased information on the interrelationship among the various offices and can enhance the examination of individual offices and the FBO’s overall operations. The U.S. supervisory agencies have committed to advising other agencies’ supervising offices of the same FBO of any critical examination findings prior to the exit meeting with FBO officials for that examination. The overall assessment of an FBO’s combined U.S. operations is intended to provide the FBO and the U.S. supervisory agencies with a view of the overall condition of the FBO’s U.S. operations and help put into context the strengths and weaknesses of individual offices. The assessment is to be prepared by FRS for all U.S. offices supervised by more than one agency. The assessment is to address all risk factors, including (1) all elements of the ROCA rating system, (2) the quality of risk management oversight employed by all levels of management in the FBO’s U.S. operations, and (3) the examinations of all offices of the FBO conducted during the year. The system for rating the FBO’s combined U.S. operations is to result in the assignment of a single-component rating between 1 and 5, with 1 being the highest. The rating system contains language describing the level of supervisory concern and required supervisory attention. See table 4 for a description of the ratings. This composite assessment is intended to apprise the various U.S. supervisory authorities of the overall condition of the U.S. offices of individual FBOs. These agencies can then factor this information and that in the Summary of Condition into their supervision of the U.S. offices under their jurisdiction.

Banking supervisors have made progress in implementing the FBO Program. They have developed and distributed procedural requirements and guidance. As of December 31, 1996, about 43 percent of the SOSA reports and their related home country reports had been finalized, and supervisors were just beginning to use the information in these reports in developing comprehensive examination plans. Supervisors identified some broad benefits of the program—particularly increased communication and cooperation among supervisors and improved access to information about FBOs and their home countries. At the same time, supervisors told us that determining how to use this information to improve their supervision was clearly the biggest challenge they face as they move forward. In addition, we identified a number of weaknesses in SOSA and country reports that could limit the program’s effectiveness. These included inconsistent, incomplete, or outdated information, as well as SOSA rankings that did not appear to be justified by data in the report.

In late March 1995, the Federal Reserve Board distributed to the Reserve Banks initial guidance for implementing the FBO Supervision Program. Additional guidance was issued as implementation progressed from March 1995 to August 1996. As of December 31, 1996, SOSA reports and accompanying home country reports were completed for 120 (about 43 percent) of the 281 FBOs subject to the requirements of the program. We found only limited use of country and SOSA report information in the comprehensive examination plans that we reviewed.
However, at the time of our review, supervisors had just begun to incorporate SOSA and country report information into the supervisory process. Although the FBO Program has not been fully implemented, FRS staff and other banking supervisors told us of a number of benefits of the program—most importantly, improved communication and cooperation among supervisors and bank management, both domestic and foreign, and improved access to information about FBOs and their home countries. Regulators reported many instances of increased coordination and cooperation among federal and state supervisors. Supervisory officials told us that implementing the FBO Program has, in some cases, required supervisors from different agencies to coordinate with each other—whereas before the program, they said coordination was more ad hoc. For example, because an FBO may have subsidiaries or offices in several locations across the United States, the development of a coordinated examination strategy for a given FBO has required supervisors to work cooperatively, sharing information about the subsidiaries or offices they individually supervise. This is important because problems identified at a particular office could manifest themselves at other offices of an FBO. This improved coordination and communication is intended to result in improved supervision of the U.S. operations of FBOs.

FRS officials also said preparing home country reports and SOSAs had helped them develop valuable relationships with foreign regulators and foreign central banks. These officials said such preparation has helped them supervise the U.S. operations of foreign banks. They also said the relationships they have developed with foreign regulators have helped them obtain better information on how U.S. banks are doing abroad. Finally, officials at a Federal Reserve Bank told us that providing foreign bank management with a summary of the condition of the FBO’s U.S. operations and a combined rating has helped them communicate more effectively with foreign bank officials and has resulted in quicker and better compliance by the foreign banks. These summaries are to be sent directly to the foreign bank’s head office and are to highlight the issues that need the most attention.

Several supervisors stated that the program has been beneficial in centralizing information about an FBO and its home country. For example, staff at one Federal Reserve Bank said the FBO Program helps examiners by providing a single contact for information about an FBO. The SOSAs and country reports also have provided a benchmark of information on FBOs and home countries—so that all supervisors would have access to the same information about a particular FBO or country. At another Reserve Bank, staff said the reports have also provided a ready and complete source of information for U.S. officials in their meetings with foreign banks and officials from other countries. Staff at another Federal Reserve Bank stated that the FBO Program has given “more form” to their system of supervision. For example, this Reserve Bank has been monitoring the FBOs’ conditions in a particular country since 1992. However, the staff were not sure whether other supervisors were doing similar monitoring, and state supervisors told us they did not have adequate resources for such monitoring. Reserve Bank officials said the FBO program reduced the likelihood that problems would fall through the cracks in the supervisory system.
According to supervisory officials, the new program’s products and information have also helped supervisors get information about FBOs and countries that is not commonly known. FRS has had long-standing relationships with most of the central banks and bank regulators of the major industrialized nations. Particularly for the G-10 countries, officials told us that information sharing has occurred in the past, and that their accounting standards and practices are generally similar to those in the United States. FRS officials said such is not the case for other countries, however, particularly many of the countries with developing financial and supervisory systems that have banking presences in the United States. For this reason, some supervisory officials said that the development of SOSAs and home country reports—including reports on accounting and auditing standards and practices—has been particularly useful.

An important goal of the FBO Program is to enhance supervision by integrating the information in SOSAs and country reports into the supervisory and enforcement processes. Officials told us that this phase of the program was just starting at the time of our review. However, based on our review of completed SOSA and country reports and our interviews with supervisory officials and staff, we identified a number of concerns and weaknesses that could limit the program’s effectiveness in improving supervisory and enforcement processes in oversight of U.S. operations of FBOs. While supervisors and supervisory staff recognized a variety of benefits of the FBO Program, as discussed earlier, they also expressed concern about the usefulness of information in the SOSA and country reports and about how this information could be integrated into the examination planning process. Further, they expressed concerns about how to use this information to help them make enforcement decisions.

Regarding examination planning, some examiners told us that the SOSAs were useful mainly for general background information, while others said the information was not particularly useful. Officials from one Federal Reserve Bank told us they had been experimenting with a new comprehensive examination plan format that incorporated more information from the SOSA and country reports, such as key strengths and weaknesses related to the FBO’s lines of business. Officials from another Reserve Bank said they were in the process of developing a strategy for integrating the information from SOSA and country reports into the exam planning process. They said they were also in the process of considering how the SOSAs could be improved to make them more useful. They said some possible improvements might include making the reports shorter and more user friendly for examiners; updating the reports just before the beginning of an exam cycle; and focusing the reports more on risk—for example, analyzing the impact the FBO’s overall business strategy might have on its U.S. operations. Many supervisors told us that determining how to use information from the SOSA and country reports to improve their oversight was clearly the biggest challenge they face as they move forward. In order to help meet this challenge, Federal Reserve Board officials told us they commenced development of an FBO training seminar in late 1996 that will emphasize that the FBO Program is a process directed towards ensuring an appropriate supervisory strategy for the U.S. operations of each FBO.
Among other things, they said the seminar will emphasize creating a greater linkage between the SOSAs and the comprehensive examination planning process. With regard to enforcement decisionmaking, some supervisors told us that they would like to be able to use the SOSAs to some extent to adjust their supervisory requirements, such as capital equivalency deposits. For this to be possible, SOSAs must be accurate, consistent, and up-to-date. However, officials told us that they have concerns about whether this will be possible in the future because of the difficulties involved in obtaining consistent information from FBOs and home country regulators. As we reviewed the 36 SOSAs, we found some examples of inconsistent information in individual country reports as well as examples of inconsistency between SOSAs and their associated country reports, as illustrated by the following:

A discussion of financial disclosure practices in a certain country report mentioned that nonperforming loan (NPL) ratios provide a limited indication of the country's problem loan situation because public disclosure of substandard loans was not required. In addition, the report said the monetary authority's manipulation of accounting practices to ease pressure on bank performance undermines reported financial figures and renders year-to-year analysis difficult. Yet the final SOSA report for an FBO in that country stated that capital was adequate—with ratios slightly exceeding Bank for International Settlements (BIS) minimum standards—without providing a clear, explicit qualification of the statement.

One country report pointed out that external auditors had not yet attained the status or degree of independence of their counterparts in the United States. The report said that qualified audit reports were virtually unheard of in this country and warned that the lack of independence could hinder the reliability of audited financial statements. However, a SOSA for one of the FBOs in the country said that the financial statements were deemed reliable because an audit firm had rendered an unqualified opinion.

Another report on home country supervision stated that banking supervision was considered relatively strong. Yet the same report noted that reporting of certain key data—such as NPLs, hidden reserves, off-balance-sheet items, and risk-based capital ratios—was not a supervisory requirement in that country.

Based on our review of Federal Reserve Board guidance and discussions with staff at the Federal Reserve Board, Reserve Banks, OCC, and selected state banking departments, we developed a basic list of information that most supervisors would expect to find in the SOSAs. We then reviewed 36 SOSAs and their corresponding country reports to determine whether this information was provided. In reviewing SOSA and country reports, we expected some variation in the types of information provided because of differences in (1) the availability of information and the financial and supervisory systems in various countries and (2) the weight that supervisors would place on different types of information. Even allowing for such variation, we found that nearly all of the SOSAs failed to provide all of the information on our basic list. Moreover, many seemed incomplete in ways that would reduce the reliability of the reports for supervisory use. For example, some of the SOSAs lacked information central to the purpose of the reports, such as statements of the likelihood of home country support.
Important details that we found lacking in some SOSA and country reports included the date of the financial data; whether the data were consolidated and, if so, at what level; the date the reports were written and finalized; and whether the risk-based capital standards referred to the BIS standards and, if not, how the capital ratios related to the BIS standards. Our findings were consistent with statements of some supervisory officials we interviewed who expressed concern about the usefulness of the reports to supervisors. For example, an official from one banking supervisor told us reports lacked important detail. The same official also said that the reports lacked candor and did not always address controversial issues. A staff member of a federal supervisor also told us that relevant information for planning examinations of some U.S. operations of FBOs could be almost wholly lacking in the SOSA report. This staff member told us that the country reports and SOSAs for the banks he supervised were useless in preparing examination plans because those reports focused on credit and asset quality, while the primary business of the banks in this country is trading in financial products.

Some SOSAs and country reports contained outdated information on the FBO's financial condition or the economic or political condition of the FBO's home country. In our review, we found that a number of products completed in 1996 relied on December 1994 or March 1995 data. In addition, some products presented discussions of outdated political or economic conditions. In discussing these products and their usefulness, supervisory officials we interviewed often agreed that outdated information is a problem. Also, staff at a Federal Reserve Bank identified as a problem the time lag between the analysis of the information and the receipt of the finished product. To help correct this problem, the Federal Reserve Board is pilot testing a program, called FBO Desktop, with the Federal Reserve Bank of San Francisco. This program is designed to put all of the FBO Program products on-line, with the goal of making it more efficient to share information and review FBO products. An official from the Federal Reserve Board told us the pilot was nearly completed as of March 1997 and would soon be rolled out to the other Federal Reserve Banks and then to the other state and federal supervisors. However, Federal Reserve Board officials pointed out that, even though this system is expected to help improve timeliness, timeliness will continue to be impaired to some degree because FBOs are required to file full financial statements with FRS only annually, English translations of such filings are often not available until mid-year, and disclosure problems may continue to exist.

During our review of SOSA reports, we found some cases where the SOSA rankings did not appear to be justified by the information in the SOSA reports. For example, the program guidance states that an "A" SOSA ranking would indicate an FBO with a financial profile that is regarded as strong, with superior risk-based capital ratios, and that is comprehensively supervised, among other things.
Yet, several FBOs that received "A" SOSA rankings were based in countries in which (1) banks were not required to disclose asset quality in reports to supervisors and (2) the reports said that the efficacy of supervision was questionable and that the supervisory system lacked an effective early warning system to identify financially weak institutions. Many of the supervisors we interviewed told us that they expect the assignment of all SOSA rankings to eventually be consistent with the criteria in program guidance. However, they said achieving this level of consistency may be difficult because of the differences in financial and supervisory systems and in the types of information available among countries.

Banking supervisors have made progress in implementing the FBO Program. Supervisors have identified a number of benefits of the program—most importantly, improved communication and cooperation among supervisors and improved access to information about FBOs and their home countries. At the same time, supervisors have just begun to use the information in SOSA and country reports to improve supervision and enforcement, and some supervisors expressed skepticism about how useful the information from SOSA and country reports will be in improving FBO oversight. The various Federal Reserve Banks are developing different formats and strategies for integrating the information into the supervisory process. In addition, we identified a number of weaknesses in SOSA and country reports that could limit the program's effectiveness, including inconsistent, incomplete, or outdated information, as well as SOSA rankings that did not appear to be justified by information in the report. SOSA rankings that are unsupported or inconsistent with the ranking system criteria and report information that is inconsistent, incomplete, and out-of-date are obstacles to achieving a principal goal of the SOSA—to identify FBOs that may pose risks to their U.S. operations or to U.S. financial markets. Supervisory use of unreliable SOSA rankings could lead to inefficient levels and types of monitoring, to unequal treatment of FBOs' U.S. operations in enforcement actions, and potentially to ineffective oversight. The identified weaknesses could also cause supervisors to doubt the credibility of SOSA rankings and reports and thus limit supervisory use of the information resources the FBO Program is designed to provide.

As FRS continues its implementation of the FBO Program, we recommend that the Board of Governors of the Federal Reserve System (1) identify best practices for using the information in the SOSA and country reports to improve supervision and enforcement, and disseminate these best practices to all Federal Reserve Banks; and (2) monitor the report process to help ensure that SOSA and country reports are consistent, complete, and timely and that the SOSA rankings are consistent with the ranking system criteria.

The Federal Reserve Board provided written comments on a draft of this report; these comments and our responses are reprinted in appendix I. It also provided technical comments, which we incorporated where appropriate. The Federal Reserve Board generally agreed with the conclusions reached regarding the need for certain improvements in the content and use of the SOSA reports. In a subsequent conversation, a senior Federal Reserve Board official stated that the Federal Reserve Board had no objection to the recommendations.
In its written comments, the Federal Reserve Board also noted that its work going forward will be concentrated largely on refining certain areas of the FBO Program to enhance its overall effectiveness, particularly on integrating the SOSA into examination planning and ensuring that appropriate linkages are established among all products in the program to promote the program's objectives. The Federal Reserve Board noted four steps that are being taken to help achieve this program improvement, which we have incorporated into the report. The Federal Reserve Board also observed that our efforts were directed principally toward a review of the SOSAs and emphasized that the SOSA is one of several tools in the FBO Program designed to assist bank supervisors in meeting the objectives of the program. While we did review a judgmental sample of finalized SOSAs and discuss the weaknesses we found, our efforts were not principally directed toward this review. The report describes each of the products of the FBO Program and their interrelationships, and it discusses the benefits of all parts of the program realized to date.

We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs and the House Committee on Banking and Financial Services; the Chairman of the Federal Reserve Board; the Chairman of the Federal Deposit Insurance Corporation; the Comptroller of the Currency; and other interested parties. We will also make copies available to others on request. Major contributors to this report are listed in appendix II. Please contact me at (202) 512-8678 if you or your staff have any questions.

The following are GAO's comments on the Federal Reserve Board's April 28, 1997, letter.

1. We added a footnote on page 19 that states that there is no prescribed list of required information for SOSAs.

2. As we stated on page 19, we expect some variation in the information provided in SOSA reports, and as these reports are updated annually for any material changes, we expect this variation will continue. However, this variation is not necessarily a problem provided that important information—such as the likelihood of home country support or other details necessary for accurate analysis—is included in the SOSA reports.

3. We added a description of FRS' training seminar on page 18.

4. We added information on the supervisory implications section of the SOSA report in footnote 7 on page 9.

5. We added information on the procedures to review SOSAs on page 9. However, in the case of inconsistency between the country report and the final SOSA that we described on page 19, the SOSA had been approved through these new procedures, and the problem had not been corrected.

6. We added information on the likelihood that some problems with the timeliness of information will continue on page 21.

Susan S. Westin, Assistant Director
Kristi A. Peterson, Evaluator-in-Charge
Charles G. Kilian, Senior Evaluator
Desiree W. Whipple, Communications Analyst
Pursuant to a congressional request, GAO provided information on the oversight of the U.S. operations of foreign banking organizations (FBO), focusing on the: (1) FBO program; and (2) banking supervisors' progress in implementing this program. GAO noted that: (1) the FBO Program focuses on integrating into supervisory procedures a common understanding of a given FBO in its entirety, including policies and practices in the FBO's home country as well as the overall condition of the FBO's combined U.S. operations; (2) the program calls for coordinated development and common use of five new products; (3) GAO refers to two of these as "the country reports"; (4) one country report is to provide information about the financial system and the supervisory and governmental policies in the FBO's home country, and the other is to provide information about significant accounting policies and practices in the home country; (5) a third product, the Strength-of-Support Assessment (SOSA), which is to be based on the country reports and other financial data, is to provide analysis and a ranking to reflect the U.S. supervisors' judgment about the FBO's ability to provide its U.S. operations necessary financial and managerial support; (6) a fourth product, the Summary of Condition and Combined Rating, is designed to provide FBO management and U.S. supervisors with an overall assessment of the FBO's U.S. operations; (7) the last new supervisory product, an annual comprehensive examination plan, is intended to better coordinate examinations of U.S. offices of FBOs with multiple U.S. banking operations and/or significant U.S. nonbanking operations; (8) in GAO's review of the FBO Program, GAO found that banking supervisors had made progress in implementing the program and had begun to realize benefits from it; (9) however, GAO also identified areas where improvements could be made; (10) supervisors identified some broad benefits of the program, particularly increased communication and cooperation among supervisors and improved access to information about FBOs and their home countries; (11) at the same time, comments of supervisory officials and staff indicated some skepticism about how useful the information from the SOSA reports will be in improving FBO supervision; (12) however, they also said that the various Federal Reserve Banks are developing different formats and strategies for integrating the information into the supervisory process; and (13) in addition, GAO identified a number of weaknesses in SOSA and country reports that could limit the program's effectiveness, including inconsistent, incomplete, or outdated information.
OBO is responsible for the acquisition, design, construction, maintenance, utilization, and sale of U.S. government diplomatic property abroad. Through its Capital Security Construction Program, administered by OBO, State replaces and constructs diplomatic facilities to provide U.S. embassies and consulates with safe, secure, functional, and modern buildings. According to State, from fiscal years 2009 through 2014, State awarded contracts and completed construction of nine new embassy or new consulate compounds worldwide. In addition, during this period State completed 26 other embassy or new consulate compounds and also awarded contracts for 25 new embassy or new consulate compounds that are in design or construction. OBO is responsible for ensuring that such diplomatic compound construction meets specific building codes and standards. In cases where overseas posts, other State bureaus, or U.S. agencies undertake diplomatic construction abroad, OBO provides direction and guidance, to include reviewing designs, issuing building permits, and conducting inspections to ensure its standards are met.

DS is responsible for, among other things, establishing and operating security and protective procedures at posts, developing and implementing posts' physical security programs, and chairing the interagency process that sets security standards. Accordingly, DS is responsible for ensuring that new embassy construction meets security standards. In addition, at posts, DS regional security officers are responsible for protecting personnel and property, documenting threats and facility vulnerabilities, and identifying ways to mitigate those vulnerabilities. DS can also use its Worldwide Protective Services contract to address such vulnerabilities by establishing contractor-provided personal protection, guard, and support services at posts. In the case of Afghanistan, DS has used this contract to undertake some security-related construction, such as constructing physical security walls and guard housing. However, such construction is contingent upon relevant OBO design reviews and permitting to ensure that building codes are met.

SCA is responsible for coordinating foreign policy related to countries in the region, including Afghanistan. In that capacity, SCA guides the operation of U.S. diplomatic missions—embassies and consulates, including Kabul—within those countries. SCA also serves as the headquarters liaison, on behalf of its assigned posts, with other State bureaus, such as OBO and DS.

From 2002 through 2009, State took several actions to expand the U.S. embassy compound in Kabul. Initially, OBO refurbished an existing office building, built in the 1960s. OBO also constructed a new chancery office building, staff apartments, and support facilities. Additionally, OBO constructed temporary offices and housing for the U.S. Agency for International Development (USAID). As staffing increases outpaced available space, the embassy acquired hundreds of modified shipping containers for temporary housing and also compressed office space by adding more desks in the new chancery and the existing office building. In fiscal years 2009 and 2010, State awarded two contracts originally worth $625.4 million in total to meet growing facility requirements at the U.S. embassy in Kabul.
The first contract, awarded to Contractor 1 in September 2009 for $209.4 million, was for the design and construction of temporary and permanent structures, to include temporary offices and housing, office annex A, apartment building 1, a cafeteria and recreation center, perimeter security and compound access facilities, a warehouse addition, and a utility building. The second contract, awarded to Contractor 2 in September 2010 for $416 million, was for the design and construction of office annex B, apartment buildings 2 and 3, the expansion of existing apartment building 4, compound access and perimeter security facilities, and parking facilities, to include a vehicle maintenance facility.

State's plans called for sequencing construction under the two contracts and demolishing older temporary facilities to make space available for new facilities. State's plans also entailed acquiring the Afghan Ministry of Public Health site adjacent to the compound to build parking facilities for approximately 400 embassy vehicles. In September 2011, after the U.S. and Afghan governments did not reach agreement to transfer that site, State had to remove the parking and vehicle maintenance facilities from the project.

In September 2011, State partially terminated elements of the first contract—specifically the permanent facilities, including office annex A and apartment building 1—for the convenience of the U.S. government, in part due to concerns about contractor performance and schedule delays. Contractor 1 completed the temporary offices and housing units, but in September 2011, State transferred the contract requirements for the permanent facilities not begun by Contractor 1 to Contractor 2's contract.

The U.S. embassy compound in Kabul comprises the east and west compounds separated by Great Massoud Road, as well as a 6.17-acre site (or "6.17 site") connected to the east compound. Our July 2014 report provides further information on the construction phasing of the current project. Once the current construction is completed, the Kabul embassy's permanent facilities—both older and newly constructed office and apartment buildings—will contain 1,487 desks and 819 beds. Those totals do not include the desks or beds in temporary offices and housing facilities, which we discuss later in the report. Figure 1 depicts the planned configuration of the compound upon completion of current construction.

State has also acquired other real property off-compound. Major off-compound properties include Camp Sullivan, a 20.9-acre property located near Kabul International Airport; Camp Seitz, a 7-acre facility southwest of the embassy that serves as housing and office space for security contractors; and Camp Eggers, a 16.8-acre former Department of Defense (DOD) facility southwest of the embassy planned to serve as a contractor camp. The relative locations of some of these properties are shown in figure 2. In addition, State is upgrading Camp Alvarado, a property located near the airport that serves as the main aviation hub for the embassy's air transport and counternarcotics operations.

State's past and planned capital construction investments in Kabul from 2002 through March 2015 total $2.17 billion in project funding, which includes awarded construction contracts and other costs State incurs that are not part of those contracts. Examples of other State project costs include federal project supervision, construction security, security equipment, and project contingencies. Figure 3 shows these investments.
In the case of the current Kabul embassy expansion, as of March 2015, State has allocated $1.11 billion to cover the 2009 and 2010 contract costs as well as State's project costs outside the two contracts. The original cost of the 2009 and 2010 construction contracts was $625.4 million. When we discuss increased costs in this report, we are referring to those costs agreed to between State and its construction contractors for the 2009 and 2010 contracts. The costs for the 2009 and 2010 contracts are now almost 27 percent higher than the original contract costs, and the completed project will be delivered just over 3 years later than originally planned. State did not follow its cost containment and risk mitigation procedures, a fact that likely contributed, in part, to the increased costs and extended schedules.

As of March 2015, the 2009 and 2010 contracts have a combined total cost of $792.9 million, which represents an increase of $167.5 million, or almost 27 percent, since contract award. At award, the 2009 and 2010 contracts were worth $209.4 million and $416 million, respectively, for a total of $625.4 million. In September 2011, State partially terminated the 2009 contract for the convenience of the government due, in part, to concerns about performance and schedule delays and reduced the contract value by $121.4 million. Two weeks later, State issued the first modification of the 2010 contract, shifting the permanent facilities from the 2009 contract and modifying some of the planned work, adding $222.5 million to that contract. Subsequent contract modifications added almost $66.5 million to the total contract value, bringing the total value of the 2010 contract to $705.5 million. The additional work included reconfiguring the existing office building's second floor, upgrading the security measures on temporary housing, upgrading embassy perimeter walls, improving life safety measures on apartments 2 and 3, and shipping some building materials by air to avoid problematic ground shipments through Pakistan. See table 1 for a summary of cost increases and decreases for the two contracts.

As of March 2015, OBO and Contractor 2 were still negotiating the value of several contract changes that will likely result in increased costs. The changes being discussed include but are not limited to the following matters: the contractor's assertion that site areas were not available to start construction as planned; costs to address design issues related to the 2009 permanent facilities; changes to enhance some physical security measures; upgrades to the compound's electrical distribution systems; modifications to alter the height of apartments 2 and 3; and the addition of new work inside the 2006 chancery.

As of March 2015, State has allocated $1.11 billion to the project to cover the 2009 and 2010 contract costs as well as State's project costs outside the two contracts. This figure represents originally allocated funding plus subsequent transfers from other State accounts. For example, in September 2014, State transferred, with congressional support, $40 million in funding to cover costs due to shipping disruptions and anticipated construction contingency shortfalls. Additionally, State has notified Congress of its intent to use $25 million of funds State had transferred in 2014 to cover project supervision and further replenish the project's contingency funding. State reported that without the additional $25 million, it would be forced to stop the project in mid-2015 because of a lack of funds.
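The percentage cited above can be checked directly from the contract figures in this report. The short calculation below is an illustrative sketch using only the rounded dollar amounts stated here; it reproduces the $167.5 million increase and the "almost 27 percent" figure.

```python
# Check of the contract cost figures cited above
# (all values in millions of dollars, as rounded in this report).
original_2009 = 209.4
original_2010 = 416.0
original_total = original_2009 + original_2010      # 625.4

combined_total_2015 = 792.9                          # as of March 2015
increase = combined_total_2015 - original_total      # 167.5
pct_increase = increase / original_total * 100       # about 26.8 percent

print(f"Original total:   {original_total:.1f}")
print(f"Increase:         {increase:.1f}")
print(f"Percent increase: {pct_increase:.1f}%")      # "almost 27 percent"
```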
According to State documents, State had originally planned to complete the entire Kabul construction project by summer 2014. State now estimates completion by fall 2017, although the 2010 contract has not yet been revised to reflect that date. Table 2 shows current estimated delivery dates for key buildings, compared with the estimated delivery dates in OBO's original plan. Figures 4 through 6 show ongoing construction of office annexes A and B as well as apartment building 1 as of December 2014. Construction of apartments 2 and 3 has not yet begun.

The U.S. Office of Management and Budget (OMB) and State both require cost containment studies for certain construction projects. Also, State requires OBO to assess risks posed to its construction projects. However, State did not properly follow these cost containment and risk assessment policies, a fact that likely contributed to increased costs and extended schedules in the 2009 and 2010 contracts.

OMB policy requires federal agencies to use value engineering (referred to as cost containment in this report) as a management tool to ensure realistic budgets, control capital and operating costs, and improve and maintain acceptable quality in program and acquisition functions (e.g., in a construction project). The policy indicates that the value of cost containment is likely to be greatest when applied to the highest dollar value programs during the feasibility, planning, design, and other early phases of development and can also help to reduce overall risk. State implements this policy by requiring OBO to conduct two cost containment studies for each project costing more than $20 million: one study during the planning of the project and one study no later than the design review. OBO guidance requires the study team leader to formally record the disposition of cost containment study recommendations, identifying which will be implemented and providing a defensible rationale for rejecting other recommendations.

In addition, OBO's standard operating procedures require risk assessment studies to reduce risks through identification and assessment, mitigation, and contingency planning. The procedures state that risk assessment is a necessary and prudent management task. Risk assessments should be conducted (1) early in the project planning phase, as input and guidance for initial planning; (2) again when developing budget estimates; and (3) again when developing cost estimates in support of negotiating and awarding a contract. Risks are also to be tracked during project implementation. After a risk assessment has been conducted, results should be conveyed to project stakeholders through a report and, if needed, a risk mitigation plan that outlines how the organization plans to take action to prevent risks from occurring, or how it will respond to identified risks should they occur.

State awarded the 2009 and 2010 contracts for construction in Kabul without following its procedures for cost containment studies and risk assessments. For the 2009 and 2010 contracts combined, State should have conducted four cost containment studies and six risk assessments. However, for the 2009 contract, State confirmed that it did not conduct either type of assessment: although the contract's $209.4 million value would have required two separate cost containment studies, none was conducted, no risk assessments were performed, and no risk mitigation plan was developed.
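The required counts above follow from the policies just described: two cost containment studies for each project over $20 million and risk assessments at three points per project. A minimal sketch of that tally, illustrative only and using the thresholds as stated in this report:

```python
# Tally of assessments State's policies would have required for the two
# Kabul contracts (contract values in millions of dollars).
contracts = {"2009 contract": 209.4, "2010 contract": 416.0}

STUDY_THRESHOLD = 20.0             # cost containment studies required above $20M
STUDIES_PER_PROJECT = 2            # one during planning, one by design review
RISK_ASSESSMENTS_PER_PROJECT = 3   # planning, budget estimate, contract award

required_studies = sum(STUDIES_PER_PROJECT
                       for cost in contracts.values() if cost > STUDY_THRESHOLD)
required_risk = RISK_ASSESSMENTS_PER_PROJECT * len(contracts)

print(required_studies, required_risk)  # 4 cost containment studies, 6 risk assessments
```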
State completed only one of the required cost containment studies for the second contract and combined it with a risk assessment. The study was conducted by an outside firm in March 2010 on the conceptual design for the 2010 contract, which was planned for award in September 2010; that is, the study occurred while OBO was drafting State's request for proposals for the 2010 contract. The objective of the study was to evaluate the project from the perspective of performance, cost, schedule, and risk and to identify viable alternative concepts to enhance the project. OBO's consultant for this effort focused primarily on phasing construction, planning risk responses, and improving the project's long-term flexibility. Because of the accelerated nature of the project, the study did not focus on the programmatic elements (e.g., staffing, floor plan, and site layout).

DS officials were not sufficiently involved in the cost containment study, contrary to established policy. OBO policy on cost containment requires the OBO value engineering manager and the project team leader to request and coordinate review of the consultant's recommendations with technical team members and all interested offices to determine whether to accept, reject, or modify those recommendations. DS is cited in the policy as an interested office. However, according to attendee lists, no one from DS participated in the meetings related to the study, and DS officials we spoke with indicated they were not aware of the study and its security recommendations.

The cost containment study made 31 recommendations to State to streamline construction and improve the safety and efficiency of the buildings. State provided us with a table summarizing the cost containment alternatives and indicating that State accepted 18, rejected 12, and partially accepted one. According to State, seven of the accepted recommendations were included in the request for proposals then being drafted. We did not assess the implementation of the recommendations. State's policy states that cost containment disposition memos should include a defensible rationale as to why a recommendation was rejected. However, the explanations for rejecting the 12 recommendations were brief, and for 11 of the 12, State provided no documentation for the rejections other than a preliminary and a final summary paper. Further, it was unclear from State's documentation what construction and operating life-cycle cost savings OBO expected to achieve in relation to the consultant's estimates and recommendations.

The risk assessment identified over 30 risks to the project. In particular, it identified the interface between the 2009 and 2010 contracts as a major source of risk. Specifically, the study raised concerns about how State could best coordinate the 2010 contract with the 2009 contract without sufficient information about Contractor 1's design plans, which were still under development. The study noted that effects could be severe for apartment buildings 2 and 3 in the 2010 contract if progress on the 2009 contract was delayed. Other major risks included the following: The 2009 contract might not provide adequate site utilities for the facilities in the 2010 contract, as the 2009 design was still under development. Site areas that State planned to acquire—such as the adjacent Afghan Ministries of Public Health and Defense sites—might not be available in time, or at all, to enable construction to proceed as planned.
There might be insufficient space for two contractors to stage construction concurrently. And there were ongoing physical security threats in a conflict environment.

The consultant recommended key risk mitigation actions, which State did not act on, that aligned with the recommendations for cost containment strategies related to the two contracts: facilitate greater project coordination between the 2009 contract and the planned 2010 contract (the consultant recognized that implementing this recommendation might require delaying the 2010 contract award to 2011), and divide the 2010 contract into two separate contracts to effectively defer award of apartments 2 and 3, so that if the 2009 contract was delayed, the 2010 contract would not also be delayed due to the tight sequencing of construction.

One State project official indicated that, given concerns about security in Kabul and pressure to get permanent, hardened facilities built as soon as possible, State was not going to act on any recommendation that would delay getting the contracts awarded and the facilities built. Further, a senior State management official acknowledged that State did not fully follow its cost and risk policies, in part because of the urgency of the embassy's facility needs, the security environment, and challenges in supporting the surge in embassy staffing that was occurring. According to this official, had the cost containment and risk assessment study recommendations been more fully considered by senior management, there might have been a decision to delay award of the 2010 contract, which would have slowed efforts to provide facilities as quickly as possible. He also noted that budget pressures existed to get funding committed, contracts awarded, and projects started. He stated that OBO as an organization did the best it could, given the challenging circumstances. As noted in our July 2014 report, several risks eventually materialized, such as the loss of the Afghan Ministry of Public Health site and insufficient space that interfered with the sequencing of construction. These factors contributed to increased construction costs and an extended schedule.

Since 2002, State has spent over $100 million to construct temporary facilities on-compound in Kabul, and the post will likely continue to use some of those temporary facilities. Prior to building additional temporary facilities on the east compound, State informed Congress of its concerns about threats posed by incoming weapons fire and indicated that overhead protection was required to protect staff in existing temporary facilities on-compound. However, while State has security standards for its facilities, it does not have security standards specifically tailored to temporary facility construction. As a result, State inconsistently applied alternative security measures that were insufficient and differed between temporary offices and housing. State subsequently took corrective action through contract modifications that increased the cost and extended the schedule of the overall construction project.

Since 2002, State has spent over $100 million to construct temporary facilities on the embassy compound to accommodate evolving staffing needs and provide temporary office and housing space as permanent facilities are built. As of February 2015, temporary facilities on the embassy compound provided nearly 1,100 desks and 760 beds.
OBO building guidance from 2009 states that "temporary facilities" are facilities that will be occupied for no more than 5 years or until a permanent building is constructed, whichever is sooner. The guidance also indicates that temporary facilities include, but are not limited to, containerized housing/office units, modular units, modified shipping containers, and trailers. Most of the embassy's temporary facilities are located on the east compound. Some of the earliest temporary facilities were built to provide office and housing space for USAID and are more than 10 years old. More recent temporary office and housing facilities were built in 2011—as part of the current embassy construction—to accommodate the staffing surge that began in 2009 and to provide temporary space while permanent facilities were constructed. Those temporary facilities were built under the 2009 contract. Additionally, in 2013, State constructed additional temporary housing—built by Contractor 2—on the 6.17 site. Figure 7 shows some of the temporary facilities that the post has used to meet interim space needs.

State intends to demolish the older USAID temporary offices and some temporary housing built in 2011 on the east compound to build permanent apartment buildings 2 and 3. According to OBO officials, State has not finalized which temporary housing facilities will be demolished and which will remain. As a result, 5 two-story temporary office buildings and an estimated 12 to 17 multiunit temporary housing structures will likely remain at the completion of the current project. While State has not made a final determination on which temporary facilities will be demolished or repurposed for other functions (such as for use by support service contractors), the temporary facilities that remain will likely account for over a third of the available desks and beds on-compound after current construction is completed in fall 2017.

Temporary office facilities that are to remain can provide space for 875 desks. By comparison, permanent office facilities (existing and newly constructed) in fall 2017 will provide 1,487 desks. That is, temporary offices will continue to provide 37 percent of the 2,362 available desks on-compound in fall 2017. The number of temporary housing facilities that are to remain has not been finalized. The number of beds likely to remain within the temporary housing facilities will range from approximately 472 (if 12 housing facilities remain) to 640 (if 17 housing facilities remain). Given this range, and the 819 permanent beds to be provided within permanent apartment facilities (existing and newly constructed) upon construction completion, temporary housing will continue to provide between 37 and 44 percent of the available beds on-compound. State officials report that some of the existing temporary offices may be converted to temporary housing space so that State can rehabilitate and upgrade existing staff apartment buildings in the future. Table 3 summarizes the numbers of desks and beds located in temporary and permanent facilities as of February 2015 and those likely to remain upon completion of the current construction project, currently estimated for fall 2017.

State planning documents, as well as post and OBO officials, identify a continued need for some of the temporary facilities following completion of the permanent facilities in 2017. At that time, all temporary facilities on-compound will be nearly 5 years old or more, and a smaller subset on the west compound will be more than 10 years old.
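The desk and bed shares cited above follow directly from the counts in this report. The short calculation below is an illustrative sketch using only those rounded counts; it reproduces the 37 percent desk figure and the 37 to 44 percent bed range.

```python
# Shares of on-compound desks and beds provided by temporary facilities
# after construction completes in fall 2017 (counts from this report).
temp_desks, perm_desks = 875, 1_487
desk_share = temp_desks / (temp_desks + perm_desks)   # 875 / 2,362

perm_beds = 819
for temp_beds in (472, 640):                          # if 12 or 17 housing units remain
    bed_share = temp_beds / (temp_beds + perm_beds)
    print(f"{temp_beds} temporary beds -> {bed_share:.0%} of all beds")  # 37% to 44%

print(f"Temporary desks -> {desk_share:.0%} of all desks")               # about 37%
```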
State officials indicated that some of these remaining temporary facilities may be used by State contractors that will provide support services following the U.S. military's drawdown. Some facilities could also be used to relocate some of the Kabul Embassy Security Guard Force functions onto the compound. Further, State plans to make at least $124 million in further investments in some of the east compound temporary facilities that are to remain. Some of those additional investments would correct what State reports as deficiencies in the temporary facilities and provide upgrades to electrical, sewer, and water systems.

State has recognized the need for an established level of security protection for temporary facilities. When State requested funding to construct apartment building 1 in its fiscal year 2008 Supplemental Appropriations Justification, it reported to Congress that while some employees enjoyed the benefit of 146 permanent, hardened apartments, all other employees lived in temporary housing facilities. The 2008 justification also communicated State's concerns about threats posed to temporary facilities from potential incoming weapons fire—amid increasing attacks around Kabul by the Taliban and al-Qaeda—and indicated that overhead protection was required to protect staff in the existing temporary facilities on-compound, such as the USAID temporary offices. State reiterated its concerns about the security of Kabul temporary facilities and threats posed to those facilities in its fiscal year 2009 Congressional Budget Justification when it requested additional funding for the current project. Also, State security standards in 2009 indicated that housing constructed as an integral part of or adjoining the chancery (i.e., office building) should be constructed to meet chancery physical security standards. Examples of such physical security measures include features such as forced entry protection and ballistic resistance.

However, according to DS officials, State does not have a set of minimum security standards specifically for the temporary facilities it constructs. State has physical security standards governing construction of offices and housing that State seeks to meet regardless of whether a facility is permanent or temporary. For practical purposes, DS officials stated that State's physical security standards governing new construction—regardless of whether a facility is permanent or temporary—are standards that only permanent construction can meet. Because temporary facilities—unlike newly constructed permanent facilities—cannot be constructed to meet all of State's security standards, State has the discretion to grant exceptions from those standards. To the extent that security criteria cannot be met, mitigating solutions (i.e., alternative security measures) must be developed in writing and approved by DS in advance of constructing new facilities. Temporary facilities cannot meet State security standards because the construction materials and methods used in building a temporary facility are different from those used in building a permanent facility. For example, wood or metal wall systems may be used—rather than concrete—in constructing temporary facility structures.

In the absence of minimum security standards (or guidance) to guide planning for temporary facility construction, State inconsistently applied alternative security measures, resulting in insufficient and different levels of security between temporary offices and housing.
When awarding the 2009 contract, State did not specify that overhead protection was required for either the temporary housing or the temporary offices, even though State had previously expressed to Congress concerns about the threat posed by incoming weapons fire in its fiscal year 2008 justification. The only security protection measure specified in the 2009 contract for the temporary housing was shatter-resistant window film. By comparison, State specified that temporary offices were to receive forced entry and ballistic protection. DS officials we spoke with indicated that staff living on-compound should receive the same level of protection in their housing as in their offices.

OBO and DS did not finalize the security measures for the temporary facilities before State's award of the 2009 contract, contributing to cost increases and schedule extensions. In December 2009—3 months after award of the September 2009 construction contract—the two bureaus were still seeking to reach agreement on the security measures for temporary facilities. At that time, in a memorandum to OBO, DS stated that the physical security requirements for the new temporary facilities should comply with State's physical security standards to as great an extent as feasible and that the temporary facilities should be designed and constructed to provide forced entry and ballistic protection as required for any other new construction.

After awarding the 2009 contract, State had to modify contract requirements to address the insufficient and different security requirements for the temporary housing and offices, which added cost and extended the project schedule. State likely paid more than it would have had the security requirements been included in the original contract requirements, in part because this work, unlike the original contract, was not subject to competition, which can drive down price. State officials agreed that modifying the contract to address additional security requirements led to increased cost, and they stated that conducting such work after the fact was difficult in the limited space on an active compound.

State modified the 2009 contract in December 2009 to provide some overhead protection for all temporary offices and housing. Those changes contributed, in part, to the increased costs and extended schedule of the 2009 contract. In 2013, State further modified the 2010 contract, at a cost of $8.2 million, to develop a design to provide additional security protective measures for the temporary housing that had been constructed as part of the 2009 contract. DS also has installed some concrete sidewall barriers to increase the physical security protection of the temporary housing and to make it more consistent with the protection afforded the temporary offices.

Several DS and OBO officials reported that State needs documented minimum security standards for temporary facilities in a conflict environment, and some of those officials identified "expeditionary" standards used by DOD as an example of such standards. OBO officials also commented that State only began undertaking the building of temporary expeditionary structures on a large scale in 2010. One OBO facility engineer indicated State should study its experience managing construction in conflict environments and apply lessons learned based on experience in locations such as Afghanistan and Iraq.
One OBO security engineer indicated that State would have been better able to address the temporary facility security needs in Kabul if it had had clearer standards (or guidance) for construction of such facilities. In addition, some DS management officials and project staff indicated that while State needs minimum standards to guide the construction of temporary facilities, State would still need to tailor physical security measures—such as increasing security wall heights or installing guard towers and bunkers—to specific site threats and as new threats evolve. Some DS officials we spoke with indicated that State could examine DOD's building design criteria for temporary facilities and standardized designs for such facilities—in addition to examining DOD's minimum security standards—as a possible model for improving delivery of such facilities. These officials also noted that there have been both security and design challenges in constructing temporary facilities in Kabul—as well as elsewhere—and that the opportunities to learn from those challenges, and the need for any changes to standards or new guidance, could be examined by State's security standards committee.

State has taken some actions that may help avoid some of the problems it encountered in constructing temporary facilities on-compound. In 2011, State awarded task order contracts to multiple firms to design and provide State with temporary, modular, containerized housing and office units (though not hardened) when tasked. This may help reduce the time it takes—from a contracting perspective—to procure such temporary facilities in the future. In 2012, State also worked with the U.S. Army to develop a conceptual, standardized design for a Hardened Alternative Trailer System (hardened trailers) that a DS official stated provides an improved level of physical security protection, although not the level required for a conflict location such as Kabul, where rockets and mortars pose threats. According to DS officials, hardened trailers could be required as part of State's containerized housing and office unit task orders. One State official identified a commercial off-the-shelf, modular protective trailer that State could consider using. One OBO official indicated this off-the-shelf solution is used by at least one U.S. ally in Afghanistan and, according to product information provided by this official, it can provide protection against rockets and mortars.

State officials stated that the challenges in constructing temporary facilities in environments such as Kabul have led DS and OBO to work together to explore physical security "solutions"—such as overhead protective cover and sidewall systems—that could provide more consistency in future temporary construction. They further said that these solutions would allow other security measures—such as increasing the heights of perimeter walls or providing bunkers—to be tailored to site needs and threats. According to these officials, the temporary housing constructed in 2013 on the 6.17 site reflects some improvements State has made in constructing temporary facilities in conflict environments and could inform the development of minimum standards, guidance, or procedures for planning and constructing temporary facilities in the future.

State officials indicate that additional capital construction investments are needed to address interim and future facility needs of the U.S. embassy in Kabul, both on- and off-compound.
State stakeholders in Washington and at the post are working to identify, prioritize, and address the post's facility needs through various coordination meetings and working groups. However, this effort lacks a strategic facilities planning approach, as recommended by industry standards, and without such a plan, some projects have likely been addressed inefficiently. Additionally, while OBO formally assigns responsibility for post-specific strategic facilities planning, OBO lacks a policy governing implementation of such planning. Without a strategic facilities plan for the embassy—supported by a policy to guide its development, content, and approval—future progress in meeting the embassy's facility needs will likely continue to be difficult in a location that is already challenging.

State has made or plans to make approximately $2.17 billion in infrastructure investments in Kabul. Since the embassy reopened in 2002, the dynamic and unpredictable operating environment of Afghanistan has produced changing facility needs that have continually outpaced existing capabilities at the post. This has been due to various factors such as policy and program changes, staffing fluctuations, and changes in the security environment. During this time, the post has used a variety of off-compound facilities to meet some needs that could not be met on-compound. Key facilities include Camps Alvarado, Eggers, Seitz, and Sullivan, which, as of March 2015, represent a total State investment of almost $731.4 million. In addition, State plans to use $394.9 million from its Embassy Security, Construction, and Maintenance account for additional construction to address other unmet post facility needs in fiscal year 2015, the majority of which would be used to fund facility upgrades at Camp Alvarado. State is also seeking at least $124 million in fiscal year 2016 for further facility investments, such as upgrading the remaining temporary housing, and is planning for further potential investments in 2017, such as constructing the parking facilities that State had to remove from the current construction project.

The post's current facility needs stem primarily from changing circumstances inherent to the dynamic operating environment in Afghanistan. For example, when the Afghan Ministry of Public Health site became unavailable for construction in spring 2011, OBO was forced to remove the parking garage, motor pool office, vehicle maintenance facility, and fuel point from the current project. Although the post has a temporary vehicle maintenance facility and fuel point on-compound, it is located where apartment buildings 2 and 3 will be built and must be demolished. State has explored interim solutions to provide a temporary vehicle maintenance facility at several off-compound sites, but a permanent location for the vehicle maintenance facility and other needed motor pool facilities has yet to be identified.

Changes in the security environment in Kabul have also affected post needs. For example, changing security threats, including attacks against the compound in September 2011, led DS to request several compound security upgrades that, as of March 2015, were still being finalized. In addition, security concerns were a primary factor in DS and the post's acquisition of the Camp Seitz and Camp Eggers properties, as this would allow the relocation of both the Kabul Embassy Guard Force and the Protective Security Detail (movement protection) guard forces to sites closer to the embassy. The withdrawal of the U.S.
military from Afghanistan has also produced new needs for the post, as certain support services formerly provided by DOD are eliminated. For example, this has driven recent post requests for a medical trauma facility and helicopter landing zone, as well as past and future planned upgrades at Camp Alvarado, the post's air transport hub. In addition, as of March 2015, State continued to develop its Afghanistan Life Support Services (ALiSS) contract, with which it intends to replace support services such as food, water, fuel, medical, fire protection, and miscellaneous support services previously provided by DOD. This transition will also require further utility and infrastructure upgrades on-compound. According to State officials, this transition also presents a housing challenge on- and off-compound, depending upon the size of the DOD Office of Security Cooperation to be housed on-compound, as well as the potential ALiSS contractor footprint in Kabul. This challenge will be exacerbated when some of the temporary housing on the east compound is demolished to make way for apartment buildings 2 and 3. State facilities and management officials at the post noted that the future needs of the embassy will likely exceed the available space on-compound and will require prioritization of needs as well as high-level policy and management decisions on staffing presence.

State stakeholders in Washington and at the post are working to identify, prioritize, and address these facility needs through various coordination meetings and working groups. For example, according to State officials, representatives from the post, DS, OBO, the Office of the Under Secretary for Management, SCA, SRAP, the Bureau of Budget and Planning, the Office of Medical Services, and the Office of the Legal Adviser meet weekly via video teleconference to discuss the status of all ongoing construction projects in Kabul. There are various working groups for specific issues, such as the medical working group, which meets monthly. According to State officials, DS and OBO have begun a regular meeting on DS-specific projects in Kabul. There are two weekly management calls with the post to review progress and a biweekly meeting with DOD to discuss the future DOD Security Cooperation Office on-compound. Construction issues are also discussed at a weekly executive steering group meeting.

State does not have a strategic facilities plan for Kabul that documents current and future embassy needs, comprehensively outlines existing facilities, analyzes gaps, provides projected costs, and documents decisions made. The lack of such a plan has inhibited coordination and undermined the continuity necessary to address emergent needs at the Kabul embassy. International Facility Management Association (IFMA), GAO, and OMB guidance recommend that an organization view all real property asset investments as a single portfolio with strategic linkages when determining the right mix of projects to undertake. IFMA describes a strategic facility plan as a 2- to 5-year facilities plan encompassing an entire portfolio of owned and/or leased properties that sets strategic facility goals based on the organization's strategic objectives. It contains a needs statement (i.e., mission need), an analysis of all real property assets and their condition (owned and leased), an analysis of gaps between needs and current asset capabilities, recommendations for new spaces or buildings, and facility cost projections.
IFMA also indicates the plan should document findings, including expected timelines for implementation, while allowing flexibility for updates as appropriate. Similarly, GAO and OMB capital planning guidance emphasize the importance of identifying current capabilities of real property assets, determining gaps between current assets and needed capabilities, deciding how best to meet the gap by identifying and evaluating alternative approaches, documenting decisions, and making updates as needed. State officials responsible for embassy management, facilities, security, and construction all cited the lack of an overarching plan as an obstacle to coordination intended to address emergent post needs. According to State officials in Kabul and Washington, coordination to address the Kabul embassy's future needs is particularly difficult due to the large number of stakeholders in Kabul and in Washington. Additionally, the constant personnel turnover caused by the 1-year tours served by most management, facilities, and security staff in Kabul results in a lack of continuity in decision making. As far back as January 2006, the State Office of Inspector General also identified "the near total lack of institutional memory" stemming from the lack of staff continuity and a "never-ending" learning curve as the most serious impediment to good executive direction at the U.S. embassy in Kabul. State officials in Kabul noted the growing number and frequency of coordination meetings and teleconferences intended to address the embassy's future facility needs. However, they also reported that communication at such meetings can be difficult as parties seek to reconcile planning differences on proposed projects. Without a comprehensive plan that provides a strategic framework to document mission needs, catalog existing facilities, analyze gaps, provide projected costs, and document recommendations, the competing proposals of the post's many stakeholders are difficult to manage, prioritize, and reconcile. As a result, State officials in Kabul said that these meetings suffer from the absence of a common vision and a lack of decision making. Consequently, State has been challenged to efficiently address changing embassy needs in several instances on- and off-compound. For example: Interference with on-compound construction—OBO officials in Kabul expressed frustration that proposals for new projects would often conflict with plans agreed to by previous post management staff. For example, during our fieldwork, post management proposed to locate a helicopter landing zone near the embassy warehouse. However, according to OBO officials on-site, they had arranged with the previous management team to reserve that space as a staging area for the contractor to build the warehouse expansion. When asked about this, post management officials stated that they had no continuity document that informed them of this earlier decision. On-compound physical security upgrades—DS first requested changes to the embassy compound's security perimeter in December 2010 and added more requirements in response to attacks against the compound in September 2011. In February 2013, the post urged OBO to provide a project schedule and expedite the upgrades. However, that was not done, and as of March 2015, OBO and DS had not reached agreement on schedules and costs for some security upgrade projects.
Camp Seitz—In 2013, DS and post management decided to relocate the Kabul Embassy Guard Force from Camp Sullivan and the Protective Security Detail (movement protection) Guard forces from another camp to sites closer to the embassy compound due to security concerns. To facilitate this, DS initiated the acquisition of the Camp Seitz site through OBO. However, according to State officials, DS then began construction of temporary housing at Camp Seitz without submitting the design to OBO for review or applying for a building permit. After OBO became aware of the completed construction, it identified fire safety deficiencies that DS had to correct. Camp Sullivan, Camp Eggers, Qasemi Lot Vehicle Maintenance Facility—As part of the security contractor relocation, post management and DS proposed removing several support facilities, including a vehicle maintenance facility, from an ongoing construction project at Camp Sullivan and transferring them to Camp Eggers. Post management and DS officials stated that once the temporary vehicle maintenance facility on-compound is demolished to make way for apartment buildings 2 and 3, it would be better for security and logistics to build the replacement vehicle maintenance facility close to the compound rather than at Camp Sullivan. However, OBO proceeded to build the Sullivan vehicle maintenance facility because negotiations for the 30 leases required at Camp Eggers were not complete, and OBO was concerned that if an alternative vehicle maintenance facility was not in place, construction of apartments 2 and 3 could be delayed and their costs increased. Discussions continued among OBO, DS, and post management, and the proposed vehicle maintenance facility was shifted to Qasemi Lot, a site adjacent to Camp Seitz. OBO decided not to descope the Camp Sullivan vehicle maintenance facility until plans for a replacement facility at Qasemi Lot were approved by OBO and DS had awarded a construction contract with a scheduled completion date prior to the demolition date for the existing vehicle maintenance facility on-compound. As a result, State is funding two new, temporary vehicle maintenance facilities—one at Camp Sullivan (built by OBO) and one at Qasemi Lot (to be built by DS). A strategic facilities plan could have facilitated coordination in the above cases by providing a common vision of embassy needs, comprehensively cataloging existing assets and alternatives considered for meeting those needs, documenting expected timelines and projected costs, and facilitating continuity by documenting decisions made, while allowing for updates. When asked about strategic facilities planning, State officials provided a series of planning coordination tools as alternatives. These included OBO's 2010 site master plan for the embassy compound, a 2014 draft update of that master plan, a 2014 interactive site plan (web-browser based) showing the phased development of the compound, and an Afghanistan project plan used by State's facilities working group for Kabul. Although these tools did perform some coordination functions, they do not substitute for a strategic facilities plan. According to IFMA, a strategic facility plan contains a needs statement, analysis of all real property assets (owned and leased), their existing condition, analysis of gaps between needs and current capabilities, recommendations for new spaces or buildings, and facility cost projections.
OBO's use of the term "master plan" created some false expectations among non-OBO stakeholders in Kabul and Washington. For example, officials from post management and DS believed the 2014 master plan update would comprehensively identify the post's needs and take into account all facilities—to include off-compound projects—when determining capabilities and alternatives for meeting those needs. However, according to IFMA, a master plan in this context is limited to illustrating the physical layout of buildings on only one specific site and may portray aesthetics of buildings and grounds, as well as construction phasing and timing for that site. We found that OBO's 2010 master plan appears to meet certain IFMA criteria for a site master plan, rather than a strategic facility plan for a portfolio of real property assets. For example, it showed how the unclassified office annex would need to be completed before the temporary USAID building could be demolished to allow apartments 2 and 3 to be built. It also showed the construction of parking facilities on the Afghan Ministry of Public Health site, which were removed from the current project in 2011. It did not address the use and future development of State's off-compound properties, or the associated elements of a strategic facilities plan. In January 2014, OBO's Office of Project Development and Coordination (PDC) began work on an update to the 2010 master plan for the embassy compound (i.e., the 2014 Master Plan Update). The scope for this update was limited to developing a physical site plan that could incorporate the elements that OBO had planned to construct on the Afghan Ministry of Public Health site (i.e., the parking facilities) somewhere on the embassy compound. The 2014 Master Plan Update listed known needs of the embassy and broadly suggested some might be incorporated onto the east compound or the 6.17 site. When OBO presented the 2014 Master Plan Update to the post in September 2014, post officials told OBO that the site plan did not address all of the embassy's needs. In addition, they told us that limited space on-compound requires the continued use of off-compound facilities. OBO continues to work with stakeholders in Kabul and in Washington to find ways to incorporate as many post needs on-compound as possible. While the 2014 Master Plan Update may eventually be used to inform a series of new construction projects for the compound, it remains a compound-specific document and does not address how embassy needs will be met at off-compound facilities in the interim. According to State officials, the future use of off-compound facilities is discussed routinely during stakeholder teleconferences and working groups established for Kabul embassy planning. After we inquired about the limited nature of OBO's 2014 Master Plan Update, SCA officials stated that going forward they need a compound "master plan" and a series of "addendums" that outline future plans for off-compound sites and facilities. Additionally, SCA officials in Washington presented an Afghanistan Project Plan to us, which they identified as the primary coordination and continuity document for project discussions involving off-compound facilities at the various Kabul coordination meetings, such as the Afghan Facilities Working Group. Our review of the Afghanistan Project Plan found it to be useful for tracking the status of active construction projects in Kabul and determining next steps at the project level.
However, it did not catalog all existing real property assets, express interim or long-term embassy needs, or make recommendations on fulfilling those needs. Developed by SCA's contractor in October 2014, the Afghanistan Project Plan instead depicts a broad listing of ongoing State construction projects both on- and off-compound. Each project contains sub-tasks with deadlines, progress to completion, and notes on project status. For example, SCA officials noted the lack of progress on the trauma center to be built at Camp Seitz by DS due to physical design challenges. Finally, OBO officials provided us with a 2014 Interactive Site Plan tool (web-browser based) that OBO developed with the intent of providing the post with a continuity tool for the construction planned on-compound. The tool contained numerous interactive three-dimensional diagrams of the embassy compound with background information, construction timelines and phasing, preliminary space usage plans, and site utility information. Although the tool focused solely on the embassy compound, OBO officials stated that it was meant to be easily updated as circumstances demand and could have been expanded to include off-compound properties. According to OBO officials, they provided this tool to post management in February 2014 with the intent that it would be uploaded to the embassy's internal website, where it could be viewed and updated by stakeholders. However, OBO officials with access to the post's internal website reported that the embassy never used the tool, and OBO is not planning to make any further updates to it. When asked about continuity documents, post management officials directed us solely to the 2010 master plan and did not mention this interactive tool. According to State policy, OBO's Office of Master Planning and Evaluations (MPE) is responsible for directing and preparing both master plans and long-range facilities plans for posts abroad, not PDC, which is OBO's project coordination and management office. However, MPE has not been involved in PDC's on-compound master plan update or State's stakeholder meetings on embassy development. OBO's former policy and procedures directive on long-range facility planning (PPD 01, Long Range Facility Planning Program) identified criteria for determining which posts required such plans to guide the development of individual projects. These criteria included such things as significant staffing changes, the need to collocate State and other agencies, political changes (e.g., post openings/closings), security issues, and posts where a significant investment was to be made. However, OBO produced no long-range facilities plans after 2008. In December 2013, OBO rescinded its long-range facilities plans policy and procedures directive based on an explanation that the office responsible for that function no longer existed and that the function had been replaced by master planning. The rescission did not define what master planning entailed within OBO, nor did it explain and justify how master planning could substitute for strategic facilities planning. According to OBO officials, master planning is defined and conducted via stakeholder meetings and generally accepted practices within the organization. However, OBO was unable to provide any current policy governing either post strategic facilities planning or site master planning. A senior OBO official acknowledged that MPE had generally not conducted strategic facilities planning in the past few years.
Without policies that clearly define strategic facilities planning and master planning, as well as outline the content and methods to conduct such planning, it will be difficult for OBO to fulfill these responsibilities. According to OBO policy, a policy and procedures directive may be rescinded when replaced or superseded by a new directive or at the request of the proponent office, and the responsible office must sufficiently explain and justify why the directive is no longer needed. State's experience in managing the Kabul project's cost and schedule reflects the value of State's existing risk management process for construction. As State pursues further construction to address the facility needs of the U.S. embassy in Kabul, it is imperative that it follow its current policy to contain costs and manage risk where possible. Future State construction in Kabul and other high-threat posts will likely entail the continued use of temporary office or housing facilities, especially in conflict areas. However, without clear standards or guidance detailing minimum physical security measures for the temporary facilities it constructs, State is at risk of encountering security design problems, cost increases, and schedule extensions similar to those that have already occurred in Kabul. While State would still require sufficient flexibility to tailor physical security protection measures to the specific and possibly changing threats encountered at different posts, State should consider establishing clear minimum standards or guidance for physical security on temporary facilities, as this could yield more consistent application of security measures at posts and more efficient procurement, and could help contain cost increases and schedule extensions. Furthermore, it is clear that the changing facility needs of the Kabul embassy will require a combination of permanent and temporary construction on- and off-compound. Although State uses various coordination mechanisms to manage this effort, coordination would be further strengthened by the development of a strategic facilities plan that catalogs existing facilities, identifies embassy needs and gaps, and documents decisions made. Such a plan for Kabul would need to be tailored to the specific context of the post and would likely go through repeated updates. However, such a common framework would strengthen existing coordination and facilitate greater continuity of decision making. While past OBO policy recognized the value of such strategic planning, it was rescinded in December 2013. No formal policy on its stated substitute—master planning—was established, even though State continues to assign responsibility for both strategic facilities planning and master planning to OBO. By establishing policies that clearly define strategic facilities planning and master planning, as well as explain the content and methods to conduct such planning, OBO can better ensure the usefulness of any such efforts undertaken in Kabul or in other posts abroad. To maintain State's adherence to construction risk management policy, guide future construction of temporary facilities, strengthen coordination efforts to address facility needs of the U.S. embassy in Kabul, and clarify strategic planning policy, we recommend the Secretary of State take the following four actions: Ensure existing cost containment and risk assessment policies are followed in future Kabul construction projects. Consider establishing minimum security standards or other guidance for the construction of temporary structures, especially those used in conflict environments.
Develop a Kabul strategic facilities plan. Such a plan should comprehensively outline existing facilities, identify embassy needs, establish gaps between facilities and needs, and document decisions on meeting those needs. Establish policy and procedure directives governing the definition, content, and conduct of post-wide strategic facilities planning and master planning. We provided a draft of this report to State for comment. State provided written comments that are reproduced in appendix II. State concurred with our recommendation to ensure existing cost containment and risk assessment policies are followed in future Kabul construction projects, stating that it will better administer cost containment and risk assessment by adhering to relevant OBO policies. State also concurred with our recommendation to develop a Kabul strategic facilities plan. According to State, OBO will continue to work with post and State stakeholders to formalize current and future embassy needs into a plan that outlines existing facilities, identifies embassy needs, establishes gaps between facilities and needs, and documents decisions on meeting those needs. Finally, State concurred with our recommendation to establish policy and procedure directives governing the definition, content, and conduct of post-wide strategic facilities planning and master planning. According to State, OBO is currently developing a policy and procedures directive that will outline the new master planning program and post-wide strategic facilities planning. State partially concurred with our recommendation to consider establishing minimum security standards or other guidance for the construction of temporary structures, especially those used in conflict environments. State does not support separate standards for temporary structures, reiterating that it aims to meet Overseas Security Policy Board security standards in all environments. Where this is not possible, State asserts it works to meet the intent of these standards through alternative security mitigation measures via its “waivers and exceptions” process. However, State does believe that there is value in documenting standard operating procedures and best practices associated with the deployment and protection of temporary structures in high-threat and conflict environments. State noted that while such documentation would not constitute security standards and would not circumvent risk management integral to its waivers and exceptions process, it would provide templates from which to base the design of future projects in exigent environments. Should State produce such documentation, we believe that this could meet the intent of our recommendation. State also provided technical comments, which were incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact either Michael J. Courts at (202) 512-8980 or at [email protected] or David J. Wise at (202) 512-5731 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. We reviewed State Department (State) construction efforts at the U.S. 
embassy in Kabul under the authority of the Comptroller General to conduct evaluations on his own initiative because of broad congressional interest in the oversight and accountability of U.S. funds used in Afghanistan. In this report, we examine (1) the extent to which construction cost and schedule have changed and why, (2) State's use of temporary facilities on-compound, and (3) State's planning for projected embassy facility needs. To conduct this review, we obtained information from agency planning, funding, and reporting documents and interviewed officials from State's Bureau of Overseas Building Operations (OBO); Bureau of Diplomatic Security (DS); Office of Acquisitions Management; Bureau of South and Central Asian Affairs (SCA); Office of the Special Representative for Afghanistan and Pakistan (SRAP); and Office of Management Policy, Rightsizing, and Innovation. Within OBO, we met with officials from Construction Management, Design and Engineering, Master Planning and Evaluations, Project Development and Coordination, Real Property Leasing, Security Management, Strategic Planning, and Financial Management. Within DS, we met with officials from High Threat Programs, Overseas Protective Operations, and Physical Security Programs. In February 2014, we traveled to Kabul, Afghanistan, to observe construction progress and meet with U.S. embassy officials responsible for construction, facilities management, post management, and security. We also met with contractor officials in Kabul and in the United States. In addition, our Kabul Field Office conducted follow-up meetings with officials in Kabul and their successors through December 2014. We incorporated audit work from our February trip and relevant material gathered for our July 2014 report into this audit. In addition, we obtained State funding information on all State construction projects over $1 million in Kabul. We determined that these funding data were sufficiently reliable for the purposes of this report. To examine the extent to which construction cost and schedule have changed and why, we collected and analyzed State and contractor documents and met with relevant officials. We analyzed contract files for the fiscal years 2009 and 2010 Kabul construction projects, including requests for proposals, site surveys, project authorization documents, design drawings, contract modifications, cost estimates, approved schedules, and other contract documentation. We also examined OBO and other State planning and oversight documents, such as space requirements programs, trip reports, rightsizing reviews, site plans, OBO briefings to State management, and progress reports. In addition, we examined Office of Management and Budget and State policy and procedures governing construction planning and implementation, including those pertaining to value engineering (cost containment) and risk assessment. We also met with relevant officials in OBO, DS, and SCA, and in Kabul to discuss the original planning of the 2009 and 2010 contracts, as well as current construction progress. To examine State's use of temporary facilities at the embassy, we inspected the temporary offices and housing currently on-compound and reviewed related State planning, design, construction, and contract documents for the temporary facilities within the 2009 contract. We also reviewed State budget justifications to Congress related to State's use of temporary facilities and security concerns about those facilities.
In addition, we examined State physical security and building standards for State-built facilities, as well as Department of Defense security and building standards for temporary facilities. We also obtained funding information from State on what it has allocated to the construction of temporary facilities in Kabul since 2002. In addition, we interviewed embassy management officials, OBO's on-site project director for construction, and OBO facility managers in Kabul. We also met with OBO, DS, and SCA officials in Washington to discuss State's construction, use, and plans for temporary facilities. To examine State's planning for projected embassy facility needs, we analyzed State coordination and planning documents, as well as funding proposals for new construction in Kabul. In addition, we reviewed State policy regarding master planning and strategic facilities planning. We also consulted best practices for such planning established by the International Facility Management Association (IFMA), as well as GAO and Office of Management and Budget capital planning guidance. To discuss changing post facility needs and the various coordination efforts to address those needs, we met with State officials from OBO, SCA, SRAP, and DS, as well as with post officials responsible for management, facilities, and security in Kabul. We conducted this performance audit from July 2014 to May 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Michael J. Courts, (202) 512-8980 or [email protected]. David J. Wise, (202) 512-5731 or [email protected]. In addition to the contacts named above, Michael Armes (Assistant Director, Physical Infrastructure), Leslie Holen (Assistant Director, International Affairs and Trade), David Hancock, Eugene Beye, John Bauckman, Jacob Beier, Jon Fremont, and Marc Schwartz made key contributions to this report. Technical assistance was provided by Lynn Cothern, Kristine Hassinger, Ernie Jackson, Tina Cheng, and Gwyneth Woolwine.
Since re-opening in 2002, the U.S. embassy in Kabul, Afghanistan, has experienced a dramatic increase in staffing, followed by a gradual drawdown. State has invested or plans to invest a total of $2.17 billion in U.S. facilities to address current and projected space needs. State awarded two contracts in 2009 and 2010 to construct additional on-compound housing and office facilities. State partially terminated one contract for the convenience of the U.S. government and expanded the construction requirements of the second, affecting cost and schedule. State's Bureau of Overseas Building Operations is responsible for the planning, design, and construction of U.S. embassies. This report updates and expands upon GAO's previous work. It examines (1) the extent to which construction cost and schedule have changed and why, (2) State's use of temporary facilities on-compound, and (3) State's planning for projected embassy facility needs. GAO evaluated construction planning and contract documents and interviewed State and contractor officials in Washington, D.C., and Kabul.

Cost and schedule have increased for the Kabul embassy construction project, in part due to incomplete cost and risk assessment. Cost for the 2009 and 2010 contracts has increased by about 27 percent, from $625.4 million to $792.9 million, and is likely to increase further. Projected completion has been delayed over 3 years to fall 2017. The Department of State (State) did not follow its cost containment and risk assessment policies, resulting in lost opportunities to mitigate risks. These risks, such as delays in the sequencing of the two contracts, eventually materialized, increasing cost and extending schedule. Unless State follows its policy, it may be unable to avoid or mitigate risks to cost and schedule on future projects.

Figure: Architect's Rendering of Embassy Compound upon Project Completion

Since 2002, State has built over $100 million in temporary buildings (intended for no more than 5 years' use) to meet space needs on-compound but has no security standards tailored to those facilities. When the project is completed in 2017, all temporary facilities will be 5 to 10 years old, and their continued use is likely. Without security standards or other guidance for temporary facility construction in conflict environments, State inconsistently applied alternative security measures, resulting in insufficient and inconsistent levels of security for temporary offices and housing, as well as increased cost and extended schedules. Without temporary facility security standards or guidance, future construction in conflict environments could encounter similar problems.

State's lack of a strategic facilities plan and policies governing such planning has led to coordination challenges in addressing the embassy's future facility needs. Industry standards cite the value of plans that comprehensively assess existing facilities, identify needs, and document decisions on meeting those needs. In Kabul, however, State constructed a guard facility without proper design review or applying for a building permit, leading to fire safety deficiencies that State corrected at extra cost. Finally, State formally assigns responsibility for strategic facilities planning but lacks policy that governs implementation of such planning. State intends to make additional facility investments to address future facility needs.
Without a strategic facilities plan and policy to guide its development, coordination to address these needs will continue to be difficult. GAO recommends that State (1) adhere to its cost containment and risk assessment policies, (2) consider establishing security standards or guidance for temporary buildings in conflict zones, (3) develop a strategic facilities plan for Kabul, and (4) clarify its strategic facilities and master planning policy. State concurred with the first, third, and fourth recommendations and partially concurred with the second.
Under DOD's supply chain materiel management policy, the secondary item inventory should be sized to minimize DOD's investment while providing sufficient inventory to support both peacetime and war requirements. The Office of the Secretary of Defense and the Navy share responsibility for management and oversight of the secondary item inventory. The Under Secretary of Defense for Acquisition, Technology, and Logistics is responsible for the uniform implementation of inventory management policies throughout the department, while the Secretary of the Navy is responsible for implementing DOD inventory policies and procedures. Navy inventory management functions are primarily the responsibility of the Naval Inventory Control Point, a component of the Navy Supply Systems Command that has offices in Philadelphia and Mechanicsburg, Pennsylvania. Aviation and maritime items are managed in Philadelphia and Mechanicsburg, respectively. The Navy prescribes guidance and procedural instructions for computing requirements for its secondary inventory. Navy managers develop inventory management plans for their assigned items; their duties include developing budgetary requirements for procurement and repair, monitoring and discussing inventory performance with contractors and repair depots, evaluating requests for stocking from individual DOD activities, and processing requisitions for materiel that cannot be satisfied by automated processes. DOD requires each service and the Defense Logistics Agency (DLA) to semiannually prepare inventory stratification reports, which are primarily used to determine procurement and repair budget requirements and potential excess or reutilization stock. Stratification is a process that identifies and prioritizes requirements and allocates inventory to those requirements based on availability. DOD annual stratification reports show that for the 4 years covered in our review, the value of the Navy's secondary inventory decreased both in dollar amounts and as a percentage of DOD's overall secondary inventory (see table 1). While the total reported value of DOD's secondary inventory decreased by almost $2 billion from fiscal year 2004 through fiscal year 2007, the reported value of the Navy's inventory decreased by more than $7 billion. According to Navy inventory managers, this decrease was attributable to the following factors: (1) a greatly accelerated disposal rate for items in the F-14 program, (2) an accounting cleanup of records on unserviceable parts in transit, (3) sales of inventory that had accrued in support of major war operations in 2002 and 2003, (4) an increase in aviation assets that could not be repaired and therefore were disposed of, and (5) the transfer of inventory control for consumable aviation items from the Navy to DLA. The Navy uses a process called requirements determination to calculate the respective amounts of inventory it either needs to have available in storage (on hand) or needs to purchase (on order). A central database called the Master Item File provides data for the requirements determination process. The Navy also uses the Master Item File to develop a stratification report showing the amount of inventory allocated to meet specific requirements, including operating and acquisition lead time requirements.
Operating requirements include the war reserves authorized for purchase; customer-requisitioned materiel that has not yet been shipped (also known as due-outs); a safety level of reserve to be kept on hand in case of minor interruptions in the resupply process or unpredictable fluctuations in demand; minimum quantities for essential items for which demand cannot normally be predicted (also referred to as numeric stockage objective or insurance items); and inventory reserve sufficient to satisfy demand while broken items are being repaired (also referred to as repair cycle stock). Acquisition lead time requirements include administrative lead time requirements, which refer to inventory reserves sufficient to satisfy demand from the time that the need for replenishment of an item is identified to the time when a contract is awarded for its purchase or an order is placed; and production lead time requirements, which refer to inventory reserves sufficient to satisfy demand from the time when a contract is let or an order is placed for inventory to the time when the item is received. When the combined total of on-hand and on-order inventory for an item drops to a threshold level—called the reorder point—the item manager may place an order for additional inventory of that item, to avoid the risk of the item's going out of stock in the Navy's inventory. The reorder point includes both operating requirements and acquisition lead time requirements. An economic order quantity—the amount of inventory that will result in the lowest total costs for ordering and holding inventory—is automatically calculated by a computer program and is added to the order. The reorder point factors in demand for inventory items during the reordering period so that Navy managers can replace items before they go out of stock, and a safety level to ensure a supply of stock during interruptions in production or repair. A purchase request can be terminated or modified if requirements change. These requirements collectively constitute the requirements objective, which we refer to as the Navy's current requirements in this report. An assessment of the Navy's requirements or requirements determination process was outside the scope of our review. In accounting for its inventory, the Navy uses the stratification process to allocate, or apply, inventory to each requirement category. On-hand inventory in serviceable condition is applied first, followed by on-hand inventory in unserviceable condition. On-order inventory is applied when on-hand inventory is unavailable to be applied to requirements. We refer to situations when on-hand inventory is insufficient to satisfy reorder point requirements as inventory deficits. Inventory that exceeds current requirements may include inventory that satisfies 2 years of projected future demand, which together with current requirements is known as the approved acquisition objective; economic retention inventory, which exceeds the approved acquisition objective but has been deemed more economical to keep than to discard because it will likely be needed in the future; contingency retention inventory, which exceeds the economic retention inventory but is retained for specific contingencies; and potential excess materiel, which exceeds contingency retention inventory and has been identified for possible disposal but has potential for reutilization.
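To make these mechanics concrete, the sketch below works through the reorder-point check, the economic order quantity, and the two deficit measures used later in this report (on-hand deficits and deficits against the total requirements objective). It is a minimal illustration only: the report does not specify the Navy's actual computations, so the sketch assumes the classic textbook EOQ formula, and all item quantities and costs are hypothetical.

```python
import math

def economic_order_quantity(annual_demand, ordering_cost, unit_holding_cost):
    # Classic textbook EOQ: the order size minimizing ordering plus holding costs.
    # A stand-in; the Navy's automated program may compute this differently.
    return math.sqrt(2 * annual_demand * ordering_cost / unit_holding_cost)

def reorder_decision(on_hand, on_order, operating_reqs, lead_time_reqs,
                     annual_demand, ordering_cost, unit_holding_cost):
    # Reorder point = operating requirements + acquisition lead time requirements.
    # An order for the EOQ may be placed when on-hand plus on-order stock
    # drops to the reorder point.
    reorder_point = operating_reqs + lead_time_reqs
    if on_hand + on_order <= reorder_point:
        return math.ceil(economic_order_quantity(annual_demand, ordering_cost,
                                                 unit_holding_cost))
    return 0  # stock position is above the reorder point; no action

def on_hand_deficit(on_hand, reorder_point):
    # Deficit as seen in the stratification snapshot: on-hand stock short of
    # the reorder point.
    return max(0, reorder_point - on_hand)

def total_deficit(on_hand, on_order, requirements_objective):
    # Deficit against the full requirements objective, counting on-order stock.
    return max(0, requirements_objective - (on_hand + on_order))

# Hypothetical item: stock position 60 is below the reorder point of 75,
# so an EOQ of 100 parts is ordered.
print(reorder_decision(on_hand=40, on_order=20, operating_reqs=50, lead_time_reqs=25,
                       annual_demand=120, ordering_cost=500.0, unit_holding_cost=12.0))
print(on_hand_deficit(on_hand=40, reorder_point=75))                        # -> 35
print(total_deficit(on_hand=40, on_order=20, requirements_objective=100))  # -> 40
```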
Our analysis of Navy secondary inventory data for the 4-year period we examined showed that, on average, about $11.3 billion (60 percent) of the average annual total inventory value of $18.7 billion was needed to meet current requirements and $7.5 billion (40 percent) exceeded current requirements. About half of the inventory that exceeded current requirements was being retained for demands anticipated within 2 years, and the remainder was held as economic retention inventory, contingency retention inventory, or marked as potential excess. According to the Navy’s demand forecasts for items exceeding current requirements in fiscal years 2004 and 2007, inventory levels of some items were sufficient to meet many years and sometimes decades of demand. A large proportion of items that exceeded current requirements had no projected demand. Reparable inventory that exceeded current requirements included both serviceable and unserviceable parts, and the proportion of items associated with steady programs—that is, programs that were not significantly growing or declining—was similar for inventory meeting and exceeding current requirements. Relatively few inventory deficits were identified, but these persisted for some items during the 4 years we reviewed. Our analysis of Navy secondary inventory data showed that, on average, about $11.3 billion (60 percent) of the total annual inventory value was needed to meet current requirements, whereas $7.5 billion (40 percent) exceeded current requirements. Measured by number of parts, these percentages were reversed: 40 percent of the parts applied to current requirements on average each year, and the remaining 60 percent exceeded current requirements. Our data for the 4-year period revealed that 121,380 (65 percent) of the Navy’s 186,465 unique items with reported inventory had parts in excess of current requirements. Table 2 shows the stratification of Navy secondary inventory for the 4-year period, including inventory meeting requirements and inventory exceeding requirements. The data in table 2 show that the Navy has applied a significant amount of inventory to future demand as well as to current requirements. On average, about 1.1 million parts comprising 6 percent of total parts and 20 percent of total inventory value were designated for future demand. Furthermore, the average value of these parts ($3.7 billion) was nearly half the average value of the parts needed to meet annual operating requirements ($7.6 billion). The balance between inventory meeting current requirements and inventory exceeding current requirements stayed relatively constant from year to year (see fig. 1). The secondary inventory data further showed that while the aviation community had fewer spare parts than the maritime community, these parts constituted a higher average value; conversely, the maritime community had more parts but at lower average value. Table 3 shows the average number and value of parts exceeding current requirements for each of these communities at the end of each fiscal year. Of the nearly $7.5 billion in Navy secondary inventory that exceeded current requirements in the time frame we examined, about half was being retained for demands anticipated within 2 years, while the remainder was being retained either as economic retention inventory, contingency retention inventory, or potential excess (see fig. 2). 
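The retention categories described above behave like a priority waterfall: inventory above current requirements is applied first to 2 years of projected future demand (completing the approved acquisition objective), then to economic retention, then to contingency retention, with any remainder marked as potential excess. The sketch below illustrates that ordering under the simplifying assumption that each category has a fixed ceiling; the Navy's actual stratification rules are more detailed, and all quantities are hypothetical.

```python
def categorize_excess(total_inventory, current_reqs, future_demand_2yr,
                      economic_retention_cap, contingency_retention_cap):
    """Allocate inventory above current requirements to retention layers in
    priority order. Caps are per-layer ceilings; all values are hypothetical units."""
    remaining = max(0, total_inventory - current_reqs)
    layers = {}
    for name, cap in [("future demand (within approved acquisition objective)", future_demand_2yr),
                      ("economic retention", economic_retention_cap),
                      ("contingency retention", contingency_retention_cap)]:
        layers[name] = min(remaining, cap)
        remaining -= layers[name]
    layers["potential excess"] = remaining  # flagged for possible disposal or reutilization
    return layers

# Hypothetical item: 400 units above current requirements are spread across the layers.
print(categorize_excess(total_inventory=1000, current_reqs=600, future_demand_2yr=200,
                        economic_retention_cap=100, contingency_retention_cap=50))
```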
With regard to on-order inventory, the Navy marked approximately $10 million (1 percent) of this inventory each year as potential excess to be reviewed for possible disposal. This means that demands had decreased significantly since the time the order was placed, yet the Navy had not terminated the order. Navy managers told us that on-order inventory marked as potential excess is routinely canceled to prevent the immediate disposal of new inventory. We did not independently verify whether this practice was consistently followed. Table 4 shows the amount of potential excess inventory the Navy had on order at the end of fiscal years 2004 to 2007. The Navy's forecasts for items with a recurring demand in fiscal years 2004 and 2007 showed that inventory for some items exceeded the current requirements necessary to meet many years and sometimes decades of demand. In addition, a substantial amount of this inventory showed no projected demand. The results of this analysis are shown in figure 3. As shown in figure 3, about $1.9 billion (27 percent) of the inventory exceeding current requirements in fiscal year 2007 was sufficient to satisfy up to 2 years of demand, $2.5 billion (36 percent) was sufficient to meet between 2 and 10 years of demand, and $0.5 billion (8 percent) was sufficient to meet demand for 10 years or more. In addition, the Navy in fiscal year 2007 had $1.9 billion (28 percent) of inventory exceeding current requirements for which there was no forecasted demand. About $1.1 billion (60 percent) of these items were being retained because of economic or contingency retention requirements, and the remaining $0.8 billion (40 percent) were considered for disposal or reutilization. In commenting on a draft of this report, the Navy stated that a majority of these items are in low demand, are used on older weapon systems, and can no longer be procured, so the Navy will retain inventory as requirements trend down. We could not independently verify the Navy's statement using the stratification data, and the Navy did not provide supporting data. Reparable inventory that exceeded current requirements included both serviceable and unserviceable parts. The Navy pays storage costs for all items regardless of condition. Based on DLA data, we estimate that the Navy incurred at least $18 million in storage costs for its wholesale secondary inventory that exceeded current requirements in fiscal year 2007. In fiscal year 2007, serviceable parts constituted about 45 percent of the total reparable parts exceeding current requirements and about 39 percent of the total value (see fig. 4). The proportion of Navy secondary inventory associated with steady programs was similar for inventory meeting and exceeding current requirements. Each Navy inventory item is assigned a program status that indicates whether the item or the item's higher assembly is part of a weapon system program that is growing, staying steady, declining, or obsolete. In fiscal year 2007, 81 percent of the value of aviation parts and 79 percent of the value of maritime parts that met current requirements were associated with steady programs. For items exceeding current requirements, these proportions were similar—79 and 73 percent for aviation and maritime items, respectively. Table 5 shows the percentage of items in each category by program status.
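The years-of-supply banding shown in figure 3 reduces to simple arithmetic: an item's inventory exceeding current requirements divided by its forecast annual demand. A minimal sketch, using hypothetical items:

```python
def years_of_supply_band(excess_units, forecast_annual_demand):
    """Band an item's excess inventory by the years of forecast demand it covers."""
    if forecast_annual_demand <= 0:
        return "no projected demand"  # retained, or considered for disposal/reutilization
    years = excess_units / forecast_annual_demand
    if years <= 2:
        return "up to 2 years of demand"
    if years < 10:
        return "between 2 and 10 years of demand"
    return "10 or more years of demand"

# Hypothetical items: (units exceeding current requirements, forecast annual demand)
for item in [(150, 100), (900, 150), (500, 20), (75, 0)]:
    print(item, "->", years_of_supply_band(*item))
```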
The Navy had inventory deficits for some items—that is, an insufficient level of inventory on hand to meet the reorder levels identified in its current requirements. As of the September 30 stratification report date for fiscal years 2004 through 2007, the Navy had insufficient on-hand inventory to meet reorder-level requirements for an average of about 15,000 items annually, totaling about $570 million in inventory deficits each year. Normally, inventory managers will place an order for new parts when an item's inventory falls to the reorder level, but in fiscal year 2007 there were a total of 13,775 items with an inventory deficit, of which 6,315 (46 percent) had no inventory on order. In commenting on our report, the Navy said some of these deficit items will not be procured because they are obsolete or have been replaced by other items. However, of the 6,315 items with no inventory on order, only 840 were in declining programs where items would not be procured. Further, 21 percent of items with deficits had unfilled requisitions from previous time periods, indicating that some items had persistent deficits over time. Navy inventory managers said that deficits occur and can persist for various reasons, including cases in which a supplier is no longer in business or producing the part needed, and a new, qualified supplier must be identified to produce the item. Our random sample of items with inventory deficits in fiscal year 2007 showed that 35 percent of these items had an inventory deficit in each of the 4 years we reviewed. We could not determine the criticality of these deficits because this information is not available in stratification reporting. In terms of number of parts, the Navy had fewer inventory deficits for aviation items than for maritime items, but the aviation items constituted a higher average value. Figure 5 shows the value of the Navy's inventory deficits for each of the fiscal years included in our review. However, the Navy would need considerably more inventory to meet its total requirements objective for these items. For example, when both on-hand and on-order inventory are included, in fiscal year 2007 the Navy had a total deficit against the total requirements objective of about 880,000 parts valued at about $1.5 billion. This amount is about three times the level of its on-hand deficits alone. Our review identified several factors that contributed to the Navy's having secondary inventory that did not align with current requirements, including significant levels of inventory that were in excess of these requirements over the 4-year period. While the Navy strives to provide effective supply support in meeting warfighter needs and reports meeting or almost meeting many of its own supply availability targets, it has placed less emphasis on doing so at least cost. The Navy has not established metrics and goals for tracking and assessing the cost efficiency of its inventory management. In addition, although changes in demand account for much of the inventory in excess of current requirements, the Navy has not systematically evaluated why demand forecasting is unpredictable and how to better manage it. Further, the Navy has not adjusted certain inventory management practices to allow for flexibility in responding to unpredictable demand. In addition, our review noted that although the Navy's newly established chief management officer and deputy chief management officer will oversee business transformation, the Navy has not yet defined their respective roles in overseeing inventory management improvement efforts. These new designations provide an opportunity to enhance oversight of such efforts.
Although the Navy has emphasized meeting warfighter needs as measured by supply support performance metrics and goals, it has not established metrics and goals to track and assess the cost efficiency of its inventory management practices. As a result, the Navy does not know whether it is meeting inventory requirements at least cost as required by DOD's supply chain management regulation. DOD's supply chain management regulation requires the military services to take several steps to provide for effective and efficient end-to-end materiel support. The regulation also sets out a number of management goals and directs the components to take steps that include sizing secondary item inventories to minimize the DOD investment while providing the inventory needed; considering all costs associated with materiel management in making best-value logistics decisions; balancing the use of all available logistics resources to accomplish timely and quality delivery at the lowest cost; and measuring total supply chain performance based on timely and cost-effective delivery. To ensure efficient and effective supply chain management, the regulation also calls for the use of metrics to evaluate the performance and cost of supply chain operations. These metrics should, among other things, monitor the efficient use of DOD resources and provide a means to assess costs versus benefits of supply chain operations. However, the regulation does not prescribe specific cost metrics and goals for the services to use to track and assess the efficiency of their inventory management practices. According to Navy officials, they have processes and controls for efficiently managing secondary inventory. For example, they use a requirements-setting process for determining secondary items necessary to meet performance goals, while evaluating the trade-offs between the requirements and acceptable risk of being out of stock. They also compare requirements to available assets and identify funding needed during the next 2-year budget period. After budget approval, they use a supply demand review process and repair workload forecasting to initiate procurements and plan repairs throughout the year. The supply demand reviews enable them to identify significant requirement changes and recommend additional procurement or termination of existing procurements. They also stated that the semiannual stratification review acts as a check and balance. They noted that Navy item managers are required to meet goals that ensure that the Navy does not unnecessarily build inventories but rather balances the costs of terminating a contract against those of initiating a new contract in the near future. They said they are confident that these processes and controls work because the Navy is able to meet required performance goals at budgeted costs. Moreover, the Navy uses metrics to track and assess performance toward meeting inventory support goals. These include metrics showing supply material availability and customer wait time. For example, the Navy tracks the extent to which it is meeting supply material availability goals—which are set at 85 percent (except for nuclear propulsion-related material, which has a goal of 95 percent)—as well as average customer wait time. Recent data show that the Navy generally meets or almost meets these goals, although we did not independently verify these performance data during our review.
The Navy also measures financial performance by the extent to which budgeted amounts are obligated and net sales plans are achieved. In this way inventory managers may be accountable for goals related to supply material availability and customer wait time, as well as budget performance. The Navy, however, has not established metrics and goals for determining whether it is meeting its performance goals at least cost. For example, it has not established a metric related to its cost efficiency in meeting the supply material availability goal. The overall secondary inventory data we analyzed show that the Navy carried about $1.66 in inventory for every $1 in requirements to meet its supply material availability goal during the 4-year period of fiscal years 2004 through 2007. Such a metric, in combination with other cost metrics and established goals, could give the Navy the capability to track trends and assess progress toward achieving greater cost efficiency. Because cost metrics and goals have not been established, Navy managers are not held accountable and lack incentives for achieving efficient supply support. Measuring performance goals such as supply material availability and average customer wait time without also tracking cost metrics encourages higher levels of inventory. As a result, the Navy carries billions of dollars in excess inventory against current requirements each year without having to demonstrate that these inventory levels are cost effective. Our review showed that unpredictability in forecasting demand for spare parts was a primary cause of the Navy's inability to align inventory levels with current requirements. DOD's supply chain regulation states that customer demand shall be part of all DOD components' inventory management decisions, components shall not stock an item that does not have any possibility of future demand, and variance in demand forecasts outside established parameters should be flagged for management analysis and action. According to Navy managers, demand is the single most significant data element for forecasting requirements and a driving factor in identifying the reorder point. While Navy managers agreed that accurately forecasting demand is a long-standing difficulty, they said that they forecast demand as best they can and could not readily identify ways to significantly improve on their current procedures. However, they could not show where the Navy has systematically evaluated its demand forecasting procedures to identify areas where forecasts have been consistently inaccurate in order to correct any systemic weaknesses. Another related difficulty, according to Navy managers we interviewed, is a lack of timely communications among stakeholders, including promptly relaying changes in programs and other decisions that affect purchases of spare parts. More prompt communication of demand updates could help to mitigate the effects of demand fluctuations, they said. Navy item managers who responded to our survey most frequently cited changes in demand as the reason inventory did not align with current requirements. Changes may include demand decreasing, fluctuating, or not materializing at all, resulting in inventory exceeding current requirements; or demand increasing, resulting in inventory deficits. Table 6 shows the results of our representative survey of items with inventory excesses (384 items), and table 7 shows the results of our survey for items with inventory deficits (40 items).
Responses in the "other" category varied but included issues related to procuring and retaining minimum quantities of parts, obsolescence, or other explanations of demand changes. Regarding parts excess to current requirements, for example, one respondent said the Navy has upgraded to a new module, but support is still required to meet Air Force requirements. Regarding a deficit, for example, one respondent said they were working with a sole-source vendor and the estimated shipping date had slipped. In follow-up discussions, Navy managers confirmed that changes in demand were the main cause of inventory exceeding current requirements or inventory deficits. In some cases, they attributed these changes to incomplete or inaccurate demand data, owing to a lack of communication among the various key participants in the demand-forecasting process. In several cases, they cited poor communications with other service components that were generating the demand. The following cases illustrate challenges Navy managers face in predicting demands for items: An example of an item in excess due to demand changes was the blades used in the F404 engine, which goes into the Navy's F-18 model A/D aircraft. The Navy had 13,852 of these parts valued at $3.6 million as excess to current requirements. The next higher assembly is now on a contract under which the contractor supplies the item, so the demand for the blades disappeared. Thus, the Navy's anticipated demand for these parts never materialized. In commenting on our draft report, the Navy stated that all 13,852 parts were being offered to the contractor in return for a cost reduction on the contract. Another example of an item with inventory excess to current requirements was a special cable assembly also used on the Navy's F-18 model A/D aircraft's forward-looking infrared radar. The item was being phased out by the Navy, and the last purchase was in fiscal year 2006 for six parts valued at $76,087 to support the Coast Guard's continued use of the item. However, since the Navy did not know the Coast Guard's requirements for this item, it did not determine the proper level of inventory to carry for this item. An example of an inventory deficit that should have been more predictable because it involved a planned program alteration was a valve assembly used on various ship hulls for firefighting and air conditioning systems. The item is being phased in to support a planned ship alteration. We identified it as having an on-order excess of 16 parts valued at $77,021 in our analysis as of September 30, 2007, but by March 2008 this item was in a deficit position. This case illustrates the challenges Navy managers face in predicting demand for an item, even when demand is driven by a planned program change. Navy managers said that demands Navy-wide have been decreasing for reasons they do not fully understand, and they provided data submitted by managers of ships' inventory showing that two-thirds of demand forecasts were incorrect by more than 10 percent. In order to meet materiel availability support goals, managers said, they need to err on the side of having rather than not having the items. Furthermore, incomplete or inaccurate data can cause widespread problems in cases where the Navy relies on automated data processing of past recurring demand requisition history to predict future customer demands and then adjusts these data only when changes occur that are significant enough to be flagged.
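The 10 percent figure above implies a simple accuracy check of the kind DOD's regulation calls for: compare each item's forecast with actual demand and flag forecasts whose error falls outside an established parameter. A minimal sketch, with hypothetical items and a 10 percent threshold:

```python
def flag_inaccurate_forecasts(items, threshold=0.10):
    """Return items whose absolute forecast error exceeds the threshold.
    Error is measured relative to actual demand; all items are hypothetical."""
    flagged = []
    for name, forecast, actual in items:
        if actual == 0:
            if forecast > 0:
                flagged.append((name, "demand never materialized"))
            continue
        error = abs(forecast - actual) / actual
        if error > threshold:
            flagged.append((name, f"{error:.0%} error"))
    return flagged

items = [("valve assembly", 100, 95),  # within 10 percent; not flagged
         ("engine blade", 500, 250),   # 100 percent error; flagged
         ("cable assembly", 40, 0)]    # forecast demand never materialized; flagged
print(flag_inaccurate_forecasts(items))
```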
Navy managers said they actively manage items that are in high demand, costly, or identified for other reasons; the remaining items often require less attention. They said that Navy policy allows for automated procurements of all items costing less than $50,000. In the aviation community, these buys represented an average of about 52 percent of the total buys and 7 percent of the total value of procurements between fiscal years 2005 and 2007. With thousands of items to manage and generally little time to spend on all but the highest value, most significant, or problem items, Navy managers rely on the historical demand data provided electronically from requisitions. Navy managers observed that some customers and some secondary inventory items are more predictable than others. They cited problems, including a lack of communication and coordination among key personnel. For example, they said that the nuclear propulsion community is better coordinated because the engineers, contract managers, and inventory managers are collocated and work closely with program officials, maintenance locations, and contractors. However, for aviation and maritime support equipment such as mobile generators or test equipment, a variety of issues have made demands more difficult to predict. For example, support equipment is used on multiple platforms, needs periodic calibration, and may have more obsolescence issues. They observed that having timely, complete, and reliable data, as well as coordinated communications among contract, maintenance, program, inventory, and contractor officials and other suppliers, can improve demand data predictability. While the Navy recognizes that unpredictable demand is a driving factor in the lack of alignment between inventory and current requirements, it has not systematically evaluated why its demand-based forecasts fluctuate, why demands across the Navy inventory are decreasing, and how demand fluctuations vary among item manager groups or across items. The Navy does not formally track the accuracy of its demand forecasts or what can be done to improve them. Navy officials also said that many Navy secondary inventory items require long production lead times, rendering orders for these items more vulnerable to inaccuracy due to demand fluctuations. In addition, they said that while they could improve demand forecasting, this would increase administrative support costs and would not be affordable across the Navy supply system. However, the Navy could not provide data specifying what these costs would be. In addition, the Navy has not determined the extent to which it could avoid costs by purchasing fewer items in accordance with more accurate, updated demand data. Although the Navy acknowledged that demand unpredictability is a driving factor in the lack of alignment between inventory and current requirements, it has not adjusted certain inventory management practices to incorporate flexibility for accommodating demand fluctuations. We identified three specific areas—initial provisioning management, on-order management, and retention management—where current practices contributed to the Navy having significant amounts of inventory in excess of current requirements. Under DOD’s supply chain management regulation, calculated risks may be taken during the initial provisioning for selected items when program uncertainties or other circumstances make such risks acceptable. 
Navy inventory managers told us they rely on weapon system program managers to identify the inventory requirements needed to meet initial provisioning estimates. However, they said these estimates often prove to be inaccurate. For example, configuration changes may be made to the system, or parts may last longer or wear out sooner than initially estimated. As a result, some items that are purchased based on the initial provisioning estimates are ultimately not needed to meet requirements. For example:

- One item, a sonar set used on the Los Angeles Class submarine, had nine parts in inventory, of which seven (valued at $69,314) were identified as excess to current requirements. The estimated demand for these parts—which went through initial provisioning in 1991—did not materialize, and the parts have been in inventory since that time. Navy managers noted this was not uncommon with initial provisioning.

- Another item, an electronic module used in a number of ship and air combat systems by the Navy and the Air Force, was last purchased in 1988. Nineteen parts were purchased, of which 15 (valued at $48,363) were currently identified as excess. Initial provisioning demand was based on engineering estimates that proved to be inaccurate. Navy managers said that inaccurately high or low estimates happen with some regularity.

The Navy’s inventory management practices for on-order items limit flexibility in modifying purchase decisions in cases where demand has changed. Modifying purchase decisions can include reducing or canceling the quantities being purchased. The Navy identifies purchase requests and contracts for modification when quantities being purchased exceed the sum of requirements and an added “termination protection level.” The amount of a contract that is canceled is the portion that exceeds the protection level. Because the protection level often exceeds an item’s economic order quantity, purchase requests and contracts for inventory that exceeds requirements often are not considered for cancellation, or the amount canceled is limited by the protection level. Thus, while modification of purchase contracts can be triggered when assets exceed protection levels, these protection levels are often set so high that they limit modification actions. Navy managers said they reduce or cancel purchases only when quantities of an item exceed established protection levels. They added that protection levels provide an effective safeguard against canceling a purchase decision only to have to place new orders when demand for an item increases. In our follow-up discussions with 10 Navy aviation managers responsible for on-order inventory that exceeded current requirements, none of the items involved a termination action. In one example involving a holdback bar assembly, the Navy had 31 on-order parts valued at $103,124 that exceeded current requirements. Although items are reviewed at least quarterly for termination, managers took no action on this item because of the established protection level. Also, managers had been informed that some of these items might be needed for use in Iraq. While cancellation of on-order inventory can reduce purchases of unneeded inventory in response to changes in demand, a relatively small proportion of the Navy’s total inventory exceeding requirements is on order compared to the amount that is already on hand.
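The cancellation rule described above can be sketched compactly. This is our reading of the rule as Navy managers described it, not the Navy's actual algorithm, and the quantities in the illustration are hypothetical.

```python
def cancelable_quantity(on_hand, on_order, requirements, protection_level):
    """Sketch of the termination review rule: an order is a candidate for
    reduction or cancellation only when total assets exceed requirements
    plus the termination protection level, and only the portion above
    that threshold (capped by what is on order) may be canceled."""
    threshold = requirements + protection_level
    excess = (on_hand + on_order) - threshold
    return max(0, min(excess, on_order))

# A high protection level shields the whole order, even though assets
# exceed requirements; a lower level would allow a partial cancellation.
print(cancelable_quantity(on_hand=40, on_order=31, requirements=35,
                          protection_level=50))  # 0 parts cancelable
print(cancelable_quantity(on_hand=40, on_order=31, requirements=35,
                          protection_level=10))  # 26 parts cancelable
```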
As shown in figure 6, about 98 percent of the value of the Navy’s secondary inventory that exceeded current requirements was on hand, and just 2 percent of the value was on order, in the years we reviewed. DOD’s supply chain materiel management regulation addresses the management of on-order items and includes a number of provisions intended to provide for effective and efficient end-to-end support. For example, when economic order quantity methods are used in making purchase decisions, the regulation requires that every attempt be made to purchase materiel under indefinite delivery and indefinite quantity contracts so that order quantities and delivery times are reduced. Our analysis of Navy inventory data showed that the preponderance of items purchased in economic order quantities was already on hand. Of the $1.63 billion applied to economic order quantity in fiscal year 2007, about $1.37 billion (84 percent) was on hand and $260 million (16 percent) was on order. More closely managing the purchase of economic order quantities can add some flexibility in minimizing investments in secondary inventory. However, the Navy loses this flexibility once the inventory is delivered.

Although prior studies by our office and LMI have identified weaknesses in DOD components’ inventory retention practices, the Navy has not implemented corrective actions recommended in these reports. As a result, the Navy’s inventory retention practices have contributed to the significant levels of secondary inventory exceeding current requirements, including a substantial amount of inventory that had no projected demand. As discussed earlier, our analysis showed the Navy annually held about $1.9 billion of its secondary inventory in economic and contingency retention categories in fiscal years 2004 through 2007. The Navy has a retention and disposal program aimed at identifying inventory that should be retained and inventory that is potential excess and should be considered for disposal or reutilization. The Navy’s inventory retention policy calls for an economic retention level to ensure that an item is available for a specified number of years. Economic retention formulas are applied to inventory items based in part on program status. For example, in a steady program the Navy wants a minimum of three items to be available for economic retention for 8 years. Different formulas would apply to secondary inventory associated with increasing or declining programs. According to Navy managers, they annually review the program status of inventory items to ensure that the correct economic retention formula is applied to each. Additionally, the Navy has contingency retention requirements to preclude disposal of assets that might be needed for future nonrecurring demand, such as outfitting or planned maintenance actions; items used primarily in wartime that have limited use in peacetime; and future foreign military sales. Navy policy also directs that, with some exceptions, materiel normally not be disposed of within 7 years of its materiel support date, to prevent premature disposal decisions based on initial provisioning forecasts. These economic and contingency retention requirements, along with potential excess stock, are to be reviewed semiannually and prior to disposal, and the results of these reviews are to be briefed to the Naval Supply Systems Command prior to the final stratification report.
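To make the steady-program rule above concrete, here is one plausible reading of it: retain enough stock to cover the specified number of years of forecast demand, with a floor of three items. This is an illustrative sketch only; the Navy's actual formulas, which vary with program status, are not spelled out in this report.

```python
def economic_retention_level(annual_demand_forecast, years=8, floor=3):
    """Illustrative economic retention level for a steady program:
    stock to cover `years` of forecast demand, but never fewer than
    `floor` items. A hypothetical reading, not the Navy's formula."""
    return max(floor, round(annual_demand_forecast * years))

print(economic_retention_level(0.2))  # low-demand item: the 3-item floor applies
print(economic_retention_level(5.0))  # 8 years x 5 per year = 40 items retained
```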
Prior reports by our office and LMI have identified weaknesses in DOD components’ retention practices and recommended corrective actions. In 2001, we reported that DOD components had not properly documented the approaches they had taken in making economic retention decisions, lacked sound analytical support for the maximum levels they used, and had not annually reviewed their methodologies for making economic retention decisions, as required by DOD’s supply chain regulation. We recommended that DOD establish milestones for reviewing the approaches used to decide whether to hold or dispose of economic retention inventory and annually review these approaches, as DOD regulations require, to ensure sound support for determining economic retention inventory levels. In responding to our report, DOD stated that further study of retention practices was needed, noting that the National Defense Authorization Act for Fiscal Year 2000 directed DOD to sponsor an independent study on secondary inventory and parts shortages. DOD subsequently tasked LMI in 2001 and again in 2003 to examine whether current economic retention policy requirements and procedures could be improved. LMI’s review yielded recommendations similar to ours. In 2006, we reported that DOD had yet to implement our 2001 recommendations on economic retention inventory management, and we reiterated the need to implement them. We noted in that report that DOD places emphasis on purging from its inventory items that no longer support its mission and needlessly consume warehouse space. We further found that some DOD components had not followed DOD policies and procedures to ensure they were retaining the appropriate amount of contingency retention inventory.

A separate LMI study of the Air Force’s economic retention practices identified the need to incorporate new techniques for accommodating demand uncertainty. DOD then tasked LMI to repeat the analysis for the other components and to address the retention of materiel in the DOD supply system. LMI reported in July 2007 that the question of retaining or disposing of inventory is subject to demand uncertainties. It found that the DOD regulation correctly defines the economics of retention and the need to use economic analysis and up-to-date cost factors when deciding what to retain. Among other things, LMI linked retention practices with demand forecasting and called for components to use additional techniques for more accurately determining the probability of future demand or repurchase. For example, it called on the services to determine whether an item with no recent demand history is still part of a weapon system configuration and said that items with extended periods of no demand should be candidates for item reduction. LMI also recommended augmenting traditional demand forecast accuracy metrics with a measure of bias to identify the potential for overforecasting, and adjusting forecasting methods accordingly. It noted that some forecast methods have a tendency toward positive bias, with the result that forecasts are too high more often than they are too low. This leads to inflated inventory levels, especially for low-demand items, which can be harder to sell than high-demand items. LMI called for monitoring demand forecasting methods to identify bias, which can lead to overinvestment in inventory. We found no evidence that the Navy had taken these actions. On the basis of our review, we believe these actions could strengthen the management of the Navy’s secondary inventory.
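The bias measure LMI recommended can sit alongside a traditional accuracy metric. A minimal sketch, with hypothetical quarterly demand data: mean absolute percentage error captures accuracy, while the signed mean percentage error exposes systematic overforecasting.

```python
def forecast_metrics(forecasts, actuals):
    """Return (accuracy, bias): mean absolute percentage error and
    signed mean percentage error. A persistently positive bias means
    forecasts run high, which inflates inventory levels."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    mape = sum(abs(e) / a for e, a in zip(errors, actuals)) / len(actuals)
    bias = sum(e / a for e, a in zip(errors, actuals)) / len(actuals)
    return mape, bias

forecasts = [120, 110, 130, 125]  # hypothetical quarterly demand forecasts
actuals = [100, 105, 100, 110]    # hypothetical observed demand

mape, bias = forecast_metrics(forecasts, actuals)
print(f"accuracy (MAPE): {mape:.1%}, bias: {bias:+.1%}")
```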
As one example, although the Navy continues to have a substantial amount of inventory each year for which it shows no projected demand (about 85,700 unique items valued at over $1.9 billion in fiscal year 2007), data have not been developed to show whether these items are still part of a current weapon system configuration, have had extended periods of no demand, and should be candidates for item reduction. In addition, the Navy could not document that it has conducted the required annual reviews to validate its retention decision practices. DOD’s regulation states that to ensure that economic and contingency retention stocks correspond with current and future force levels, the components shall review and validate their methodologies for making economic and contingency retention decisions. The review shall occur at least annually, and the inventory management organization commander or designee shall attest to its validity in writing. The methodology used to set economic retention levels should be based on economic analysis that balances the cost of retention and the cost of disposal. Under the regulation, the service components’ reviews should focus on better analyses supporting retention decisions, using forecasting models that take into account potential upward or downward trends in demand and/or the uncertainties of predicting long-term demand based on historical data, as well as improved estimates for the costs used in retention decision making. Contingency retention reviews should focus on verifying that the reason for contingency retention still exists and that the reason is properly recorded.

Navy officials said briefings provided to the Naval Supply Systems Command prior to the final stratification review include economic retention data. However, we do not believe these briefings fulfill the DOD requirement for an annual review to which the commander attests in writing. In addition, these briefings do not address the elements set out in DOD’s regulation, such as validation that retention levels are based on economic analysis balancing retention and disposal costs. Navy officials also said they performed a full study of the execution of the Navy’s economic retention policy in 2005. During the study, they verified that the model was in compliance with policy. They also performed sensitivity analysis of the model, which confirmed that the model continues to perform cost-effective retention computations. They provided a briefing that summarized the results of this study and recommended maintaining the economic retention policy “as is,” continually monitoring the retention policy to identify methods to improve cost estimates, exploring the benefits of no-demand options, exploring reductions in minimum retention limits, and continuing to proactively dispose of obsolete materiel while monitoring DLA warehousing costs. While this study may be useful to the Navy in managing retention inventory, as stated above, we do not believe it fulfills the requirement for an annual review to which the commander attests in writing.

Although the Navy has established a chief management officer and deputy chief management officer for business transformation, it has not defined what role, if any, these individuals will play in overseeing inventory management improvement. The costs of DOD’s business operations have been a continuing concern.
In April 2008, for example, the Defense Business Board raised concerns that DOD had not aggressively reduced the overhead costs related to supporting the warfighter, which it noted accounted for about 42 percent of DOD’s total spending each year. The Defense Business Board recommended that DOD align strategies to focus on reducing overhead while supporting the warfighter. In May 2007, DOD established a chief management officer position with responsibility for improving and evaluating the overall economy, efficiency, and effectiveness of the department’s business activities. The Navy also established a chief management officer position, effective April 2008. Both DOD and the Navy planned to have a deputy chief management officer actively implementing business transformation by October 2008. Although the Navy’s chief management officer and deputy chief management officer would not likely have direct responsibility for inventory management, they have been assigned responsibility for transforming DOD’s business operations. Therefore, these newly designated positions provide an opportunity for an enhanced level of oversight of inventory management improvement.

The Navy has accumulated and retained levels of secondary inventory each year that exceed current requirements, without justifying that these inventory levels are sized to minimize DOD’s investment. When the Navy invests in the purchase of inventory items that become excess to its requirements, these funds are not available to meet other military needs. Taking steps to reduce the levels of inventory exceeding requirements could help to ensure that DOD is meeting supply performance goals at least cost. The Navy lacks metrics and goals for tracking and assessing cost efficiency along with supply availability, customer wait time, and other supply performance metrics and goals. Among other things, cost-efficiency metrics and goals could provide a basis for effective management and oversight of inventory reduction efforts. Much of the inventory exceeding current requirements, as well as the inventory deficits, resulted from inaccurate demand forecasts, which the Navy attributed to the unpredictability of demand. However, the Navy has not systematically evaluated and addressed demand unpredictability or adjusted certain inventory management practices to enhance flexibility in adapting to fluctuations in demand. In the absence of such actions, the Navy will likely continue to purchase and retain items that it does not need and then spend additional resources to handle and store these items. Finally, since inventory management is part of the Navy’s broader business operations and transformation, it is reasonable to expect the newly established chief management officer and deputy chief management officer to exercise some level of oversight of the Navy’s inventory management improvement efforts. Strengthening the Navy’s inventory management—while maintaining high levels of supply availability and meeting warfighter needs—could reduce support costs and free up funds for other needs.

To improve the management of the Navy’s secondary inventory, we recommend that the Secretary of Defense direct the Secretary of the Navy, in conjunction with the Commander, Naval Supply Systems Command, and the Commander, Naval Inventory Control Point, to take the following four actions:

- Establish metrics and goals for tracking and assessing the cost efficiency of inventory management, and incorporate these into existing management and oversight processes.
- Evaluate demand forecasting procedures to identify areas where forecasts have been consistently inaccurate, correct any systemic weaknesses in forecasting procedures, and improve communications among stakeholders, to include promptly relaying changes in programs and other decisions that affect purchases of spare parts. Further, the Commander, Naval Supply Systems Command, and the Commander, Naval Inventory Control Point, should develop an evaluation plan and interim milestones for assessing the impact of ongoing efforts and take additional corrective actions, if warranted, to improve demand forecasting for secondary inventory.

- Revise inventory management practices to incorporate the flexibility needed to minimize the impact of demand fluctuations, with specific attention to practices regarding initial provisioning management, on-order management, and retention management. Further, the Commander, Naval Supply Systems Command, and the Commander, Naval Inventory Control Point, should develop an evaluation plan and interim milestones for assessing the impact of ongoing efforts and take additional corrective actions, if warranted, to incorporate flexibility into inventory management practices.

- Ensure that the required annual reviews validating the methodologies used for making retention decisions are performed and documented.

We also recommend that the Secretary of the Navy direct the Navy’s Chief Management Officer and Deputy Chief Management Officer to exercise appropriate oversight of Navy inventory management improvement, in order to align improvement efforts with overall business transformation and to reduce support costs.

In its written comments on a draft of this report, DOD concurred with our recommendations and identified corrective actions and estimated completion dates for those actions. On the basis of DOD’s comments, we have modified two of our recommendations. The Navy also provided technical comments, which we have incorporated as appropriate. The department’s written comments are reprinted in appendix II.

Although it concurred with our recommendations, DOD took issue with our finding that 40 percent of the Navy’s secondary inventory exceeded current requirements and stated that it was important to frame this finding in proper context. DOD commented that 50 percent of the inventory we portrayed as excess to current requirements is applicable to the 2-year budget horizon; another 10 percent is retained as economic retention stock, which is less expensive to retain than to dispose of and later procure; and 30 percent is contingency retention stock, which is held for specific contingencies, leaving only 10 percent identified as potential excess. It said the department will continue to focus on reducing potential excess, as well as on improving forecasts and ensuring a correct balance between the cost to hold inventory and the cost to dispose of and repurchase it. For the purposes of our analysis, we defined excess inventory as the portion of the inventory that exceeds the requirements objective, which is defined in the department’s supply chain materiel management regulation. As we noted in the report, we selected the requirements objective as our baseline because it includes the requirements used to determine when to order new parts. In other words, if the Navy had enough parts to meet the requirements objective, it would not purchase new parts. The inventory categories and data cited by DOD in its comment are discussed in the report.
The department’s comment places too little emphasis on the need to reduce the accumulation and retention of inventory that exceeds current requirements, which amounted to about $7.5 billion each year. When the Navy invests in inventory sooner than it is needed, the chances increase that more inventory will become excess, and funds used to purchase inventory before it is needed are not available to meet other military needs. Thus, we continue to believe that taking steps to reduce the high levels of inventory exceeding current requirements could help ensure that the Navy is meeting supply performance goals at least cost. Some of the actions that DOD identified in its responses to our specific recommendations should help.

DOD concurred with our recommendation that the Navy establish metrics and goals for tracking and assessing the cost efficiency of inventory management. It said the Naval Supply Systems Command will incorporate into existing management and oversight processes a metric and goal for tracking and assessing the cost efficiency of inventory management, and it identified October 31, 2009, as the estimated completion date for this action.

DOD concurred with our recommendation that the Navy improve demand forecasting procedures and communications among stakeholders. However, DOD cited ongoing Navy efforts to evaluate current forecasting procedures and tools, implement a long-planned enterprise business information system, and continue its annual training of inventory managers, and it did not identify additional corrective actions beyond those already planned. DOD estimated these actions would be completed by September 30, 2010. While the ongoing Navy efforts cited by DOD in its comment may have a positive impact, we continue to believe that the Navy could derive benefits from a systematic evaluation of its demand forecasting procedures. Therefore, the Navy should establish an evaluation plan and interim milestones for assessing the impact of ongoing efforts and take additional corrective actions, if warranted. We have modified our recommendation accordingly.

DOD concurred with our recommendation that the Navy revise inventory management practices to incorporate the flexibility needed to minimize the impact of demand fluctuations. However, DOD cited ongoing Navy efforts to improve inventory management practices, including those related to initial provisioning and on-order inventory management, and estimated these corrective actions would be completed by September 30, 2010. While the ongoing Navy efforts cited by DOD in its comment may have a positive impact, its comment provided no indication that the Navy plans any changes to the way it conducts business. Therefore, the Navy should establish an evaluation plan and interim milestones for assessing the impact of ongoing efforts and take additional corrective actions, if warranted. We have modified our recommendation accordingly.

DOD concurred with our recommendation that the Navy perform and document the required annual reviews validating the methodologies used for making retention decisions. According to DOD, the Naval Supply Systems Command will modify its management internal control program to ensure this requirement is met, and DOD estimated this corrective action would be completed by May 31, 2009. We believe this planned action is responsive to our recommendation.
DOD concurred with our recommendation that the Navy direct its Chief Management Officer and Deputy Chief Management Officer to exercise appropriate oversight of Navy inventory management improvement to align improvement efforts with overall business transformation and to reduce support costs. DOD said the Navy is developing a business transformation implementation strategy to align with Office of the Secretary of Defense actions in this area. Through this development process, the Navy will determine the appropriate role the Chief Management Officer should exercise in inventory management oversight. DOD estimated that it would complete this corrective action by April 30, 2009. We believe this planned action is responsive to our recommendation.

We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretary of the Navy; the Secretary of the Air Force; the Director, Defense Logistics Agency; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov/. If you or your staff have any questions concerning this report, please contact me at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

To determine the extent to which the Navy’s on-order and on-hand secondary inventory reflected the amount needed to support current requirements, we obtained the Navy’s Central Secondary Item Stratification Budget Summary and item-specific reports for September 30 of each fiscal year from 2004 through 2007. The stratification reports serve as a budget request preparation tool and as a mechanism for matching assets to requirements. Our analysis was based on the Navy’s item stratifications within the opening position table of the Central Secondary Item Stratification Reports. To validate the data in the budget stratification reports, we generated summary reports using electronic data and verified our totals against the summary stratification reports obtained from the Navy. After discussing the results with Navy managers, we determined that the data were sufficiently reliable for the purposes of our analysis and findings. Upon completion of the data validation process, we revalued the Navy’s secondary inventory items identified in its budget stratification summary reports because these reports value usable items and items in need of repair at the same rate and do not take into account the cost of repairing broken items. We computed the new value for items in need of repair by subtracting repair costs from the unit price for each item. We also removed overhead charges, called cost recovery rates, from the value of each item. Using information obtained from Navy managers, we identified and removed from our data set items managed under Performance Based Logistics (PBL) contracts. According to the Navy, published stratification data on PBL items are inaccurate because the Navy does not determine requirements for these items. Table 8 summarizes the Navy inventory data we used, showing the annual averages for items, parts, and value of the Navy’s inventory, organized by supply cognizance code.
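The revaluation just described amounts to two arithmetic adjustments per item. A minimal sketch, with hypothetical prices and an assumed proportional overhead surcharge (the report does not specify the exact mechanics of the cost recovery removal):

```python
def revalued_item(unit_price, repair_cost, cost_recovery_rate, serviceable):
    """Value an item as described in the methodology: items in need of
    repair are worth unit price minus repair cost, and the overhead
    (cost recovery) charge is stripped out. Treating overhead as a
    proportional surcharge is our assumption."""
    value = unit_price if serviceable else unit_price - repair_cost
    return value / (1 + cost_recovery_rate)

# A broken $10,000 part needing $3,500 of repair, with a hypothetical
# 15 percent cost recovery rate, is carried at about $5,652.
print(round(revalued_item(10_000, 3_500, 0.15, serviceable=False), 2))
```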
In presenting the value of inventory in this report, we converted then-year dollars to constant fiscal year 2007 dollars using Department of Defense (DOD) Operations and Maintenance price deflators. We considered Navy inventory to exceed current requirements if more inventory was available than was needed to satisfy the requirements in the opening position table of the Navy’s budget stratification report. Collectively, these requirements are referred to by DOD as the “requirements objective,” defined as the maximum authorized quantity of stock for an item. However, if more inventory is on hand or on order than is needed to satisfy these requirements, the Navy does not consider the inventory beyond current requirements to be unneeded. Instead, the Navy uses this inventory to satisfy future demands over a 2-year period, economic retention requirements, and contingency retention requirements. Only after applying inventory to satisfy these additional requirements would the Navy consider that it has more inventory than is needed and consider this inventory for potential reutilization or disposal. We do not agree with the Navy’s practice of not identifying inventory used to satisfy these additional requirements as excess, because this practice overstates the amount of inventory needed to be on hand or on order by billions of dollars. The Navy’s requirements determination process does not consider these additional requirements when it calculates the necessary amount of on-hand and on-order inventory; that is, if the Navy did not have enough inventory on hand or on order to satisfy these additional requirements, the requirements determination process would not result in additional inventory being purchased to satisfy them.

We consider the Navy to have inventory deficits if levels of on-hand inventory are insufficient to meet the reorder level, which the Navy defines as the level of on-hand assets at the time an order must be placed to achieve the acceptable stock-out risk. Normally, item managers place an order for the number of parts by which assets have fallen below the reorder level, plus an economic order quantity. However, due to variation in acquisition lead times, these parts may not be delivered when they are needed. We did not include the procurement cycle (economic order quantity) requirement when calculating inventory deficits, because this requirement defines the maximum level of on-hand or on-order inventory that may be above the reorder level and does not define a minimum level of on-hand inventory. For comparison purposes with the excess inventory, we calculated the amount of inventory that the Navy would have to acquire to meet acquisition lead time and economic order quantity requirements in order to achieve current operating requirements for the items where there was a deficit.

To determine the extent to which the Navy’s on-hand and on-order secondary inventory reflects the amount of inventory needed to support requirements, we reviewed DOD and Navy inventory management policies, past GAO products on DOD and Navy inventory management practices for secondary inventory items, and other related documentation. We also created a database that compared the Navy’s current inventory to its current requirements and computed the amount and value of secondary inventory exceeding or not meeting current operating requirements. We also determined how the Navy applied the inventory that exceeded current requirements to future demands, economic retention, contingency retention, or potential reutilization/disposal.
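The deficit definition above can be stated compactly. A minimal sketch, with hypothetical quantities, showing both the deficit as we measured it and the larger quantity an item manager would normally order:

```python
def deficit_quantity(on_hand, reorder_level):
    """Deficit as defined in this analysis: the shortfall of on-hand
    stock below the reorder level. The economic order quantity is
    excluded because it caps stock above the reorder level rather
    than defining a minimum."""
    return max(0, reorder_level - on_hand)

def normal_order_quantity(on_hand, reorder_level, eoq):
    """What an item manager would normally order at the reorder point:
    the shortfall below the reorder level plus an economic order
    quantity."""
    return deficit_quantity(on_hand, reorder_level) + eoq

print(deficit_quantity(on_hand=12, reorder_level=20))               # 8-part deficit
print(normal_order_quantity(on_hand=12, reorder_level=20, eoq=15))  # order 23 parts
```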
We determined how much of the Navy’s inventory was in serviceable condition and compared this portion to the inventory in unserviceable condition. We also used codes provided by the Navy to determine the program status of items we identified as meeting or exceeding current requirements.

We developed a survey to estimate the frequency of reasons why the Navy maintained inventory items that were not needed to support current requirements or that did not meet requirements. The survey asked general questions about the higher assembly (component parts) and/or weapon systems that the items support and about the level of experience of the item manager with responsibility for the item. In addition, we asked survey respondents to identify the reason(s) why inventory exceeded current requirements or was in deficit. We provided potential reasons, identified during our discussions with Navy managers, from which respondents could select. Since the list was not exhaustive, we also provided an open-ended response option to allow other reasons to be given. In addition to an expert technical review of the survey by a survey methodologist, we conducted pretests with Navy managers for aviation and maritime items in Philadelphia and Mechanicsburg, Pennsylvania, prior to sending out the final survey instrument. We revised the survey based on findings from the pretests. We e-mailed this electronic survey to the specific Navy managers in charge of the sampled unique aviation and maritime items at the Navy’s Inventory Control Point locations in Philadelphia and Mechanicsburg, Pennsylvania. We conducted this survey from May 2008 through July 2008.

To estimate the frequency of reasons for inventory not needed to meet requirements and inventory deficits, we drew a stratified random probability sample of 424 unique items—353 unique items with on-hand inventory not needed to support current requirements, 31 unique items with on-order inventory not needed to support current requirements, and 40 unique items with inventory deficits—from a study population of 126,331 items (112,567 with inventory not needed to meet current requirements and 13,764 with inventory deficits). These categories identified a combined value of $6.8 billion of inventory not needed to meet current requirements. All of these items met our criteria to be included in our study population of items not needed to meet current requirements. Additionally, based on our analysis of the stratification data, all of the 13,764 unique items with inventory deficits, valued at $462 million, met our criteria to be included in our inventory deficit study population. We stratified using the scheme shown in table 9, dividing the on-hand and on-order excess into three substrata each by the amount of supply on hand and stratifying within Philadelphia and Mechanicsburg. With the inclusion of a stratum for inventory deficit items within each office, our sample contained 14 total strata. The divisions of the population, sample, and respondents across the strata, as well as the number of responses by stratum, are also shown in table 9. We sent 424 electronic surveys—one for each item in the sample—to the Navy managers identified as being responsible for these items. Inventory control for three of the items in our sample had recently been transferred to the Defense Logistics Agency, so we treated these cases as out of scope. We did not receive completed data collection instruments for three of the remaining items in our sample.
We received 418 usable responses to our surveys, providing a total response rate of 98.6 percent. Each sampled item was subsequently weighted in the final analysis to represent all the members of the target population. Because we followed a probability procedure based on random selections, our sample of unique items is only one of a large number of samples that we might have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results in 95 percent confidence intervals. These are intervals that would contain the actual population values for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from our sample have margins of error (that is, half-widths of confidence intervals) of plus or minus 5 percentage points or less, at the 95 percent confidence level, unless otherwise noted.

In addition to sampling errors, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the survey, the data collection, and the data analysis to minimize these nonsampling errors. We reviewed each survey to identify unusual, incomplete, or inconsistent responses and followed up with item management specialists by telephone to clarify those responses. In addition, we performed computer analyses to identify inconsistencies and other indicators of errors, and a second independent reviewer checked the data analysis to further minimize such errors.

To determine the reasons for the types of answers given in the surveys, we held additional on-site interviews with Navy inventory managers on 70 of the items in our sample. We chose an equal number of aviation and maritime items, based on the highest value of inventory, identifying 10 each from the on-hand excess, on-order excess, and deficit categories. We also held follow-up discussions on 10 other items where we found that demand had been increasing yet there were excess parts or, conversely, where demand had been decreasing yet there was an inventory deficit. These cases were atypical because, according to Navy managers, demand increases would likely lead to deficits and, conversely, demand decreases would likely lead to increases in inventory excess to requirements. These included 5 aviation items and 5 maritime items, chosen based on the pattern of demand forecasts we observed for these items from fiscal year 2004 through 2007. During these discussions we obtained additional detailed comments and documentation related to demand, demand forecasting, acquisitions, terminations, and retention and disposal actions.

We conducted this performance audit from November 2007 to December 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
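To illustrate how stratum-weighted estimates and their 95 percent confidence intervals of the kind reported above can be computed, here is a minimal sketch. The three strata and all counts are hypothetical, not the actual table 9 design; the formula is the standard stratified estimator of a proportion with a finite population correction.

```python
import math

strata = [
    # (population size, sample size, respondents citing a demand change)
    (60_000, 120, 90),   # hypothetical stratum 1
    (40_000, 150, 100),  # hypothetical stratum 2
    (26_331, 148, 95),   # hypothetical stratum 3
]

N = sum(pop for pop, _, _ in strata)
estimate = variance = 0.0
for pop, n, hits in strata:
    p = hits / n
    weight = pop / N
    estimate += weight * p
    # Stratified variance of a proportion, with finite population correction.
    variance += weight**2 * (1 - n / pop) * p * (1 - p) / (n - 1)

half_width = 1.96 * math.sqrt(variance)  # 95 percent margin of error
print(f"estimate: {estimate:.1%} plus or minus {half_width:.1%}")
```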
On the basis of information obtained from the Navy on the reliability of its inventory management systems’ data, as well as the survey results and our follow-up analysis, we believe that the data used in this report were sufficiently reliable for reporting purposes.

In addition to the contact named above, Thomas Gosling, Assistant Director; Carl Barden; Lionel C. Cooper, Jr.; Foster Kerrison; Carl Ramirez; Minnette Richardson; Steve Pruitt; and Cheryl Weissman made key contributions to this report.
Since 1990, GAO has designated the Department of Defense's (DOD) inventory management as a high-risk area. It is critical that the military services and the Defense Logistics Agency effectively and efficiently manage DOD's secondary inventory to ensure that the warfighter is supplied with the right items at the right time. It is also imperative that they maintain good stewardship over the billions of dollars invested in their inventory. GAO reviewed the Navy's management of secondary inventory and determined (1) the extent to which on-hand and on-order secondary inventory reflected the amount needed to support current requirements and (2) causes for the Navy's having secondary inventory in excess of current requirements or, conversely, for having inventory deficits. To address these objectives, GAO analyzed Navy secondary inventory data (spare parts such as aircraft and ship engines and their components and accessories) from fiscal years 2004 through 2007.

For the 4-year period GAO examined, the Navy had significantly more inventory than was needed to support current requirements. The Navy also experienced some inventory deficits, though to a far lesser extent. GAO's analysis of inventory data identified an annual average of about $18.7 billion of Navy secondary inventory for fiscal years 2004 to 2007, of which about $7.5 billion (40 percent) exceeded current requirements. About half of the $7.5 billion of inventory exceeding current requirements was retained to meet anticipated future demands, and the remainder was retained for other reasons or identified as potential excess. Based on Navy demand forecasts, inventory that exceeded current requirements was sufficient to satisfy several years, or even decades, of anticipated supply needs. Also, a large proportion of items that exceeded current requirements had no projected demand. The Navy also had an annual average of about $570 million of inventory deficits over this 4-year period. Some items experienced persistent deficits for the 4 years covered in GAO's review. Navy inventory did not align with current requirements over this 4-year period because (1) the Navy has not established the cost efficiency of its inventory management, (2) its demand forecasting effectiveness is limited and requirements for items may change frequently after purchase decisions are made, and (3) it has not adjusted certain inventory management practices in response to the unpredictability in demand. As a result, the Navy had billions of dollars in excess inventory against current requirements each year. DOD's supply chain management regulation requires the military services to take several steps to provide for effective and efficient end-to-end materiel support. For example, the regulation directs the components to size secondary item inventories to minimize DOD investment while providing the inventory needed. However, while the Navy has performance measures related to meeting warfighter needs, it lacks metrics and targets for tracking and assessing the cost efficiency of its inventory management. In addition, although Navy managers most frequently attributed the accumulation and retention of inventory exceeding current requirements to changes in demand, the Navy has not systematically evaluated the effectiveness of its demand forecasting. Problems with demand forecasting that contribute to excess inventory include incomplete and inaccurate data and a lack of communication and coordination among key personnel.
Finally, the Navy has not adjusted certain management practices--in areas such as initial provisioning, modifying purchase decisions for inventory that is on order and not yet in its possession, and retention--to provide flexibility for responding to changes in demand. First, initial provisioning of spare parts based on engineering estimates can result in the purchase of unneeded stock when these estimates prove to be inaccurate. Second, the Navy's management practices for on-order items limit flexibility in modifying purchase decisions in cases where demand has changed. Third, although prior studies have identified weaknesses in inventory retention practices, the Navy has not implemented recommended corrective actions. Also, the Navy's designation of new chief and deputy chief management officer positions provides an opportunity for enhanced oversight of inventory management improvement efforts. Strengthening the Navy's inventory management--while maintaining high levels of supply availability and meeting warfighter needs--could reduce support costs and free up funds for other needs.
The ICD-9 code set was adopted in the United States in 1979 as the standard for documenting morbidity and mortality information for statistical purposes. It was expanded and adopted in 2000 under HIPAA as the standard code set for use in all electronic transactions by covered entities. Specifically, ICD-9 codes are currently used in all U.S. health care settings to document diagnoses and are also used in all U.S. inpatient hospital settings to document procedures. ICD codes are used in a variety of ways by Medicare. For example, Medicare uses the codes to help determine hospital inpatient payment rates based on the Medicare-Severity Diagnosis-Related Groups system, which classifies inpatient stays according to both patients’ diagnoses and the procedures a patient receives. Further, CMS uses the diagnosis codes to determine whether the care provided by physicians is medically necessary and, therefore, eligible for reimbursement.

The transition to ICD-10 is intended to increase the number of codes and, thus, improve providers’ ability to designate the level of specificity when documenting diagnoses and procedures. Specifically, ICD-9 codes are made up of three to five alphanumeric values, while ICD-10 codes are made up of three to seven values, allowing for more codes and increased specificity. While there are approximately 15,000 ICD-9 diagnosis codes, there are approximately 70,000 ICD-10 diagnosis codes. Likewise, there are approximately 4,000 ICD-9 procedure codes, while there are approximately 72,000 ICD-10 procedure codes. The additional codes were defined by the Centers for Disease Control and Prevention and CMS to enable providers and payers to capture greater specificity and clinical information in medical claims. For example, using ICD-10 codes, a provider will be able to identify a body part and the side of the body subject to an evaluation or procedure; the ICD-9 codes do not allow this level of differentiation between the left and right sides of the body. As another example, there is only one ICD-9 code that a provider would enter on a claim for angioplasty (a procedure to restore blood flow through an artery), but there are 854 ICD-10 codes for angioplasty, with these codes including additional details on the affected body parts and the approaches and devices used for the procedure. (Within these 854 codes there will be higher-level generic codes available for entry if a lower level of detail is not needed; therefore, a provider may not need to know or use all 854 codes for angioplasty.)

Another difference between the ninth and tenth versions of the codes lies in the terminology and disease classifications, which have been updated so that they are consistent with new technology and current clinical practice. For example, under ICD-9, when filing Medicare claims, providers use a single code to reflect tobacco use or dependence, while, under ICD-10, they will be able to use a code that indicates a category for nicotine dependence, with subcategories to identify the specific tobacco product and nicotine-induced disorder. In this example, the updated disease classifications for nicotine disorders reflect the increased knowledge of the effects of nicotine. Other differences between the code sets include the addition of new concepts that do not exist in ICD-9 diagnosis codes, such as the expansion of postoperative codes to distinguish between intraoperative and post-procedural complications, and the designation of trimester for pregnancy codes.
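The structural difference is easy to check mechanically. The sketch below is a simplification for illustration only; production systems validate codes against the published code tables rather than inferring a version from length, and the sample codes follow the documented ICD formats.

```python
def possible_code_versions(code):
    """Rough structural test based on code length: ICD-9 codes carry
    three to five alphanumeric values, ICD-10 codes three to seven.
    Real validation looks codes up in the published code tables."""
    body = code.replace(".", "")
    if not body.isalnum():
        return []
    if 3 <= len(body) <= 5:
        return ["ICD-9", "ICD-10"]  # short codes occur in both sets
    if 6 <= len(body) <= 7:
        return ["ICD-10"]           # six- and seven-value codes are ICD-10 only
    return []

print(possible_code_versions("250.00"))    # 5 values: either code set
print(possible_code_versions("S72.001A"))  # 7 values: ICD-10 only
```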
HHS issued a final rule on January 16, 2009, that mandated the use of ICD-10 codes by all HIPAA-covered entities by October 1, 2013; hence, the transition from version 9 to version 10 of the codes was initially to take effect approximately 2 years ago. However, on September 5, 2012, the department issued a final rule that delayed the effective date until October 1, 2014. The Secretary of HHS made this decision because, among other reasons, results of industry surveys and polls had indicated that HIPAA-covered entities throughout the country were not prepared to successfully complete the transition. Subsequently, the Protecting Access to Medicare Act of 2014, enacted April 1, 2014, mandated an additional delay by prohibiting HHS from requiring the use of ICD-10 codes sooner than October 1, 2015. On August 4, 2014, the department issued a final rule that established October 1, 2015, as the new compliance date. Accordingly, on October 1, 2015, all health care transactions that include ICD codes must begin using the tenth version of the codes for services that occur on or after that date. (Transactions with dates of service that occur prior to the transition date of October 1, 2015, must continue to be documented with ICD-9 codes.) Figure 1 illustrates the sequence of events leading to the current compliance date.

Medicare fee-for-service claims that include ICD codes are submitted, processed, and authorized for payment through a combination of stakeholders’ systems and CMS’s internal claims processing systems. For example, health care providers use systems within their practices to complete and submit claims for payment of services covered under the Medicare fee-for-service program, and the Medicare Administrative Contractors (MACs), who administer the processing of the claims, use their own and CMS’s internal systems to complete processing of the claims for approval and authorization of payment. Additionally, other health care insurers receive claims data from CMS that may include ICD codes when payments of health care benefits are shared between these insurers and Medicare. These insurers’ systems must be able to accept and process the data sent by CMS, including the ICD codes. Therefore, all these types of systems would need to be modified by Medicare stakeholders in order to function properly in an electronic claims processing environment when the transition from ICD-9 to ICD-10 is made.

Stakeholders in CMS’s electronic Medicare claims processing environment include providers, health care clearinghouses, and private health care insurers, all of which use their own systems to exchange claims data that include ICD codes with CMS’s internal systems. For example, health care providers use systems within their practices to complete claims for payment of services delivered to their patients, including beneficiaries of the Medicare fee-for-service program. Once the data for a patient visit have been entered into a provider’s system, the claim is electronically submitted for processing and payment authorization, either directly from the provider’s systems or through a health care clearinghouse—an organization that converts nonstandard data elements of health information into standard data elements so that they can be transmitted to and used within other claims processing systems. The claims data are transmitted from the provider’s (or clearinghouse’s) system to a MAC—one of the contractors whose services CMS uses to administer the claims processing requirements of the program.
Each MAC uses one of two standard software modules to accept electronic claims for processing. These systems, referred to as “front-end” systems, are used to first determine whether the data submitted are valid. There are two front-end validation systems—one for Part A/B institutional, physician, and non-physician practitioner claims, and one for durable medical equipment, prosthetics, and orthotics claims. When validating ICD codes, the front-end systems are designed to check that the codes and related data are properly entered on the claim. Specifically, in order to be accepted by the MACs’ systems, the claim must include a diagnosis indicator that specifies whether the ICD-9 or ICD-10 code set is being used, one or more dates of service, and ICD codes. These ICD-related data must be consistent in order to pass through the front-end systems and on to the MACs’ systems. For example, a claim with an ICD-10 diagnosis indicator; a September 30, 2014, date of service; and an ICD-10 code entered into a data field that is 7 values in length should be rejected by the front-end data validation routines because the date of service (September 30, 2014) is earlier than the ICD-10 compliance date (October 1, 2015). Instead, the claims data would require an ICD-9 indicator and code. The claim therefore would not be transmitted from the front-end validation system for further processing by the MAC; rather, it would be rejected and sent back to the submitting provider (via the MAC’s system) for correction. Thus, these front-end data validation systems would require modifications in order to accept and process the expanded 7-value ICD-10 code field in addition to the 5-value ICD-9 code field. The systems would also need to be modified to ensure that they can determine that the version of the codes used is consistent with the date service was delivered—for version 9, a date of service prior to October 1, 2015, and for version 10, a date of service on or after October 1, 2015.

After claims pass the front-end validation routines, each MAC uses its own systems to receive the electronic Medicare claims data, including ICD codes, from providers (or clearinghouses). For example, the MACs use systems such as claims imaging software, interactive voice response systems, provider portals, and workflow management systems to accept providers’ data. A MAC then uses its systems to transmit the claims data to CMS’s systems, which are used to determine whether to approve and authorize payment of the claims. Each of the systems used by a MAC to accept, process, and transmit claims data would require modifications in order to process ICD-10 rather than ICD-9 codes.

While private insurers do not submit health care claims data to CMS, they nonetheless also use systems that would need to be changed to process ICD-10 codes that are included with other claims data sent to them by CMS. Specifically, in cases when a beneficiary is covered by both Medicare and another payer, such as a supplemental Medigap insurer or a primary payer other than Medicare, the other payer (insurer) could receive claims data, including ICD codes, from CMS if it was responsible for paying all or a portion of a claim. In such cases, the other payers’ systems that receive the claims data from CMS would have to be modified to accept ICD-10 rather than ICD-9 codes. The Medicare claims data that are transmitted to CMS by the MACs are to be further processed by four internal systems operating within CMS’s Virtual Data Center.
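The date-of-service consistency rule described above reduces to a small piece of logic. A minimal sketch, assuming for illustration a claim record with just a version indicator and a date of service (the actual front-end systems check many more fields):

```python
from datetime import date

ICD10_COMPLIANCE_DATE = date(2015, 10, 1)

def icd_version_consistent(indicator, date_of_service):
    """Sketch of the consistency rule, not CMS's actual edit logic:
    an ICD-10 indicator requires a date of service on or after the
    compliance date; an ICD-9 indicator requires an earlier date."""
    if indicator == "ICD-10":
        return date_of_service >= ICD10_COMPLIANCE_DATE
    if indicator == "ICD-9":
        return date_of_service < ICD10_COMPLIANCE_DATE
    return False  # unrecognized indicator: reject the claim

# The report's example: an ICD-10 indicator with a September 30, 2014,
# date of service fails validation and would be returned for correction.
print(icd_version_consistent("ICD-10", date(2014, 9, 30)))  # False
print(icd_version_consistent("ICD-9", date(2014, 9, 30)))   # True
```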
The Medicare claims data that are transmitted to CMS by the MACs are to be further processed by four internal systems operating within CMS's Virtual Data Center. These systems are developed and maintained by four information services contractors. The four systems are the

Fiscal Intermediary Shared System (FISS)—the Medicare Part A and Part B claims processing system used to process claims related to medical care provided by institutional providers, such as hospital inpatient and outpatient departments, skilled nursing facilities, and hospices;

Multi-Carrier System (MCS)—the Medicare Part B claims processing system used to process claims related to physician and non-physician practitioners, laboratories, therapy, independent diagnostic testing facilities, and ambulance claims;

ViPS Medicare System (VMS)—used by the Durable Medical Equipment contractors to process claims for medical equipment, such as wheelchairs and walkers, and for prosthetics, orthotics, and medical supplies; and

Common Working File—provides a single data source where the contractors can verify beneficiary eligibility, compare claims history for a beneficiary across the shared systems, and receive prepayment review and approval of claims.

FISS, MCS, and VMS are referred to as "shared systems." The information services contractors who maintain them are called "shared systems maintainers." Collectively, these four systems are used by the MACs to support their review of claims prior to payment and ensure that payments are made to legitimate providers for reasonable and medically necessary services covered by Medicare for eligible individuals.

When CMS's FISS, MCS, and VMS shared systems receive the properly formatted claims data from a MAC's system, additional processing is conducted to determine whether the data, including ICD codes, meet requirements of CMS's payment policies before the claims can be approved for payment. If the systems determine that the data do not meet the requirements, the claim is denied and sent back via the MAC's system to the provider for corrections. For example, the shared systems execute automated prepayment controls called "edits," which are instructions programmed into the system software to identify errors in individual claims and prevent payment of incomplete or incorrect claims. These prepayment edits may rely on an analysis of ICD codes to identify claims for services unlikely to be provided in the normal course of medical care and services for diagnoses that are anatomically impossible.

For example, the analysis conducted by a prepayment edit may identify two ICD-10 codes on a claim that indicate a patient was diagnosed with two broken right femurs, when there is only one femur on a person's right side. As a result, payment of the claim would be denied because the ICD codes used indicate a diagnosis that is anatomically impossible. However, an ICD-10 code that indicates a broken right femur and another that indicates a broken left femur would be accepted because it is anatomically possible for a person to break both femurs. On the other hand, ICD-9 codes entered on the claim for two broken femurs would not indicate that the right and left femurs were both broken, so the analysis conducted by the same prepayment edit would likely identify the ICD coding to be duplicative and, consequently, deny payment of the claim. As such, the software that processes such edits would need to be modified to conduct the analysis based on ICD-10 rather than ICD-9 values because, as the example shows, the logic performed differs for each version of the codes.
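A duplicate-diagnosis prepayment edit of the kind described in the femur example might look, in greatly simplified form, like the following Python sketch. The code values are illustrative stand-ins for real ICD-10 codes, and actual shared-system edits encode far more clinical logic; the sketch works because ICD-10's laterality makes the right-femur and left-femur diagnoses distinct codes, while a repeated code signals an anatomically impossible duplicate.

```python
# Illustrative sketch of a prepayment edit; not shared-system code.
RIGHT_FEMUR_FRACTURE = "S72001A"  # right femur fracture (illustrative value)
LEFT_FEMUR_FRACTURE = "S72002A"   # left femur fracture (illustrative value)

def prepayment_edit(diagnosis_codes):
    """Deny a claim whose codes repeat the same laterality-specific diagnosis."""
    seen = set()
    for code in diagnosis_codes:
        if code in seen:
            return False, "denied: duplicate of %s is anatomically impossible" % code
        seen.add(code)
    return True, "passed edit; claim continues through adjudication"

print(prepayment_edit([RIGHT_FEMUR_FRACTURE, RIGHT_FEMUR_FRACTURE]))  # denied
print(prepayment_edit([RIGHT_FEMUR_FRACTURE, LEFT_FEMUR_FRACTURE]))   # passes
```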
Once a claim has been completely processed by the appropriate shared system, its data are transmitted to the Common Working File, which conducts additional processing to determine whether the claim's beneficiary is eligible for the service for which the claim was filed, compares the claims to other claims filed for that beneficiary across the shared systems, and determines whether payment for the claim should be authorized. This system sends payment authorization and beneficiary information, including ICD codes, back to the shared systems, where they are then processed for payment by the MACs. It also stores beneficiary data that include the ICD codes for use by other CMS systems, such as the systems that store claims data to be used when coordinating benefits between Medicare and private insurers. An overview of CMS's and stakeholders' Medicare fee-for-service claims processing and the related systems that utilize ICD-10 codes is illustrated in figure 2.

Within CMS, business owners in the policy and business groups (such as the Center for Medicare, the Center for Program Integrity, and the Center for Financial Management) are responsible for defining requirements to be supported by the agency's internal systems, including the systems that process fee-for-service claims. Business owners also are responsible for overseeing maintenance of the internal systems, which are operated at CMS's Virtual Data Center. Within the agency, the Center for Medicare and the Office of Technology Solutions are responsible for following the systems development and change management processes for updating the shared systems, including changes needed to support the transition to ICD-10 on October 1, 2015. Specifically, among other responsibilities, the Office of Technology Solutions provides day-to-day oversight of the contractors that perform ongoing systems maintenance and support for Medicare fee-for-service claims processing. The Office of Enterprise Information leads the coordination, development, implementation, and maintenance of the HIPAA electronic data interchange information standards in the health care industry, such as the transition of the ICD standard from revision 9 to revision 10, and is the agency lead for ICD-10 implementation.

CMS's processes require its internal claims processing systems (the shared systems and Common Working File) to be updated quarterly in order to remain current with ongoing changes in health care procedures, technologies, and policies. Medicare program changes made to comply with the electronic data interchange standards used by the health care industry to conduct electronic transactions (such as Medicare claims processing and other transactions defined by HIPAA rules) are a major driver of system update requirements. Thus, the system changes needed to support the transition from ICD-9 to ICD-10 are to be implemented through the agency's quarterly updates. To complete its quarterly system updates, CMS is to follow an established agency-wide software development life cycle process, which defines a change management process for identifying and implementing any changes that need to be made to provide functionality within existing operational systems (and that do not require development of new systems). The need to make changes to the systems is determined when a business requirement is identified based on new legislation or other business requirements, such as the transition to the tenth revision of ICD codes required by HHS.
To begin the process for implementing the system changes, CMS officials within policy and business groups who are affected by new business requirements are to write change requests. A change request is a formal instruction that defines specifications for making modifications to a system or systems, along with updates to the corresponding technical documentation. The change request is to be presented to the shared systems and Common Working File maintainers, as well as the MACs, who are to analyze the business requirements and translate them into system requirements. The maintainers and MACs are to then analyze the system requirements to determine the scope and effort of the changes that would need to be made to the systems. The change requests are then grouped into a release "baseline," which is presented to CMS's Medicare Change Control Board. This board, which is made up of representatives from the policy and business groups that are responsible for the Medicare fee-for-service program, is to then review and approve the change requests to be implemented in the next release. If the release baseline and work to implement changes are approved by the board, the change requests are submitted to the systems maintainers and the MACs, which are to follow the agency's software development life cycle processes and program the systems to implement the functionality needed to meet the new business requirements, such as requirements for processing ICD-10 codes. Any quarterly release may address many change requests for various business requirements. In this regard, the quarterly releases that implement changes to support the ICD-10 business requirements would also include changes to address other business requirements or modifications needed to correct errors in CMS's claims processing systems.

Once the system changes have been made, the contractors are to conduct three levels of testing prior to releasing the updated systems into production. CMS's Office of Technology Solutions is to oversee the first two levels, and the Center for Medicare is to oversee the third level of testing. The first level of testing—alpha testing—is internal testing of the individual systems conducted by the shared systems maintainers. During this level of testing, the maintainers are to conduct test cases to validate that each change request has been addressed and that appropriate changes have been made within the CMS system they maintain. The second level of testing—beta testing—is to be conducted by CMS's single testing contractor to ensure that all the shared systems work together as expected in CMS's claims processing environment.
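The change request and release baseline workflow described above can be modeled, in a greatly simplified and hypothetical form, as follows. The record fields and the change request numbering are assumptions made for illustration, not CMS's actual change management records.

```python
from dataclasses import dataclass, field

# Illustrative data model only; field names and numbering are assumptions.
@dataclass
class ChangeRequest:
    number: str                # e.g., a hypothetical "CR-0001"
    business_requirement: str  # e.g., "update prepayment edits for ICD-10"
    affected_systems: list     # e.g., ["FISS", "MCS"]

@dataclass
class QuarterlyRelease:
    name: str                                     # e.g., "October 2013"
    baseline: list = field(default_factory=list)  # change requests grouped for board review
    approved: bool = False                        # set by the Medicare Change Control Board

release = QuarterlyRelease("October 2013")
release.baseline.append(
    ChangeRequest("CR-0001", "update prepayment edits for ICD-10", ["FISS", "MCS"]))
release.approved = True  # approval gates implementation by maintainers and MACs
print(release)
```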
The third level—user acceptance testing—is to be conducted by the MACs to test the integration of their claims submission systems with CMS's shared systems. This level of testing is supposed to simulate a live production environment in which claims are submitted by providers (or clearinghouses) via the MACs, adjudicated by CMS, and approved or denied for payment. Completion of each of the lower levels of testing is important to ensure that system errors are detected and addressed as soon as possible within a release cycle. User acceptance testing is intended to identify errors that may occur when the stakeholders' and CMS's systems are integrated to simulate claims processing from the point when claims are completed and submitted by the providers' systems, through adjudication and payment authorization by CMS's internal systems. At this level of testing, the MACs are to conduct test cases that simulate a live environment and validate that the interconnections and interfaces between the stakeholders' and CMS's systems function properly. Outcomes of this type of testing can also identify errors that were not detected during lower levels of testing, and any additional changes that were not identified earlier in the change management process but are needed to fully address business requirements. The shared systems and Common Working File maintainers and single testing contractor participate in the user acceptance testing to correct any errors that may not have been detected during the first two levels (alpha and beta testing), and to implement any additional system changes needed to address business requirements.

Further, the MACs are to conduct testing to ensure that any system changes implemented for a quarterly software update do not adversely affect the ongoing operations of the system or introduce any new system errors. This type of testing is referred to as "regression" testing and is needed to re-test, during testing of subsequent releases, changes made in previous releases of software, such as changes made to process ICD code revisions. Therefore, user acceptance testing is intended to provide a comprehensive validation of the readiness of systems to be released into production.

Once all the MACs that were affected by the system changes have completed user acceptance testing, the Office of Technology Solutions is to hold a management-level review to determine whether all the systems that had been changed since the last quarterly release are ready to be moved into the live production systems environment. Officials from the office's Business Application Management Group, Division of Shared Systems, Development, Testing, and Operations participate in the reviews, along with the shared systems maintainers and MACs. An overview of CMS's quarterly release change management process that was to be applied to the implementation of ICD-10 system changes is depicted in figure 3.

Anticipating the transition from ICD-9 to ICD-10 by October 2013, CMS's Office of E-Health Standards and Services began planning in 2007 by identifying systems that needed to be modified in order to process the new codes. The office, in conjunction with the American Health Information Management Association, initiated an assessment in September 2007 of the business processes, systems, and operations under CMS's direct responsibility that could be affected by a transition to the ICD-10 code set. The systems identified by the assessment were the three shared systems and the Common Working File that are used to process Medicare fee-for-service claims.

As previously mentioned, we reported in January 2015 the results from a related study of CMS's efforts to help entities affected by changes to ICD codes better prepare for the October 1, 2015, transition. In that report, we described steps that CMS had taken, such as providing educational materials; conducting stakeholder outreach; and monitoring readiness through stakeholder collaboration meetings, focus group testing, and reviews of surveys conducted by the health care industry. We also noted that CMS had documented that the agency had completed all ICD-10-related changes to its Medicare fee-for-service claims processing systems. However, we described several areas of concern identified by stakeholders regarding the ICD-10 systems transition.
For example, stakeholders had expressed concerns that CMS's testing activities had not been comprehensive. We noted that, in response, CMS officials had scheduled end-to-end testing with 2,550 covered entities during 3 weeks in 2015 (in January, April, and July). Additionally, stakeholders had recommended that CMS do more to make its Medicare contingency plans public. We reported that the information in the agency's contingency plans that is relevant to providers was made publicly available by CMS. We did not make recommendations to CMS in this report.

CMS has finished implementing the Medicare claims processing system changes that it determined to be necessary for addressing the October 1, 2015, transition to ICD-10. Based on the agency's change management documentation, officials responsible for overseeing the transition began taking steps to update the systems in March 2010 and had finished making the systems changes to address ICD-10 requirements in time to meet the initial October 2013 compliance date. In the approximately 2 years since then, the agency has continued to make modifications to systems functionality, as needed, to meet a legislated requirement to update the ICD code sets and to implement changes needed to address the two extensions of the ICD-10 compliance date.

Beginning in January 2010 and ending in March 2013, CMS's business groups submitted 37 change requests to the systems maintainers that required modifications to the shared systems and Common Working File software to meet the October 2013 compliance date. The change requests identified system modifications needed to implement software functionality related to the processing of ICD-10 codes rather than ICD-9 codes. Specifically, changes were needed to establish a structure for defining and maintaining the new codes themselves, which are to be stored in an internal table within CMS's enterprise data processing environment. The table is to be referenced by the shared systems and Common Working File when processing ICD codes. CMS's change management documentation reports that the systems contractors completed and CMS approved the implementation of the ICD-10 code table in July 2012.

In addition to building the new table to maintain the codes, CMS's contractors had to make changes to the two front-end validation systems to properly process the ICD-10 codes entered on claims. Toward this end, the agency implemented system changes to validate the ICD-10 codes in the two front-end systems in the July 2012 quarterly system release. Beyond these changes, other requests submitted by CMS's business groups identified changes that needed to be made to implement functionality related to the utilization of ICD-10 codes by the prepayment edits within the shared systems and the Common Working File. As previously noted, such edits are used within the systems to analyze claims data and determine whether a claim should be authorized for payment. According to technical documentation that described the systems that needed to be changed, about 200 prepayment edits were affected by the transition from ICD-9 to ICD-10 codes. Changes were made to the shared systems and Common Working File to update the edits in the 2012 and 2013 quarterly releases.
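The internal code table described above might be represented, in greatly simplified form, by a lookup structure such as the following. The schema, code values, and descriptions are illustrative assumptions, not CMS's actual table design.

```python
# Greatly simplified, illustrative sketch of an internal ICD code table.
ICD_CODE_TABLE = {
    ("ICD-9", "8208"): "fracture of neck of femur, unspecified (illustrative)",
    ("ICD-10", "S72001A"): "fracture of right femur neck, initial encounter (illustrative)",
    ("ICD-10", "S72002A"): "fracture of left femur neck, initial encounter (illustrative)",
}

def lookup(version, code):
    """Shared systems would consult a table like this while editing claims."""
    return ICD_CODE_TABLE.get((version, code))

assert lookup("ICD-10", "S72001A") is not None
assert lookup("ICD-10", "XXXXXXX") is None  # unknown code; claim would be rejected
```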
CMS's change management reports indicate that its contractors made changes to update the shared systems and Common Working File to process ICD-10 codes through the quarterly release process. The contractors began to make system changes in July 2010 and continued to make and implement changes the agency identified through 11 system releases until October 2013. The July 2010 quarterly release implemented one change to expand a file structure within FISS for ICD-10 codes. The July and October 2011 quarterly releases implemented five changes for VMS and MCS to address requirements such as the removal of obsolete processes and reports based on ICD-9 codes, identification and printing of ICD-10 indicators, and expansion of various files to accommodate ICD-10 codes. The January, April, July, and October 2012 quarterly releases implemented 19 changes for FISS, VMS, MCS, and the Common Working File to address various file expansions and conversions, and changes to prepayment and Common Working File edits. The January, April, July, and October 2013 releases implemented 12 changes to FISS, VMS, MCS, and the Common Working File to modify screens and processes, update prepayment edits, and update effective dates for ICD-10.

Subsequent to the October 2013 quarterly release, at which time CMS documented that all the changes had been implemented, the agency implemented five additional system modifications related to the processing of ICD-10 codes. These modifications involved making software updates to the shared systems and Common Working File that were needed to address legislated requirements related to new technologies and diseases. Another change was made in October 2014 to address the latest year-long extension by updating the effective date-of-service value throughout the shared systems and Common Working File from October 1, 2014, to October 1, 2015. More detailed information regarding each quarterly system release that addressed ICD-10 change requests is provided in appendix II.

According to officials with CMS's Office of Technology Solutions, on October 1, 2015, the agency's claims processing systems are expected to begin referencing the internal tables that store ICD-10 codes to validate, edit, and authorize payments to Medicare fee-for-service providers when claims data indicate a service date of October 1, 2015, or later. According to agency documentation, CMS followed processes and practices consistent with industry standards to make changes to and approve the implementation of systems that will be used for processing Medicare claims filed with ICD-10 codes. In addition, the agency identified contingency plans to be followed if its systems experience problems that disrupt the processing of Medicare claims that include the new codes when the systems change over to ICD-10 coding on October 1, 2015.

Our body of work related to information systems testing has shown that testing an IT system is essential to validate that the system will satisfy the requirements for its intended use and user needs. Effective testing facilitates early detection and correction of software and system anomalies; provides an early assessment of software and system performance; and provides factual information to key stakeholders for determining the business risk of releasing the product in its current state. Industry standards developed by the Institute of Electrical and Electronics Engineers (IEEE) state that systems testing should be conducted early and often in the life cycle of a systems development project to allow for the modification of products in a timely manner, thereby reducing the overall project and schedule impacts.
In addition, CMS’s established software change management processes require that the agency’s development and testing contractors begin testing early in the development process, and that they test often throughout a software development life cycle through three levels of testing—alpha, beta, and user acceptance tests. IEEE also defines practices to help organizations validate systems’ readiness for production. It also recommends planning for contingencies to help mitigate risks and minimize the impact of errors that may be introduced when new or modified systems are implemented into a live production environment. CMS’s change management process used to update its systems to accommodate ICD-10 codes established practices for testing of ICD-10 changes that began early in the software development life cycle for release of systems that would be affected, and continued throughout the quarterly release process. Such practices were established in accordance with IEEE’s recommendation that testing be conducted early and often in a software development process. Based on reports from the agency’s system maintainers, single testing contractor, and MACs, testing of all quarterly software releases was conducted prior to implementation, and included three levels of testing within about 5 months. The results of each level of testing were sent to CMS management for approval before the systems advanced to the next level. Additionally, within each of the 11 quarterly release cycles that implemented ICD-10 system changes, CMS’s Office of Technology Solutions and Center for Medicare oversaw and approved the three levels of testing conducted by the contractors to verify that all change requests had been addressed, the system changes had been implemented, and any system errors had been corrected. CMS’s shared systems and Common Working File maintainers (the CMS contractors who are responsible for developing and maintaining the Medicare claims processing systems) initiated the first level of software testing for ICD-10 changes for each release when they began to program the systems’ software to implement the needed changes. The first level of testing, or alpha testing, for each quarterly release was begun 3 to 5 months prior to implementation of the updated systems. For example, CMS officials and change management documentation for the October 2013 release stated that the first testing of the design and development of the changes made to each of the affected systems—the shared systems and Common Working File—began in May 2013 and was completed and approved in July 2013. Test cases and results for the alpha level define the specific criteria that were to be tested, such as the ICD-10 diagnosis and procedure codes; steps to be taken to verify that results of the test were as expected; the actual results of the test; and whether the test passed. Documented results of the testing indicated that any known errors related to system modifications that were made to address the ICD-10-related change requests had been corrected. After the first level of testing was completed and the results approved, the single testing contractor conducted a second level of testing, in which the systems maintainers participated. Testing was performed for the October 2013 systems release for all of the systems that would be affected—the shared systems and Common Working File—and was completed and approved in September 2013. 
Documented test results indicated that any detected errors related to the system changes made to support the ICD-10 transition had been resolved. Specifically, reports on beta test results provided to CMS by the single testing contractor identified 23 errors detected when 84 test cases were conducted in early July 2013. The reports indicated that all the errors were corrected and re-tested by mid-July 2013, and that all test cases for the ICD-10 changes had passed beta testing for the release. Additionally, the final report identified one other ICD-10-related error that was reported to the system maintainers for correction. The report stated that the error was corrected in August 2013.

Finally, a third level of testing—the comprehensive user acceptance testing—was conducted by the MACs, with continued involvement of the shared systems maintainers and single testing contractor, about one month prior to each quarterly release. Documented results of the MACs' user acceptance test cases indicated that the MACs had verified that the system changes made to address ICD-10 change requests had been tested and any errors detected had been corrected by the systems maintainers. In conducting the user acceptance tests leading up to the October 2013 release, seven of the nine MACs identified system errors in test case results. The errors were related to the implementation of system changes that had been made to address two ICD-10 requests for changes needed to process certain prepayment edits. The MACs' reports indicated that, in each of these cases, the errors were communicated to the system maintainers, who then corrected the errors in the appropriate systems (FISS and MCS) prior to the October 2013 release. The MACs' user acceptance test results (for each of the shared systems and Common Working File) that were reported weekly to CMS throughout September 2013 identified system issues related to four change requests; however, none of the issues was related to ICD-10 changes.

In accordance with IEEE standards and CMS's change management processes for validating systems' readiness for production, officials with the Office of Technology Solutions held quarterly release readiness reviews of CMS's claims processing systems during which agency officials and their contractors considered whether the systems had been tested sufficiently to ensure that they were ready to process ICD-10 codes. Minutes from the October 2013 reviews for each of the shared systems and the Common Working File indicated that results of testing supported CMS officials' views that the systems were ready to process Medicare claims that include ICD-10 codes. Specifically, the minutes provided details regarding the status of the four ICD-10-related change requests that were to be implemented in the October 2013 quarterly release, including confirmation that each system had been tested and each of the changes had been implemented for the release. The minutes also indicated that the reviews were attended by representatives from all of the MACs, the system maintainers, the single testing contractor, CMS officials responsible for overseeing the implementation of the internal claims processing systems, and representatives of the business or policy groups that requested the system changes. Minutes from each of the shared systems' release readiness reviews showed that the participating entities were in agreement that the systems were ready to process ICD-10 codes and to be released into production in October 2013.
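The regression testing described earlier, in which checks from a prior release are re-run during testing of subsequent releases, can be illustrated with a minimal sketch such as the following. This is not CMS's test suite; the rule and test names are assumptions made for the example.

```python
import unittest
from datetime import date

# Illustrative regression test: a rule introduced in an earlier release
# (the ICD-10 date-of-service rule) is re-run during later release cycles.
ICD10_COMPLIANCE_DATE = date(2015, 10, 1)

def icd10_date_rule(date_of_service):
    """Rule from an earlier release: ICD-10 needs an on-or-after date."""
    return date_of_service >= ICD10_COMPLIANCE_DATE

class RegressionIcd10Rules(unittest.TestCase):
    def test_date_before_compliance_rejected(self):
        self.assertFalse(icd10_date_rule(date(2014, 9, 30)))

    def test_date_on_compliance_accepted(self):
        self.assertTrue(icd10_date_rule(date(2015, 10, 1)))

if __name__ == "__main__":
    unittest.main()
```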
CMS has established contingency plans to be executed in case errors occur as a result of implementing modifications to the existing claims processing systems, which is consistent with IEEE recommendations that organizations define actions to be taken to minimize the impact of system errors. In August 2014, CMS developed an ICD-10 "day-1" emergency plan, which defined actions that it intends to take in case errors occur when its systems begin processing ICD-10 codes. The plan defines procedures for daily calls between CMS's Office of Technology Solutions, the MACs, and systems maintainers to identify any problems and discuss possible workarounds to minimize the impact that claims processing system errors may have on Medicare stakeholders. The calls are intended to continue until the agency determines that they are no longer needed. The plan has also established a process, including specific steps and guidelines for engaging an emergency response team, intended to address adverse events related to ICD-10 processing after the modified systems have been released into production.

In addition, in February 2015, the agency finalized a systems-level contingency plan that defined corrective actions to be taken by the emergency response team in instances when internal systems may fail to accept and correctly process claims containing ICD-10 codes beginning on October 1, 2015. The plan describes scenarios that would call for action. For example, in describing a scenario in which CMS's fee-for-service systems do not function as expected, the plan states that if the front-end validation routines fail to properly process correct ICD-10 codes, the team would temporarily disable the faulty routines until corrections could be made. For each claim that was improperly rejected because of the system error, CMS would then determine whether to re-submit the claim for processing by the shared systems. The plan describes other actions to be taken if the shared systems and Common Working File encounter errors when prepayment edits utilize ICD-10 codes. In such cases, the emergency response team would determine whether the errors could be fixed quickly, in which case systems would "hold" the claims until the errors were corrected and processing could be completed, or whether additional actions would be required before the errors could be corrected. Corrective actions are also described for another scenario that would occur if the ICD-10 compliance date were delayed until later than October 1, 2015. In this scenario, the contingency plan identifies the system changes that would have to be made to the front-end processing systems, the shared systems, and the MACs' local systems to continue to process ICD-9 codes after October 1, 2015, along with the time needed to make them.

Although CMS has taken such actions to mitigate risks and minimize the impact of errors occurring in its own and its stakeholders' systems, the implementation of new or modified software always introduces risks that unforeseen errors will be encountered when the software is released into a live production environment. Such errors may occur if the systems encounter unanticipated conditions related to ICD-10 that had not been considered during system testing. Further, unidentified risks related to the ICD-10 transition could cause disruptions for which the need for corrective actions or contingency plans had not been recognized. Therefore, while it can be expected that system errors will occur, the extent to which any such errors will disrupt the agency's ability to properly process claims cannot be determined until CMS's systems begin processing ICD-10 codes.
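The "temporarily disable the faulty routine" contingency described above can be sketched as a flag that the emergency response team could toggle around a single validation routine. The names and structure are assumptions made for illustration.

```python
# Hypothetical sketch of disabling a faulty front-end routine; the flag
# name and claim layout are assumptions, not CMS's actual design.
routine_enabled = {"icd10_format_check": True}

def front_end_validate(claim):
    """Run only the validation routines that are currently enabled."""
    if routine_enabled["icd10_format_check"]:
        code = claim.get("diagnosis_code", "")
        if not 3 <= len(code) <= 7:
            return False  # reject back to the provider for correction
    return True

# If the format check began wrongly rejecting correct ICD-10 codes, the
# emergency response team could disable it until a fix is deployed, then
# decide which improperly rejected claims to re-submit.
routine_enabled["icd10_format_check"] = False
assert front_end_validate({"diagnosis_code": ""})  # passes while check is disabled
```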
CMS has taken various steps to help providers, clearinghouses, and insurers that participate in the Medicare fee-for-service program implement the systems changes they need to make for transitioning their systems to ICD-10. For example, CMS developed a website for publishing technical support and other information to help stakeholders identify, test, and implement system changes needed to process ICD-10 codes. CMS also provided enhanced technical capabilities that allowed stakeholders to test the integration of their updated systems with CMS's claims processing systems environment. Further, CMS officials informed stakeholders of alternative ways to file claims that include ICD-10 codes in case any stakeholders have not completed their systems changes by October 1, 2015.

As early as 2008, CMS had developed and published a website that includes information related to the implementation of systems changes needed to process and submit ICD-10 codes for Medicare claims processing. The website is updated regularly and, among other information, provides technical guidance to help various stakeholders (e.g., small, medium, and large physician practices; rural physician practices; hospitals) identify and implement needed system changes. The website also includes checklists that stakeholders can use to help guide the development of operational plans for their systems and to identify criteria for developing systems test data and conducting various levels of testing. The checklists identify steps that need to be taken, such as including the most-often-used codes in test cases and testing with external partners, such as payers and clearinghouses. In addition, the "Medicare Learning Network" page on the website provides resources that are intended to keep stakeholders informed of new developments in ICD-10 implementation planning and help them prepare for the ICD-10 transition. These resources include videos, notifications of phone calls with the industry, and subscriptions to e-mail updates.

Further, to enhance its efforts to meet stakeholders' ongoing need for support in implementing system changes for ICD-10, in March 2013, the CMS Office of E-Health Standards and Services collaborated with stakeholders that represent health care providers, health information technology professionals, and insurers. CMS and these stakeholders discussed the need for small providers to update their existing systems in preparation for the ICD-10 transition. Information collected from these collaborations and published on the ICD-10 website identified "lessons learned" from previous experiences in updating systems, such as the need to conduct testing early in the process and to communicate results and information so that providers do not repeat the mistakes made by others. In response to the information collected through these collaborative efforts, CMS identified and developed tools and guidance to address stakeholders' needs and help them implement and test the systems changes to process ICD-10 claims data. For example, the "Road to 10" resource, available from the ICD-10 website, provides information to help small providers identify the steps they need to take to transition to ICD-10.
Among other things, this resource includes checklists for updating Medicare claims data entry and submission systems, preparing test cases, and conducting internal and external testing. According to the health care information technology stakeholder representatives with whom we spoke—from AHIMA, HIMSS, WEDI, Cooperative Exchange, and AHIP—the information and guidance provided by CMS through its website, collaborations, and industry phone calls have proved to be helpful and valuable to their constituents in updating their systems that process ICD codes.

Additionally, CMS took steps to assist the MACs in their efforts to update the systems they use to submit providers' claims to CMS and to address challenges identified by these contractors as they make changes to their systems. For example, when three of the MACs noted a challenge in mapping ICD-9 to ICD-10 codes when updating their systems, CMS provided a crosswalk database to help them conduct the mapping. Representatives of the MACs reported that this tool was helpful in their development of software edits associated with ICD-10 coding.

Since March 2014, CMS has allowed stakeholders to conduct unlimited testing of their systems that are being changed to submit ICD-10 claims data. Specifically, stakeholders are allowed to conduct "acknowledgment tests" to determine whether providers' claims data are valid and acceptable for processing by CMS's internal front-end validation systems. According to information provided on CMS's ICD-10 website, stakeholders can continue to conduct acknowledgment testing up to October 1, 2015. During acknowledgment testing, test claims are submitted from providers' systems, either directly or through clearinghouses, to their supporting MACs. The claims are either accepted or rejected by the front-end validation systems. To be accepted, the claims data must include a valid ICD-10 code that matches the date of service and a valid National Provider Identifier. The submitter must also enter an indicator into a data field to specify whether a claim is using an ICD-9 or ICD-10 code. Claims data that do not meet these requirements are rejected by the front-end systems and sent back to the providers for correction via the MACs' systems.

CMS also conducted and monitored four weeks of structured acknowledgment testing during which agency officials collected data regarding the results of stakeholders' tests on a national basis. Specifically, it collected data about the claims transmitted to CMS's systems by the stakeholders during one week in March 2014, one in November 2014, one in March 2015, and one in June 2015. The purpose of these national acknowledgment tests was to help providers assess the readiness of their systems to submit claims with ICD-10 codes. During the four weeks of national acknowledgment testing, CMS measured the percentage of claims accepted by the front-end systems, which provided an indicator of the extent to which stakeholders' systems were ready to submit ICD-10 codes to the Medicare claims processing systems. Table 1 describes the results of this testing and the percentage of claims that were accepted by CMS's front-end validation systems.
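One of the acknowledgment-test requirements noted above is a valid National Provider Identifier (NPI). NPIs carry a Luhn check digit computed over the 9-digit base identifier prefixed with the health industry card issuer value 80840, so a minimal validity check can be sketched as follows; this is illustrative and is not CMS's front-end code.

```python
# Illustrative NPI validity check; not CMS's front-end software.
def luhn_valid(digits):
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def npi_is_valid(npi):
    """A 10-digit NPI passes if Luhn succeeds over '80840' + the NPI."""
    return len(npi) == 10 and npi.isdigit() and luhn_valid("80840" + npi)

print(npi_is_valid("1234567893"))  # True: a commonly cited example NPI
```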
To further support stakeholders in their efforts to update their systems for the ICD-10 transition, CMS officials responded to stakeholders' feedback to address a risk associated with the lack of external testing, which agency officials with the Office of Technology Solutions had identified in their own ICD-10 planning activities. To address this risk, CMS introduced an additional level of testing and offered opportunities for selected stakeholders to conduct end-to-end testing of their systems with CMS's internal claims processing systems. This additional testing allowed stakeholders to run test cases that simulate their live claims processing environment and determine whether the test claims data were valid and properly processed by CMS's shared systems and Common Working File for authorization or denial of payment.

Specifically, during three weeks of end-to-end testing, CMS allowed selected stakeholders (i.e., providers and clearinghouses that submit Medicare claims) to submit test claims data to their supporting MACs, to be processed by the front-end data validation systems that had been modified to accept ICD-10 data. Any claims that were rejected by the front end could be corrected and re-submitted by the provider or clearinghouse. The accepted test claims data were then transmitted to a test environment of CMS's shared systems, which determined whether the claims had been properly submitted. The approved claims data were transmitted to a test version of the Common Working File, which authorized or denied payment of the claim. The shared systems then created remittance notices that were sent back to the providers. As a result of participating in the end-to-end tests, stakeholders could confirm that their systems were able to accept and transmit the new codes to CMS's claims processing systems.

The end-to-end testing was conducted for a week during each of the months of January, April, and July 2015 to accommodate up to 2,550 stakeholders. In this regard, CMS's test plans allowed up to 850 stakeholders to participate in the first week of testing; 850 more to participate in the second week (in addition to any of the previous testers who wanted to re-test); and another 850 to participate in the third week (also in addition to any who wanted to re-test), for a total of 2,550 stakeholders.

The MACs were responsible for selecting volunteers from the providers and clearinghouses they support to participate in the tests. The MACs' selections were subject to approval by CMS. To be approved, the participants had to be enrolled in electronic data interchange and able to receive electronic remittance advice. They also had to have an active, valid National Provider Identifier number. According to officials representing the MACs, their test participant selections included a representative cross section of providers and a variety of specialties and facilities. They also stated that they considered the types of claims, the size of the providers, and the geographic location when selecting participants. According to the MACs, the testers were responsible for developing their own test cases, which were to be designed to reflect a wide variety of services or equipment for which they normally submit claims for Medicare payment. Testing criteria were provided to the approved testers via the MACs' websites, and CMS hosted training sessions with the testers to review procedures. CMS scheduled monthly status calls with the system maintainers and MACs to discuss challenges or issues encountered by their test participants during end-to-end testing. Officials from the nine MACs stated that technical support from CMS during the end-to-end testing weeks was available and easily accessible.
As reported by CMS and noted in table 2, in January 2015, 661 testers submitted claims and, in April 2015, 546 additional and 329 repeat submitters participated in the tests. In July 2015, 1,173 testers submitted claims, including 680 additional and 493 repeat submitters. While the reported number of testers and claims submitted during the April and July 2015 test periods increased from the number in January, the level of participation was considerably lower than the level that CMS's facilities were designed to accommodate—850 during the first period, 1,700 during the second period, and 2,550 during the third period. Stakeholders with whom we spoke told us that, while there was concern regarding this level of participation in end-to-end testing, CMS had communicated the availability of enhanced testing capabilities and provided guidance to encourage broader participation. Representatives of the MACs and stakeholders noted that CMS was responsive to their requests for additional testing and expanded outreach to targeted groups in an attempt to broaden the scope of test participants. The MACs noted that, given the variety of testers participating and claims submitted during end-to-end testing, along with the three levels of testing conducted as part of CMS's change management process, in their view overall testing had been comprehensive and sufficient to ensure that stakeholders' and CMS's systems would be ready to process ICD-10 claims data on October 1, 2015. Figure 4 provides an overview of the time frames during which CMS offered structured acknowledgment and end-to-end testing opportunities to stakeholders.

Though not generalizable to all stakeholders, representatives of the health information technology groups with whom we spoke stated that CMS's efforts to support the ICD-10 system transition, particularly over the past 2 years, have been very effective. Officials with AHIMA, AHIP, HIMSS, WEDI, Cooperative Exchange, and the MACs stated that they believed that the majority of their constituencies—i.e., providers, insurers, and clearinghouses—have updated their systems to accommodate the ICD-10 transition, and most had done so in time for the earlier October 2014 compliance date—the first extension allowed by HHS to give stakeholders more time to prepare for the transition.

The agency and the MACs have also provided alternative methods for providers to submit claims data to CMS should any of their systems not be ready to process and submit claims with ICD-10 codes by October 1, 2015. For example, free billing software is available from all the MACs and can be downloaded from their websites onto providers' computers and used by providers to manually enter and electronically submit claims data until they have completed the system changes needed to submit claims with ICD-10 codes. Additionally, five of the nine MACs (that cover 8 of the 16 MAC jurisdictions) provide access to online portals that allow entry of Medicare Part B claims data for submission to CMS. For example, a provider may log into its MAC's website and access the claims data entry system, which allows providers to manually enter data, including ICD-10 codes, and submit them to the MAC's system. According to CMS data provided by the Office of Technology Solutions, in 2014, more than 1.3 million Medicare Part B claims were submitted in this manner.
CMS also defined a contingency plan that would allow for paper claims submission for a temporary period of time, if specified conditions are met, in the event that stakeholders' systems are not ready to submit ICD-10 codes beginning October 1, 2015. Under some circumstances, CMS allows providers to request a waiver from electronic submission requirements and, if granted, submit paper instead of electronic claims. However, very few stakeholders currently submit paper claims. According to a CMS official with the Office of Technology Solutions, 98 percent of Medicare fee-for-service claims are filed electronically. Nevertheless, this alternative to electronic claims filing provides an option for providers to submit claims that include ICD-10 codes if they have not yet completed making changes to their systems. CMS communicated information to its stakeholders about the availability and use of these alternative solutions through its "Road to 10" web page. The agency also included information about the free billing software and the MACs' portals in a "Medicare Learning Network" article published on its website in February 2014.

According to CMS officials in the Office of Enterprise Information, the known costs of the agency's efforts to update its claims processing systems for the ICD-10 transition are estimated to be approximately $116 million for developing, testing, and implementing the system changes. The officials stated that the agency incurred about $96 million of these costs from September 2007 through September 2014. The officials added that this estimate reflects efforts to address 42 ICD-10-related change requests that were submitted during that time.

Although CMS identified its systems changes as having been completed in time for the initial October 2013 compliance date, additional costs were incurred after the delay of the compliance date from October 2014 to October 2015. Specifically, agency officials stated that they incurred costs associated with rework that needed to be completed in order to reinstate software specific to ICD-9 that had been changed to process ICD-10 data. In addition, CMS officials reported that resources were redirected to conduct regression tests of the systems throughout the delay to ensure that functionality and changes already implemented, such as for ICD-10, were not negatively affected by additional changes made to the systems for other reasons (e.g., new technologies or policy changes) at each quarterly release. Agency officials with the Office of Technology Solutions further stated that they had initially planned to conduct just one period of end-to-end testing prior to the October 2014 compliance date, but added that they were able to schedule the two later end-to-end test periods during the delay to provide additional opportunities for stakeholders to test their systems from October 2014 through October 1, 2015. According to officials with the Office of Enterprise Information, the additional IT costs associated with these combined activities were estimated to be about $20 million.

Beyond the estimated costs reported by CMS, little is known about the costs that providers, clearinghouses, and insurers incurred for updating their Medicare claims submission systems. Such costs were not identified by the professional associations we contacted or by industry studies that we reviewed.
HHS (in its final ICD-10 compliance rule) and the associations reported estimates of the overall costs for their constituencies to transition to ICD-10, and one study conducted for the American Medical Association (AMA) estimated costs that would be incurred by providers to update their systems environment, including the systems they use for provider management and electronic health records implementation. However, none of these entities studied, estimated, or reported costs specific to stakeholders' efforts to upgrade systems to process Medicare fee-for-service claims data that include ICD-10 codes.

We received written comments on a draft of our report, signed by HHS's Assistant Secretary for Legislation. In the comments (reprinted in appendix III), HHS described a number of actions that it has taken to help ensure that its systems are ready to process Medicare claims that include ICD-10 codes and ongoing efforts to support stakeholders in their transition to the new code set. The department also said our report stated that CMS's systems were completely updated and had undergone comprehensive and sufficient testing to process ICD-10 codes. In fact, we reported that agency officials had finished making the changes that they determined were needed to process the new codes and had conducted system testing and validation procedures consistent with industry practices. We cautioned that unanticipated system errors could disrupt Medicare claims processing when systems are required to begin processing ICD-10 codes and emphasized that the actions taken by CMS were important steps to help minimize the impact of any such disruptions. HHS also provided technical comments, which have been incorporated as appropriate.

We are sending copies of this report to interested congressional committees, the secretaries and agency heads of the departments and agencies addressed in this report, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

The objectives of our review were to determine (1) the status of CMS's effort to implement changes needed to be made to its systems in order to process Medicare claims that include ICD-10 codes; (2) the extent to which CMS's testing and verification actions are sufficient to ensure changes to its systems have been made to process Medicare claims that include ICD-10 codes by October 1, 2015; (3) steps CMS is taking to ensure that health care insurers, providers, and other entities have access to the technical support, tools, and other resources needed to identify, develop, and test system modifications, and to process Medicare claims that include ICD-10 codes if needed system changes have not been made; and (4) what is known about estimated costs to CMS, insurers, and providers. For each of the objectives, our scope included the CMS systems used to process Medicare fee-for-service claims, including the Fiscal Intermediary Shared System (FISS), the Multi-Carrier System (MCS), the ViPS Medicare System (VMS), and the Common Working File.
To address the first objective, we obtained and reviewed documentation describing the CMS systems affected by the ICD-10 transition, the necessary system changes, and the modified systems' implementation dates. To determine the actions taken by CMS to implement the changes for the ICD-10 transition, we obtained and examined relevant project management documents, including project plans and release notes that provided information about the systems changes that were needed and activities planned for completion by October 1, 2015. To determine the steps taken to identify and address risk, we examined documentation describing practices and methods for identifying and categorizing risks associated with ICD-10 system changes. We also reviewed documentation describing mitigation strategies used to manage the identified risks, such as contingency plans for processing claims data in case errors occurred. In addition, we held discussions with CMS officials responsible for the ICD-10 systems transition to obtain their views on the status of steps taken to implement system changes.

To address the second objective, we identified criteria for assessing the sufficiency of systems testing and verification practices based on established industry standards for conducting software and systems testing. We obtained and analyzed documentation describing the processes CMS has established to test and validate the changes it identified that needed to be made to process Medicare claims with ICD-10 data. We compared the reported outcomes of CMS's testing and validation processes to criteria and practices defined by industry standards. In particular, we assessed steps conducted by the agency during the testing phases of its systems change management process against practices defined by the Institute of Electrical and Electronics Engineers standards for conducting software and system tests. To determine the extent to which CMS followed its established process and adhered to these standards, we reviewed documentation that described test schedules, plans, and results.

We focused our study of CMS's ICD-10 testing activities on the final phase, the user acceptance test, because its purpose is to provide a comprehensive test that replicates a live production claims processing environment and involves the participation of end users along with the support of the testing and development contractors that conducted the lower levels of testing that preceded user acceptance testing. These contractors' support was intended to ensure that any errors that were not detected previously could be corrected prior to systems being released into production. We examined documents, including minutes of weekly status meetings, that described tests conducted by CMS's contractors, errors identified during the tests, and the status of efforts to address the errors prior to the initial compliance date of October 1, 2013. We also reviewed minutes of and documentation supporting release readiness reviews held by CMS officials that describe the status of testing and any remaining tests to be done. We assessed the reliability of the data provided by CMS by reviewing related documentation and collecting supporting data via questionnaires we issued to, and received from, all the Medicare Administrative Contractors (MACs) that documented the status of and progress made toward testing system changes and correcting errors.
We analyzed detailed test data, such as examples of test cases and documented test results, provided in the MACs' documents to understand the extent of testing that was conducted leading up to the October 2013 release. We determined that the data we collected were reliable for the purpose of our report: to understand the extent to which CMS's efforts were sufficient to validate that any errors associated with ICD-10 system changes were addressed and the software changes approved for release into production.

To address the third objective, we identified stakeholders' needs for technical assistance based on prior GAO work and information collected from entities such as professional associations that represent providers, insurers, and health care clearinghouses, and the MACs that process Medicare fee-for-service claims. To determine the types of technical resources CMS provided to help stakeholders identify and test the changes that needed to be made to their systems, we examined documentation describing tools such as user guides and data crosswalks, and additional resources such as claims processing software and testing facilities to support the ICD-10 transition. We also analyzed agency and contractors' documentation that described practices for selecting and approving participants for testing activities conducted by CMS to help stakeholders test the integration of their systems with CMS's claims processing systems. We obtained and reviewed the list of participants and number of claims submitted to obtain an understanding of the level of stakeholder representation in the testing activities. We also collected information from the contractors that supported the stakeholders to obtain their views on the outcomes and comprehensiveness of the tests. Finally, we examined CMS's plans for providing alternate resources for stakeholders to submit claims that include ICD-10 codes in case their systems are not yet updated to include and submit ICD-10 data for Medicare claims.

To determine the extent to which the industry stakeholder groups found the support provided by CMS useful, we selected and held discussions with entities that play a role in supporting the implementation of health care information technology, including implementing system changes needed to be made for the ICD-10 transition. The entities we selected were the Healthcare Information and Management Systems Society (HIMSS), the American Health Information Management Association (AHIMA), America's Health Insurance Plans (AHIP), and the Workgroup for Electronic Data Interchange (WEDI). From these discussions, we obtained their views on the effectiveness of the technical support CMS has provided to their constituencies since 2008.

For the fourth objective, we collected data available from CMS regarding any actual or estimated costs known to have been incurred by the agency associated with the development, testing, and implementation of necessary system changes for the ICD-10 transition, including additional costs incurred as a result of the delay until October 1, 2015. We assessed the reliability of the data provided by CMS by examining agency documentation and discussing with an official of the Office of Enterprise Information the agency's approach for producing financial statements and the outcomes of independent audits of those statements, which were reportedly produced in accordance with generally accepted accounting principles.
We also reviewed published reports of selected health care professional associations that support providers that submit claims to CMS for reimbursement. These entities were the Professional Association of Health Care Office Management and the American Medical Association. We also reviewed HHS's final rule on ICD-10 compliance. We determined that the cost data we collected were sufficiently reliable for the purposes of our report, which were to identify any costs known to have been incurred by CMS and its Medicare stakeholders in implementing the system changes needed to process ICD-10 codes. We conducted this performance audit from January 2015 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The system changes CMS identified for the ICD-10 transition included the following:

Implement Fiscal Intermediary Shared System (FISS) Integrated Outpatient Code Editor changes related to the tenth revision of the International Classification of Diseases (ICD-10)
Expand Multi-Carrier System (MCS) Diagnosis File to accommodate ICD-10 diagnosis codes
Create ViPS Medicare System (VMS) utility run for Durable Medical Equipment MACs' identification of edits for ICD-10
Update VMS Automated Development System to recognize and print the ICD-10 indicator
Remove any obsolete Quarterly Medical Review processes and reports from VMS that include ICD-9 codes
Expand MCS procedure code file to accommodate ICD-10 diagnosis codes
Expand Expert Claims Processing System for FISS to accommodate ICD-10
Expand FISS End Stage Renal Disease Parameter files, Hook Selection files, and Medical Policy Parameter files to accommodate the transition to ICD-10
Convert FISS reason codes to ICD-10 format
Update MCS hard-coded edits for ICD-10 diagnosis codes
Expand MCS to accommodate ICD-10 by expanding Common Working File elements
Update the existing VMS Utilization Parameter files for ICD-10
Expand Related Diagnosis file to accommodate ICD-10 diagnosis codes
Update VMS Inbound and Outbound Claims Interface Processing
Convert FISS reason codes, Phase II
Expand FISS Medical Policy Parameter files
Convert FISS reason codes, Phase III
Convert the Common Working File, Phase I Implementation
Include Type of Bill 33X for ICD-10
Create file to be used for planning and testing purposes in preparation for the ICD-10 code conversion
Convert FISS for Add-on Payment for Blood Clotting Factors and ESRD Co-morbidity Adjustment Factors
Implement VMS ICD-10 Release III, No. 1; update VMS online screens
Convert the Common Working File for ICD-10 changes (Phase II Implementation)
Expand the Laboratory National Coverage Determination edit software
Implement VMS ICD-10 Release III, No. 2; update Online Claims Processing and Entry Code
Convert FISS Present on Admission indicator
Convert from ICD-9 and related code infrastructure of the Medicare Shared Systems as they relate to CMS National Coverage Determinations (change request 1 of 3)

In addition to the contact named above, Teresa F. Tucker, Assistant Director; Melina I. Asencio; Christopher G. Businsky; Nancy E. Glover; Ashfaq M. Huda; Thomas E. Murphy; Terry L. Richardson; and Amber H. Sinclair made key contributions to this report.
ICD is the standard code set used in the United States to document patient medical diagnoses and inpatient medical procedures. Every claim submitted by health care providers to payers for reimbursement, including those for Medicare programs, includes these codes. CMS is responsible for enforcing the use of ICD codes and is requiring providers to begin using the 10th revision of the codes (ICD-10) on October 1, 2015. Its role in preparing for the transition includes making changes to the agency's information technology systems used to process Medicare fee-for-service claims and supporting stakeholders' efforts to implement changes to the systems they use to submit Medicare claims that are to include ICD-10 data. GAO was asked to study the actions planned and taken by CMS to support entities' transition to ICD-10. This report discusses (1) CMS's efforts to implement system changes needed for the agency to process claims that include ICD-10 codes, (2) the extent to which CMS's testing and verification actions are sufficient to ensure the system changes are made, (3) steps CMS is taking to ensure that stakeholders have access to technical support needed to make system changes, and (4) the costs CMS has incurred to implement the system changes needed for the ICD-10 transition. To do this, GAO reviewed project documentation and held discussions with Medicare officials, contractors, and selected stakeholder groups that represent providers, health care clearinghouses, and insurers that share claims data with CMS. GAO provided a draft of this report to HHS and incorporated its comments as appropriate. The Centers for Medicare & Medicaid Services (CMS) has finished updating its systems with the changes it determined were needed to process the new International Classification of Diseases codes (ICD-10) on Medicare fee-for-service claims. In 2007, CMS began taking steps to identify components of its systems that needed to be changed to update the ICD codes from version 9 to 10. CMS began making the system changes in early 2010 as part of an established change management process for releasing system updates on a quarterly basis and, by October 2013, had completed actions to modify its systems to process the new data. In doing so, CMS made changes to validate that codes on submitted claims were of the correct length and format specific to ICD-10 requirements and to determine whether submitted claims data included the proper codes to be processed and approved for payment. Industry guidance states that systems testing should be conducted early and often in the life cycle of a project to allow for the modification of software in a timely manner, and that organizations should define procedures for approving systems for release and plan for contingencies to help mitigate risks that may be introduced when software changes are implemented in a live production environment. Consistent with these practices, CMS began testing and validating the changes made to its systems in March 2010. For each quarterly release, CMS conducted three levels of testing prior to implementing the systems that had been changed, including a level conducted to simulate a live production environment of Medicare claims processing. Agency reports on the outcomes of the tests described errors found and steps taken to ensure any such errors were corrected. The agency also held management reviews to determine whether each version of the modified systems was ready to be released into a live claims processing environment. 
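As an illustration of the length-and-format edits described above, the following Python sketch shows one simplified way such a check could work. It is a notional example: the pattern and function names are ours and do not represent CMS's actual edit logic, which also checks codes against the full set of valid ICD-10 codes.

```python
import re

# Simplified shape check for an ICD-10-CM diagnosis code: 3 to 7 characters,
# a leading letter, a digit in the second position, and alphanumeric
# characters thereafter (codes are shown without the decimal point, as they
# are commonly transmitted on claims). Illustrative only; not CMS's edit.
ICD10_PATTERN = re.compile(r"[A-Z][0-9][A-Z0-9]{1,5}")

def plausible_icd10(code: str) -> bool:
    """Return True if the code has a plausible ICD-10 length and format."""
    return ICD10_PATTERN.fullmatch(code.strip().upper()) is not None

# An edit of this kind rejects ICD-9-style numeric codes such as "25000"
# while accepting ICD-10-style codes such as "E119" (E11.9 without the dot).
for code in ["E119", "I10", "S72001A", "25000", "E11.9"]:
    print(code, plausible_icd10(code))
```

A format edit of this kind catches only malformed codes; confirming that a well-formed code is actually a valid, billable ICD-10 code requires a lookup against the published code set.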
CMS officials also defined contingencies for cases in which systems may not properly process claims that include ICD-10 codes. Such actions are important to help minimize the impact on Medicare stakeholders that could result from errors in CMS's systems. While CMS's actions to update, test, and validate its systems and to plan for contingencies can help mitigate risks and minimize the impacts of system errors, the extent to which any such errors will affect the agency's ability to properly process claims cannot be determined until CMS's systems begin processing ICD-10 codes. CMS provided technical support to help its stakeholders identify and make system changes. As early as 2008, CMS developed and published a website that includes information related to the implementation of system changes to process and submit ICD-10 codes, such as checklists and "lessons learned" identified through collaboration with stakeholders. The agency also developed tools to help its Medicare Administrative Contractors update claims review and submission systems, such as those used to ensure valid claims are transmitted to CMS's claims processing systems. In addition, CMS expanded and enhanced capabilities to accommodate end-to-end testing that allowed stakeholders to test the integration of their systems with CMS's internal systems, and it offered alternative technical solutions for submitting claims with ICD-10 data in case stakeholders' systems are not modified in time to meet the compliance date.
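Among the tools mentioned in this report are data crosswalks, such as CMS's General Equivalence Mappings, which relate each ICD-9 code to one or more candidate ICD-10 codes. The sketch below shows the general idea of such a lookup; the two entries and the function are invented for illustration and are not drawn from CMS's published mapping files.

```python
# Illustrative sketch of a forward (ICD-9 to ICD-10) crosswalk lookup.
# The two mappings below are invented to show the one-to-many relationships
# such crosswalks contain; they are not drawn from CMS's published files.
FORWARD_CROSSWALK = {
    "4011": ["I10"],                              # one-to-one style mapping
    "81201": ["S42201A", "S42202A", "S42209A"],   # one-to-many mapping
}

def candidate_icd10_codes(icd9_code: str) -> list[str]:
    """Return the candidate ICD-10 codes for an ICD-9 code, if any are mapped.

    A one-to-many result signals that a coder must consult the clinical
    record to choose the correct target; the crosswalk alone is not enough.
    """
    return FORWARD_CROSSWALK.get(icd9_code, [])

print(candidate_icd10_codes("81201"))  # several candidates -> manual review
print(candidate_icd10_codes("99999"))  # unmapped -> empty list
```

Because the mapping is frequently one-to-many, crosswalks of this kind support, but cannot replace, the system and coding changes stakeholders had to make.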
Homeland security is a complex mission that involves a broad range of functions performed throughout government, including law enforcement, transportation, food safety and public health, information technology, and emergency management, to mention only a few. Federal, state, and local governments share responsibility for preparing for catastrophic terrorist attacks as well as other disasters. The initial responsibility for planning, preparedness, and response falls upon local governments and their organizations—such as police, fire departments, emergency medical personnel, and public health agencies—which will almost invariably be the first responders to such an occurrence. For its part, the federal government has principally provided leadership, training, and funding assistance. The federal government's role in responding to major disasters has historically been defined by the Stafford Act, which makes most federal assistance contingent on a finding that the disaster is so severe as to be beyond the capacity of state and local governments to respond effectively. Once a disaster is declared, the federal government—through the Federal Emergency Management Agency (FEMA)—may reimburse state and local governments for between 75 and 100 percent of eligible costs, including response and recovery activities. In addition to post-disaster assistance, there has been an increasing emphasis over the past decade on federal support of state and local governments to enhance national preparedness for terrorist attacks. After the nerve gas attack in the Tokyo subway system on March 20, 1995, and the Oklahoma City bombing on April 19, 1995, the United States initiated a new effort to combat terrorism. In June 1995, Presidential Decision Directive 39 was issued, enumerating responsibilities for federal agencies in combating terrorism, including domestic terrorism. Recognizing the vulnerability of the United States to various forms of terrorism, the Congress passed the Defense Against Weapons of Mass Destruction Act of 1996 (also known as the Nunn-Lugar-Domenici program) to train and equip state and local emergency services personnel who would likely be the first responders to a domestic terrorist event. Other federal entities, including FEMA; the Departments of Justice, Health and Human Services, and Energy; and the Environmental Protection Agency, have also developed programs to assist state and local governments in preparing for terrorist events. As emphasis on terrorism prevention and response grew, however, so did concerns over coordination and fragmentation of federal efforts. More than 40 federal entities have a role in combating and responding to terrorism, and more than 20 in bioterrorism alone. Our past work, conducted prior to the establishment of an Office of Homeland Security and the current proposals to create a new Department of Homeland Security, has shown coordination and fragmentation problems stemming largely from a lack of accountability within the federal government for terrorism-related programs and activities. Further, our work found that the absence of a central focal point resulted in a lack of cohesive effort and in the development of similar and potentially duplicative programs. 
Also, as the Gilmore Commission report notes, state and local officials have voiced frustration about their attempts to obtain federal funds from different programs administered by different agencies and have argued that the application process is burdensome and inconsistent among federal agencies. President Bush has taken a number of important steps in the aftermath of the terrorist attacks of September 11th to address the concerns of fragmentation and to enhance the country's homeland security efforts, including creating the Office of Homeland Security in October 2001, proposing the Department of Homeland Security in June 2002, and issuing a national strategy in July 2002. Both the House and Senate have worked diligently on these issues and are deliberating on a variety of homeland security proposals. The House has passed legislation (H.R. 5005) to create a Department of Homeland Security, and the Senate will take its own bill (S. 2452) under consideration after the August recess. While these proposals would both transfer the functions, responsibilities, personnel, and other assets of existing agencies into the departmental structure, each bill has unique provisions not found in the other. For example, while both bills establish an office for State and Local Government Coordination and a first responder council to advise the department, the Senate bill also establishes a Chief Homeland Security Liaison Officer appointed by the Secretary and puts federal liaisons in each state to provide coordination between the department and state and local first responders. The proposal to create a statutorily based Department of Homeland Security holds promise to better establish the leadership necessary in the homeland security area. It can more effectively capture homeland security as a long-term commitment grounded in the institutional framework of the nation's governmental structure. As we have previously noted, the homeland security mission must span the terms of various administrations and individuals. Establishing homeland security leadership by statute will ensure legitimacy, authority, sustainability, and the appropriate accountability to the Congress and the American people. The proposals call for the creation of a Cabinet department that would be responsible for coordination with other executive branch agencies involved in homeland security, including the Federal Bureau of Investigation and the Central Intelligence Agency. Additionally, the proposals call for coordination with nonfederal entities and direct the new Secretary to reach out to state and local governments and the private sector in order to ensure that adequate and integrated planning, training, and exercises occur and that first responders have the necessary equipment; attain interoperability of the federal government's homeland security communications systems with state and local governments' systems; oversee federal grant programs for state and local homeland security efforts; and coordinate warnings and information to state and local government entities and the public. Many aspects of the proposed consolidation of homeland security programs are in line with previous recommendations and show promise toward reducing fragmentation and improving coordination. For example, the new department would consolidate federal programs for state and local planning and preparedness from several agencies and place them under a single organizational umbrella. 
Based on our prior work, we believe that the consolidation of some homeland security functions makes sense and will, if properly organized and implemented, over time lead to more efficient, effective, and coordinated programs; better intelligence sharing; and more robust protection of our people, borders, and critical infrastructure. However, as the Comptroller General has recently testified, implementation of the new department will be an extremely complex task, and in the short term the magnitude of the challenges the new department faces will clearly require substantial time and effort, as well as additional resources, to make it effective. Further, some aspects of the new department, as proposed, may raise other concerns. For example, as we reported on June 25, 2002, the new department could include public health assistance programs that have both basic public health and homeland security functions. These dual-purpose programs have important synergies that should be maintained but could be disrupted by such a change. The recently issued national strategy for homeland security states that it is intended to answer four basic questions: what is "homeland security" and what missions does it entail; what does the nation seek to accomplish, and what are the most important goals of homeland security; what is the federal executive branch doing now to accomplish these goals and what should it do in the future; and what should non-federal governments, the private sector, and citizens do to help secure the homeland. Within the federal executive branch, the key organization for homeland security will be the proposed Department of Homeland Security. The Department of Defense will contribute to homeland security, as will other departments such as the Departments of Justice, Agriculture, and Health and Human Services. The national strategy also makes reference to using tools of government, such as grants and regulations, to improve national preparedness. The national strategy defines homeland security as a concerted national effort to (1) prevent terrorist attacks within the United States, (2) reduce America's vulnerability to terrorism, and (3) minimize the damage and recover from attacks that do occur. This definition should help the government more effectively administer, fund, and coordinate activities both inside and outside the proposed new department and ensure all parties are focused on the same goals and objectives. The three parts of the definition form the national strategy's three objectives. The strategy identifies six critical mission areas and outlines initiatives in each of the six mission areas. It further describes four foundations that cut across these mission areas and all levels of government. These foundations—law; science and technology; information sharing and systems; and international cooperation—are intended to provide a basis for evaluating homeland security investments across the federal government. Table 1 summarizes key intergovernmental roles in each of the six mission areas as presented in the strategy. With regard to the costs of homeland security, the national strategy emphasizes that government should fund only those homeland security activities that are not supplied, or are inadequately supplied, in the market, and that cost sharing between different governmental levels should reflect federalism principles and the different tools of government. 
In terms of the financial contributions made by state and local governments to homeland security, the strategy acknowledges that state and local governments are incurring unexpected costs in defending or protecting their respective communities. These costs include protecting critical infrastructure, improving technologies for information sharing and communications, and building emergency response capacity. At this time, the National Governors' Association estimates that additional homeland security-related costs, incurred since September 11th and through the end of 2002, will reach approximately $6 billion. Similarly, the U.S. Conference of Mayors has estimated the costs incurred by cities during this time period to be $2.6 billion. The proposed department will be a key player in the daunting challenge of defining the roles of the various actors within the intergovernmental system responsible for homeland security. In areas ranging from fire protection to drinking water to port security, the new threats are prompting a reassessment and shift of longstanding roles and responsibilities. Until now, however, proposed shifts in roles and responsibilities have been considered on a piecemeal and ad hoc basis without the benefit of an overarching framework and criteria to guide the process. The national strategy recognizes that the process is challenging because of the structure of overlapping federal, state, and local governments, given that our country has more than 87,000 jurisdictions. The national strategy further notes that the challenge is to develop interconnected and complementary systems that are reinforcing rather than duplicative. The proposals for a Department of Homeland Security call for the department to reach out to state and local governments and the private sector to coordinate and integrate planning, communications, information, and recovery efforts addressing homeland security. This is an important recognition of the critical role played by nonfederal entities in protecting the nation from terrorist attacks. State and local governments play primary roles in performing functions that will be essential to effectively address our new challenges. Much attention has already been paid to their role as first responders in all disasters, whether caused by terrorist attacks or natural hazards. The national strategy emphasizes the critical role state and local governments play in homeland security and the need for coordination among all levels of government, stressing that homeland security is a shared responsibility. In addition, the national strategy includes several initiatives designed to improve partnerships and coordination. Table 1 provides several examples of areas with key intergovernmental roles and coordination. For example, there are initiatives to improve intergovernmental law enforcement coordination and to enable effective partnerships with state and local governments and the private sector in critical infrastructure protection. States are asked to take several legal initiatives, such as coordinating suggested minimum standards for state driver's licenses and reviewing quarantine authorities. Many initiatives are intended to develop or enhance first responder capabilities, such as initiatives to improve the technical capabilities of first responders or to enable seamless communication among all responders. In many cases, these initiatives will rely on federal, state, and local cooperation, some standardization, and the sharing of costs. 
Achieving national preparedness and response goals hinges on the federal government's ability to form effective partnerships with nonfederal entities. Therefore, federal initiatives should be conceived as national, not federal, in nature. Decision makers have to balance the national interest in prevention and preparedness with the unique needs and interests of local communities. A "one-size-fits-all" federal approach will not serve to leverage the assets and capabilities that reside within state and local governments and the private sector. By working collectively with state and local governments, the federal government gains the resources and expertise of the people closest to the challenge. For example, responsibility for protecting infrastructure such as water and transit systems lies first and most often with nonfederal levels of government. Just as partnerships offer opportunities, they also pose risks based upon the different interests reflected by each partner. From the federal perspective, there is the concern that state and local governments may not share the same priorities for the use of federal funds. This divergence of priorities can result in state and local governments simply replacing ("supplanting") their own previous levels of commitment in these areas with the new federal resources. From the state and local perspective, engagement in federal programs opens them up to potential federal preemption and mandates. From the public's perspective, partnerships, if not clearly defined, risk blurring responsibility for the outcome of public programs. Our fieldwork at federal agencies and at local governments suggests a shift is potentially underway in the definition of roles and responsibilities among federal, state, and local governments, with far-reaching consequences for homeland security and accountability to the public. The challenges posed by the new threats are prompting officials at all levels of government to rethink long-standing divisions of responsibility for such areas as fire services, local infrastructure protection, and airport security. Current homeland security proposals recognize that the unique scale and complexity of these threats call for a response that taps the resources and capacities of all levels of government as well as the private sector. In many areas, these proposals would impose a stronger federal presence in the form of new national standards or assistance. For instance, the Congress is considering proposals that would mandate new vulnerability assessments and protective measures for local communities' drinking water facilities. Similarly, new federal rules have required local airport authorities to provide new levels of protection for security around airport perimeters. The block grant proposal for first responders would mark a dramatic upturn in the magnitude and role of the federal government in providing assistance and standards for fire service training and equipment. Additionally, the national strategy suggests initiatives for an expanded state role in several areas. For example, there are no national or agreed-upon state standards for driver's license content, format, or acquisition procedures. The strategy states that the federal government should support state-led efforts to develop suggested minimum standards for drivers' licenses. In another example, in order to suppress money laundering, the strategy recommends that states assess the current status of their regulation of providers of financial services and work to adopt uniform laws as necessary. 
Governments at the local level are also moving to rethink roles and responsibilities to address the unique scale and scope of the contemporary threats from terrorism. Numerous local general-purpose governments and special districts co-exist within metropolitan regions and rural areas alike. Many regions are starting to assess how to restructure relationships among contiguous local entities to take advantage of economies of scale, promote resource sharing, and improve coordination of preparedness and response on a regional basis. In our case studies of five metropolitan areas, we identified several common forms of regional cooperation and coordination, including special task forces or working groups, improved collaboration among public health entities, increased countywide planning, mutual aid agreements, and communications. These partnerships are at varying stages of development and are continuing to evolve. Table 2 summarizes these initiatives. Although promising greater levels of protection than before, these shifts in roles and responsibilities have been developed on an ad hoc, piecemeal basis without the benefit of common criteria. An ad hoc process may not capture the real potential each actor in our system offers. Moreover, a piecemeal redefinition of roles risks further fragmenting responsibility for homeland security within local communities, blurring lines of responsibility and accountability for results. While federal, state, and local governments all have roles to play, care must be taken to clarify who is responsible for what so that the public knows whom to contact to address their problems and concerns. Current homeland security initiatives provide an opportunity to more systematically identify the unique resources and capacities of each level of government and better match these capabilities to the particular tasks at hand. If implemented in partnership with state and local governments, the national strategy can also promote the participation, input, and buy-in of the nonfederal partners whose cooperation is essential for success. The proposed department, in fulfilling its broad mandate, has the challenge of developing a national performance focus. The national strategy is a good start in defining strategic objectives and related mission areas, plus foundations that cut across the mission areas. The national strategy's initiatives to implement the objectives under the related mission and foundation areas extend from building capabilities to achieving specific outcomes. According to the national strategy, each department and agency is to be held accountable for its performance on homeland security efforts. However, the strategy often does not provide a baseline set of goals and measures upon which to assess and improve its many initiatives to prevent attacks, reduce the nation's vulnerability to attacks, or minimize the damage and recover from attacks that do occur. For example, the initiative of creating "smart borders" requires a clear specification of what is expected of a smart border, including consideration of the security and economic aspects of moving people and goods. Specific performance goals and measures for many initiatives will be developed at a later date. The strategy states that each department or agency will create benchmarks and other performance measures to evaluate progress and allocate future resources. 
Performance measures will be used to evaluate the effectiveness of each homeland security program, allowing agencies to measure their progress, make resource allocation decisions, and adjust priorities. As the national strategy and related implementation plans evolve, we would expect clearer performance expectations to emerge. Given the need for a highly integrated approach to the homeland security challenge, national performance goals and measures may best be developed in a collaborative way involving all levels of government and the private sector. Assessing the capability of state and local governments to respond to catastrophic terrorist attacks is an important feature of the national strategy and of the responsibilities of the proposed new department. The President's fiscal year 2003 budget proposal acknowledged that our capabilities for responding to a terrorist attack vary widely across the country. The national strategy recognizes the importance of standards and performance measures in areas such as training, equipment, and communications. For example, the national strategy proposes the establishment of national standards for emergency response training and preparedness. These standards would require certain coursework for individuals to receive and maintain certification as first responders and for state and local governments to receive federal grants. Under the strategy, the proposed department would establish a national exercise program designed to educate and evaluate civilian response personnel at all levels of government. It would require individuals and government bodies to successfully complete at least one exercise every year. The department would use these exercises to measure performance and allocate future resources. Standards are being developed in other areas associated with homeland security, yet formidable challenges remain. For example, work on national standards that would apply to all ports and all public and private facilities is well under way. In preparing to assess security conditions at 55 U.S. ports, the Coast Guard's contractor has been developing a set of standards since May 2002. These standards cover such things as preventing unauthorized persons from accessing sensitive areas, detecting and intercepting intrusions, and checking the backgrounds of those whose jobs require access to port facilities. However, challenges remain in finalizing a complete set of standards for the level of security needed in the nation's ports, resolving issues between key stakeholders that have conflicting or competing interests, and establishing mechanisms for enforcement. Moreover, because security at ports is a concern shared among federal, state, and local governments, as well as among private commercial interests, the issue of who should pay to finance antiterrorism activities may be difficult to resolve. Communications is an example of an area for which standards have not yet been developed but for which various emergency managers and other first responders have consistently emphasized the need. State and local governments often report that there are deficiencies in their communications capabilities, including the lack of interoperable systems. The national strategy recognizes that it is crucial for response personnel to have and use equipment, systems, and procedures that allow them to communicate. 
Therefore, the strategy calls for the proposed Department of Homeland Security to develop a national communication plan to establish protocols (who needs to talk to whom), processes, and national standards for technology acquisition. According to the national strategy, this is a priority for fiscal year 2003 funding, which would tie all federal grant programs that support state and local purchases of terrorism-related communications equipment to this communication plan. The establishment of specific national goals and measures for homeland security initiatives, including preparedness, will not only go a long way toward assisting state and local entities in determining successes and areas needing improvement, but could also serve as a basis for assessing the effectiveness of federal programs. The Administration should take advantage of the Government Performance and Results Act (GPRA) and its performance tools of strategic plans, annual performance plans and measures, and accountability reports for homeland security implementation planning. At the department and agency level, until the new department is operational, GPRA can be a useful tool in developing homeland security implementation plans within and across federal agencies. Given the recent and proposed increases in homeland security funding, as well as the need for real and meaningful improvements in preparedness, establishing clear goals and performance measures is critical to ensuring both a successful and a fiscally responsible effort. The choice and design of the policy tools the federal government uses to engage and involve other levels of government and the private sector in enhancing homeland security will have important consequences for performance and accountability. Governments have a variety of policy tools, including grants, regulations, tax incentives, and information-sharing mechanisms, to motivate or mandate other levels of government or the private sector to address security concerns. The choice of policy tools will affect the sustainability of efforts, accountability and flexibility, and the targeting of resources. The design of federal policy will play a vital role in determining success and ensuring that scarce federal dollars are used to achieve critical national goals. The national strategy acknowledges that providing homeland security is a responsibility shared among federal, state, and local governments and the private sector, and it recognizes the importance of using tools of government such as grants, regulations, and information sharing to improve national preparedness. The federal government often uses grants to state and local governments as a means of delivering federal assistance. Categorical grants typically permit funds to be used only for specific, narrowly defined purposes. Block grants typically can be used by state and local governments to support a range of activities aimed at achieving a broad national purpose and provide a great deal of discretion to state and local officials. In designing grants, it is important to (1) target the funds to states and localities with the greatest need, based on the highest risk and the lowest capacity to meet these needs from their own resource bases; (2) discourage the replacement of state and local funds with federal funds, commonly referred to as supplantation, with a maintenance-of-effort requirement that recipients maintain their previous level of funding; and (3) strike a balance between accountability and flexibility. 
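As a rough illustration of the first two design considerations just listed, the sketch below applies a hypothetical maintenance-of-effort test and a hypothetical need-based weighting of risk against fiscal capacity. All names, figures, and weights are invented; no actual grant formula is depicted.

```python
# Hypothetical illustration of two grant-design mechanisms discussed above:
# (1) a maintenance-of-effort test to discourage supplantation and
# (2) need-based targeting that weighs risk against fiscal capacity.
# All names, figures, and weights are invented for illustration.

def meets_maintenance_of_effort(prior_own_funds: float, current_own_funds: float) -> bool:
    """A recipient must sustain at least its prior level of own-source spending."""
    return current_own_funds >= prior_own_funds

def need_score(risk_index: float, capacity_index: float) -> float:
    """Higher risk and lower fiscal capacity yield a larger share of funds."""
    return risk_index / capacity_index

jurisdictions = {
    "City A": {"risk": 0.9, "capacity": 0.5},  # high risk, low capacity
    "City B": {"risk": 0.4, "capacity": 1.2},  # lower risk, higher capacity
}

total_need = sum(need_score(j["risk"], j["capacity"]) for j in jurisdictions.values())
for name, j in jurisdictions.items():
    share = need_score(j["risk"], j["capacity"]) / total_need
    print(f"{name}: {share:.0%} of the grant pool")

# A recipient that cuts its own spending from 10 to 9 would fail the test.
print(meets_maintenance_of_effort(prior_own_funds=10.0, current_own_funds=9.0))
```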
At their best, grants can stimulate state and local governments to enhance their preparedness to address the unique threats posed by terrorism. Ideally, grants should stimulate higher levels of preparedness and avoid simply subsidizing functions that are traditionally state or local responsibilities. One approach used in other areas is the "seed money" model, in which federal grants stimulate initial state and local activity with the intent of transferring responsibility for sustaining support over time to state and local governments. Recent funding proposals, such as the $3.5 billion block grant for first responders contained in the president's fiscal year 2003 budget, have included some of these provisions. This grant would be used by state and local governments to purchase equipment; train personnel; and exercise, develop, or enhance response plans. Once the details of the grant have been finalized, it will be useful to examine the design to assess how well the grant targets funds, discourages supplantation, and balances accountability and flexibility, and whether it provides temporary "seed money" or represents a long-term funding commitment. Other federal policy tools can also be designed and targeted to elicit a prompt, adequate, and sustainable response. In the area of regulatory authority, the federal, state, and local governments share authority for setting standards through regulations in several areas, including infrastructure and programs vital to preparedness (for example, transportation systems, water systems, and public health). In designing regulations, key considerations include how to provide federal protections, guarantees, or benefits while preserving an appropriate balance between federal and state and local authorities and between the public and private sectors. Regulations have recently been enacted in the area of infrastructure. For example, a new federal mandate requires that local drinking water systems in cities above a certain size provide a vulnerability assessment and a plan to remedy vulnerabilities as part of ongoing EPA reviews, while the Aviation and Transportation Security Act grants the Department of Transportation authority to order the deployment of local law enforcement personnel to provide perimeter access security at the nation's airports. In designing a regulatory approach, the challenges include determining who will set the standards and who will implement or enforce them. Several models of shared regulatory authority offer a range of approaches that could be used in designing standards for preparedness. Examples of these models range from preemption through fixed federal standards to state and local adoption of voluntary standards formulated by quasi-official or nongovernmental entities. As the administration noted, protecting America's infrastructure is a shared responsibility of federal, state, and local government, in active partnership with the private sector, which owns approximately 85 percent of the nation's critical infrastructure. To the extent that private entities will be called upon to improve security over dangerous materials or to protect critical infrastructure, the federal government can use tax incentives to encourage such activities. Tax incentives are the result of special exclusions, exemptions, deductions, credits, deferrals, or tax rates in the federal tax laws. 
Unlike grants, tax incentives do not generally permit the same degree of federal oversight and targeting, and they are generally available by formula to all potential beneficiaries who satisfy congressionally established criteria. Since the events of September 11th, a task force of mayors and police chiefs has called for a new protocol governing how local law enforcement agencies can assist federal agencies, particularly the FBI. As the U.S. Conference of Mayors noted, a close working partnership of federal and local law enforcement agencies, which includes the sharing of information, will expand and strengthen the nation's overall ability to prevent and respond to domestic terrorism. The USA Patriot Act provides for greater sharing of information among federal agencies. An expansion of this act has been proposed (S. 1615; H.R. 3285) that would provide for information sharing among federal, state, and local law enforcement agencies. In addition, the Intergovernmental Law Enforcement Information Sharing Act of 2001 (H.R. 3483), which you sponsored, Mr. Chairman, addresses a number of information-sharing needs. For instance, the proposed legislation provides that the Attorney General expeditiously grant security clearances to Governors who apply for them and to state and local officials who participate in federal counterterrorism working groups or regional task forces. The national strategy also includes several information-sharing and systems initiatives to facilitate the dissemination of information from the federal government to state and local officials. For example, the strategy supports building and sharing law enforcement databases, secure computer networks, secure video teleconferencing capabilities, and more accessible websites. It also states that the federal government will make an effort to remove classified information from some documents to facilitate their distribution to more state and local authorities. The recent publication of the national strategy is an important initial step in defining homeland security, setting forth key strategic objectives, and specifying initiatives to implement them. The proposals for the Department of Homeland Security represent recognition by the administration and the Congress that much still needs to be done to improve and enhance the security of the American people and our country's assets. The proposed department will clearly have a central role in the success of efforts to strengthen homeland security and would have primary responsibility for many of the initiatives in the national homeland security strategy. Moreover, given the unpredictable characteristics of terrorist threats, it is essential that the strategy be implemented at a national rather than federal level, with specific attention given to the important and distinct roles of state and local governments. Accordingly, decision makers will have to balance the federal approach to promoting homeland security with the unique needs, capabilities, and interests of state and local governments. Such an approach offers the best promise for sustaining the level of commitment needed to address the serious threats posed by terrorism. This completes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-9573 or JayEtta Hecker at (202) 512-2834. 
Other key contributors to this testimony include Matthew Ebert, Thomas James, David Laverny-Rafter, Yvonne Pufahl, Jack Schulze, and Amelia Shachoy.

Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002.
Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges. GAO-02-971T. Washington, D.C.: July 25, 2002.
Homeland Security: Critical Design and Implementation Issues. GAO-02-957T. Washington, D.C.: July 17, 2002.
Homeland Security: New Department Could Improve Coordination but Transferring Control of Certain Public Health Programs Raises Concerns. GAO-02-954T. Washington, D.C.: July 16, 2002.
Critical Infrastructure Protection: Significant Homeland Security Challenges Need to Be Addressed. GAO-02-918T. Washington, D.C.: July 9, 2002.
Homeland Security: New Department Could Improve Biomedical R&D Coordination but May Disrupt Dual-Purpose Efforts. GAO-02-924T. Washington, D.C.: July 9, 2002.
Homeland Security: Title III of the Homeland Security Act of 2002. GAO-02-927T. Washington, D.C.: July 9, 2002.
Homeland Security: Intergovernmental Coordination and Partnership Will Be Critical to Success. GAO-02-901T. Washington, D.C.: July 3, 2002.
Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting. GAO-02-893T. Washington, D.C.: June 28, 2002.
Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002.
Homeland Security: Proposal for Cabinet Agency Has Merit, But Implementation Will Be Pivotal to Success. GAO-02-886T. Washington, D.C.: June 25, 2002.
Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002.
National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy. GAO-02-811T. Washington, D.C.: June 7, 2002.
Homeland Security: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: April 11, 2002.
Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002.
Homeland Security: Progress Made, More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002.
Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001.
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001.
Homeland Security: Need to Consider VA's Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001.
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001.
Homeland Security: A Framework for Addressing the Nation's Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001.
Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002.
Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002.
Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002.
Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002.
Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-01-162T. Washington, D.C.: October 17, 2001.
Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.
Combating Terrorism: Actions Needed to Improve DOD's Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001.
Combating Terrorism: Comments on H.R. 525 to Create a President's Council on Domestic Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001.
Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001.
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001.
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001.
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000.
Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000.
Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999.
Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999.
Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999.
Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999.
Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999.
Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998.
Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998.
Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998.
Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997.
Bioterrorism: The Centers for Disease Control and Prevention's Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001.
Bioterrorism: Review of Public Health and Medical Preparedness. GAO-02-149T. Washington, D.C.: October 10, 2001.
Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 10, 2001.
Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001.
Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001.
Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001.
West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000.
Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999.
Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
Disaster Assistance: Improvement Needed in Disaster Declaration Criteria and Eligibility Assurance Procedures. GAO-01-837. Washington, D.C.: August 31, 2001.
FEMA and Army Must Be Proactive in Preparing States for Emergencies. GAO-01-850. Washington, D.C.: August 13, 2001.
The challenges posed by homeland security exceed the capacity and authority of any one level of government. Protecting the nation against these threats calls for a truly integrated approach, bringing together the resources of all levels of government. The proposed Department of Homeland Security will clearly have a central role in efforts to enhance homeland security. The proposed consolidation of homeland security programs has the potential to reduce fragmentation, improve coordination, and clarify roles and responsibilities. Realistically, the challenges that the new department faces will clearly require substantial time and effort, and it will take additional resources to make it effective. Moreover, formation of a department should not be considered a replacement for the timely issuance of a national homeland security strategy to guide implementation of the complex mission of the department. Appropriate roles and responsibilities within and between the levels of government and with the private sector are evolving and need to be clarified. New threats are prompting a reassessment and shifting of long-standing roles and responsibilities, but these shifts are being considered on a piecemeal basis without benefit of an overarching framework and criteria to guide the process. A national strategy could provide such guidance by more systematically identifying the unique capacities and resources of each level of government to enhance homeland security and by providing increased accountability within the intergovernmental system. The nation does not yet have performance goals and measures upon which to assess and improve preparedness and develop common criteria that can demonstrate success, promote accountability, and determine areas where additional resources are needed, such as improving communications and equipment interoperability. A careful choice of the most appropriate tools is critical to achieve and sustain national goals. The choice and design of policy tools, such as grants, regulations, and tax incentives, can enhance the capacity of all levels of government to target areas of highest risk and greatest need, promote shared responsibilities, and track progress toward achieving preparedness goals.
The current health information system used by VA clinicians is VistA. Since the inception of this system in 1983, VHA has made numerous enhancements to its functionality. A significant example was the release in 1996 of the Computerized Patient Record System, which enabled the department to provide an individual electronic medical record for each VA patient. By fiscal year 2007, the implementation of an imaging capability (VistA Imaging) at all the department’s facilities further enhanced the system by enabling multimedia data, such as radiology images, to be linked to a patient’s electronic medical record. These collective enhancements to VistA resulted in a comprehensive, integrated, electronic medical record for each patient that is viewable by all of the department’s clinicians at all of its health care facilities, thus eliminating the need for paper medical records. According to VHA officials, VistA was developed based on close collaboration between staff in the medical facilities and VHA’s IT personnel, with the intention of providing a system that met the clinicians’ needs. In this regard, clinicians and IT personnel in the various medical facilities collaborated to define the system’s requirements and, in certain cases, carry out its development and implementation. For example, development of VistA Imaging resulted from a clinician building a prototype at home before it was fielded at a medical facility. Although system enhancements to VistA were disseminated through a central office, staff at a medical center could develop and implement applications at the local level to facilitate the potentially different functions at each location. According to the department, as a result of VHA’s decentralized development approach, VistA now consists of 104 separate computer applications. These include 56 health provider applications; 19 management and financial applications; 13 crosscutting applications such as patient data exchange; 8 registration, enrollment, and eligibility applications; 5 health data applications; and 3 information and education applications (app. III contains a complete list of these applications). Besides being numerous, these applications have been customized at all 128 VA sites. According to VA, this customization increases the cost of maintaining the system, as it requires that maintenance also be customized. VA has reported expending significant resources (approximately $2.5 billion) to maintain the system between 2001 and 2007. Further, according to the department, limitations in the system need to be addressed for the system to remain effective. As mentioned, some VistA applications are more than 20 years old, and VistA does not standardize data, which is a prerequisite to making data computable. In addition, according to VA, VistA stores data in an organizational format based on the location where care is provided, rather than maintaining a global record for each individual patient, and it is programmed in a language for which there is a continually decreasing supply of qualified software developers. Accordingly, in 2001, VHA undertook the HealtheVet initiative in order to standardize its health care system and eliminate the approximately 128 different systems used by its field locations. As we reported in 2003, it planned to develop or enhance specific areas of system functionality through six projects, which were to be completed between 2006 and 2012 (shown in table 1). 
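The storage distinction described above (data organized by the location where care is provided rather than as a single global record for each patient) can be pictured schematically. The following Python sketch is notional; the site identifiers and sample entries are invented for illustration.

```python
# Notional sketch contrasting the two data organizations described above.
# In a location-oriented store, one patient's history is scattered across
# per-site records; a patient-centered store keeps a single global record.
# Site names and sample entries are invented for illustration.
location_oriented = {
    "Site 523": {"patient-1234": ["2006 cardiology note"]},
    "Site 688": {"patient-1234": ["2007 radiology image"]},
}

patient_centered = {
    "patient-1234": [
        {"site": "Site 523", "entry": "2006 cardiology note"},
        {"site": "Site 688", "entry": "2007 radiology image"},
    ],
}

# Assembling one patient's history from a location-oriented store means
# querying every site; a patient-centered store needs a single lookup.
scattered = [entry
             for site_records in location_oriented.values()
             for entry in site_records.get("patient-1234", [])]
print(scattered)
print(patient_centered["patient-1234"])
```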
The six projects shown in table 1 did not represent all the functionality provided by the 104 VistA applications; rather, they were high-priority projects that were under way at the time. In 2004, VA contracted with the Software Engineering Institute (SEI) for a technical review of the HealtheVet program. As a result of this review, SEI concluded, among other things, that VA needed to improve and integrate the governance of the HealtheVet program, develop an organizational structure for the program, define the program's vision, and define the path for the transition from VistA to HealtheVet. In 2005, VA began to take action on the SEI recommendations. For example, the department began to develop a HealtheVet organizational structure, including defining the responsibilities of a project management office. In addition, it developed an initial draft for HealtheVet governance that defined decision-making processes, established guidelines for issue identification and escalation, defined areas of control and levels of authority, and established accountability. However, the effort to develop a governance plan and structure was superseded by a major realignment of the department's overall IT management structure. This realignment, initiated in October 2005, was undertaken with the goal of providing greater authority and accountability over VA resources by centralizing IT management under the department's CIO; an additional goal was to standardize operations and systems development across the department using new management processes based on industry best practices. Under the department's realigned structure, the Assistant Secretary for Information and Technology serves as VA's CIO. The CIO is assisted by one Principal Deputy Assistant Secretary and five Deputy CIOs. In particular, the Deputy CIO for Enterprise Development serves as the chief advisor to the CIO for all enterprise applications development activities, including HealtheVet; this official heads the Office of Enterprise Development, which is responsible for performing enterprise applications development. Before the realignment, funding and approval of IT were controlled by each medical center director, which enabled local IT personnel to make changes to VistA applications that were specific to the local medical facility. As a result of the realignment, the funding for all IT development projects, including both VistA and HealtheVet projects, was moved under the control of the department's CIO. The business owners (that is, VHA for VistA and HealtheVet) retain responsibility for the development and prioritization of requirements and for program oversight, while staff in the Office of Enterprise Development are responsible for the planning and execution of information technology development projects. As of June 2008, the HealtheVet program has eight major software development projects under way. One of these is to continue the development and population of an operational database that currently contains health data. Five are applications development projects, of which four are health care applications currently in development and one is a financial application in the planning stage. The remaining two projects are to enhance current VistA systems, prepare them for transition to HealtheVet, and develop new applications. However, since 2003, the time frames for completing the projects and the HealtheVet system as a whole have been extended from 2012 to 2018. 
Department officials acknowledged that VA has experienced significant delays in developing and implementing HealtheVet, attributed the delays to various factors, and stated that they are working to address them by, among other things, using an incremental development life-cycle approach and establishing more realistic time frames.

Of the eight projects in progress, one is currently operational, though not yet completed. The Health Data Repository (HDR) database, which became operational in 2006, currently contains standardized health data in three areas: vital signs, allergies, and outpatient pharmacy. These data were addressed first because they were given high priority by clinicians. As we have previously reported, the department is currently using HDR to help achieve interoperability with DOD to support the exchange of computable electronic patient information. The HDR project is currently standardizing and converting laboratory data so that they can be added to the repository next, with further types of health data (for example, inpatient pharmacy, dental, and ophthalmology) to be added as the development of the HealtheVet system continues.

Four projects are developing health care information applications:

The Scheduling application is planned for initial deployment at one site (a VA medical center in Muskogee, Oklahoma) in September 2008; full deployment to all medical facilities is planned for 2011.

For the Pharmacy project, final testing of one function (order checking) is scheduled to begin in September 2008, and new drug file and pharmacy data management systems are scheduled to be implemented in January 2009. Remaining system functions to be developed include inventory, order entry and clinical monitoring, medication dispensing, and medication administration. Further development of the Pharmacy application depends on the results of an ongoing analysis and evaluation of the costs of building and deploying these functions. This analysis, for which a contract was issued in February 2008, is due in July 2008.

The new Laboratory system is scheduled for independent verification and validation in October 2008. National deployment is planned to begin in 2010, with a phased implementation across the department expected to take place over the next 5 years.

The initial implementation of the Enrollment application is scheduled for August 2008. This project is to provide an enrollment workflow for use at VA's Health Eligibility Center. An enhancement, scheduled for implementation by July 2009, is to improve communications with veterans and provide operational efficiencies for VA staff at the Health Eligibility Center and medical centers in coordinating changes in veterans' eligibility. Finally, in December 2011, the department expects to complete a modernized registration capability.

A fifth project (Billing) is for a new financial system, which is in the planning stage. The current Billing project is a second attempt to modernize the billing system. Under the first attempt, VA awarded a contract in July 2003 to implement a commercial product to provide an updated billing capability for the department (called at that time the Patient Financial Services System); however, after about $107 million was spent on this effort, the contract was terminated in September 2006 by mutual agreement between the department and the contractor. The department expects to complete national deployment of the current project (called the Revenue Improvements and System Enhancement project) at the end of fiscal year 2015. 
Finally, the program has two ongoing projects that are focused on developing and implementing required enhancements to existing VistA applications and laying the foundation for transitioning these applications to HealtheVet:

The focus of the VistA application development project in the near term is to develop the critical enhancements and fixes to the VistA system that are necessary to ensure compliance with changes to patient enrollment and billing requirements and to accomplish other critical data updates. In fiscal year 2010, the emphasis for this initiative will shift from fixes and enhancements to new development work aimed at the transition to HealtheVet. The initiative will then encompass building many of the replacement systems within HealtheVet.

The VistA foundations modernization project includes work on architecture and testing services, including a comprehensive testing suite and strategy for all VistA and HealtheVet applications. In fiscal year 2009, several common services—the deployment toolkit, business rules engine, and workflow engine—are expected to be delivered, along with new testing services capabilities and updates to the overall architecture. This work is expected to continue until the completion of the HealtheVet initiative.

Table 2 summarizes the status of these projects. From the inception of the initiative in 2001 through fiscal year 2007, VA reported spending almost $600 million for the development of these eight projects. The department estimates that it will incur additional development costs of approximately $535 million for the initiative during fiscal years 2008 and 2009, and it estimates the total development cost of HealtheVet at $11 billion through its completion in 2018. Table 3 shows the reported development costs through fiscal year 2007 and estimated development costs for fiscal years 2008 and 2009.

In addition, the time frames for completing the projects and the HealtheVet system as a whole have been extended since the inception of the HealtheVet initiative. As shown in table 1, the time frames as of 2003 envisioned completion by 2012; current time frames extend the completion date to 2018.

Officials from VA's Office of Information and Technology acknowledged that VA had experienced significant delays in developing and implementing HealtheVet. These officials attributed the delays to various factors, including changes in technical and deployment approaches, lack of management continuity, and loss of experienced contractor staff. For example, changes in technical and deployment approaches delayed the development of the Scheduling, Health Data Repository, Pharmacy, Laboratory, and Enrollment projects. In particular, for Scheduling, Health Data Repository, Laboratory, and Enrollment, VHA has alternated between developing the systems in-house and using a commercial off-the-shelf product. In addition, programming languages for the Scheduling and Enrollment projects changed. Further, VHA changed the deployment approach for Pharmacy annually between 2003 and 2007. Several projects experienced management turnover; for example, the Enrollment project has had multiple program managers since it began, and the VistA application development and VistA foundations modernization projects have seen more than one change in program management. Finally, the Scheduling, Health Data Repository, Laboratory, VistA application development, and VistA foundations modernization projects were delayed by the loss of experienced contractor staff. 
These initiatives were supported by an overall contract for HealtheVet. When this contract expired in September 2006, it was renewed on a monthly basis to ensure continuity of work until a new contract was awarded. However, task orders from the new contract, which was signed in November 2006, were not issued until June, July, and September 2007. According to department officials, as a result of these delays, the experienced contractor staff who supported the initiatives had moved to other work, corporate knowledge for these initiatives was lost, and new contractor staff had to be hired and educated.

Department officials stated that they are working to address the delays by using an incremental development life-cycle approach and establishing more realistic time frames for the effort. In addition, to address future contracting issues, the department is establishing an integrated product team composed of IT, program, and acquisition personnel.

Under VA's current strategy for HealtheVet, developed in August 2006, the department is taking an incremental approach to the remainder of the initiative, based on six phases (referred to as “blocks”) that are to be completed by 2018. Under this strategy, the department plans to replace the 104 VistA applications that are currently in use (see app. III) with 67 applications, 3 databases, and 10 common services. Figure 1 provides a high-level overview of the strategy, and table 4 lists all the planned software development applications by block. As table 4 shows, work has not yet been initiated on many applications that are planned for the final system. Further, although the department has established interim dates for completing projects that are under way, as of mid-June 2008, the department had not developed a detailed schedule or approach for completing the HealtheVet initiative, including the remaining 62 software applications, other than to state that it intends to complete all six blocks of the initiative by 2018.

Industry best practices and IT project management principles stress the importance of accountability and sound planning for any project, particularly an effort of the magnitude and complexity of HealtheVet. Inherent in such planning is the development and use of a project management plan that describes, among other factors, the project's scope, implementation strategy, lines of responsibility, security requirements, resources, and estimated schedule for development and implementation. Specifically, an effective project management plan incorporates all the critical areas of system development, serves as a means of determining what needs to be done and when, and provides a basis for measuring progress. Such a plan also includes an integrated schedule that considers all dependencies and includes subtasks so that deadlines are realistic, and it incorporates reviews to allow oversight and approval by high-level managers.

A key component of planning is determining the resources necessary to accomplish the myriad tasks needed throughout the life cycle of the initiative. In April 2008, VA provided an $11 billion cost estimate to complete HealtheVet; however, it has not yet independently validated this estimate. We stress in our Cost Assessment Guide that having a validated cost estimate is essential to improve the accuracy of cost, schedule, and performance management. Validated cost estimates are also important to facilitate program approval and determine the funding needed for HealtheVet. 
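To illustrate the kind of dependency analysis an integrated schedule supports, the following minimal sketch computes the earliest possible finish date for a chain of dependent subtasks. It is purely illustrative: the task names, durations, and dependencies are hypothetical placeholders, not drawn from VA's actual HealtheVet schedule.

    from functools import lru_cache

    # Hypothetical subtasks: name -> (duration in months, dependencies).
    # These names and durations are illustrative only; they are not
    # VA's actual HealtheVet projects or schedule.
    TASKS = {
        "standardize_data": (12, ()),
        "load_repository": (6, ("standardize_data",)),
        "build_application": (18, ("load_repository",)),
        "national_deployment": (24, ("build_application",)),
    }

    @lru_cache(maxsize=None)
    def earliest_finish(task):
        """Earliest finish (months from start), honoring every dependency."""
        duration, deps = TASKS[task]
        start = max((earliest_finish(d) for d in deps), default=0)
        return start + duration

    for task in TASKS:
        print(task, "finishes no earlier than month", earliest_finish(task))
    # The chain requires 12 + 6 + 18 + 24 = 60 months, so a completion
    # deadline set without considering these dependencies (say, month 36)
    # cannot be met no matter how the work is prioritized.

An integrated schedule applies this same discipline across all subtasks and dependencies, which is what allows managers to judge whether announced deadlines are achievable.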
Without an integrated plan that includes independently validated cost estimates, VA increases the risk that HealtheVet could incur schedule slippages and cost increases and not achieve its intended outcomes.

In the wake of the realignment of IT resources under central, department-level control, VA leadership endorsed an approach to the oversight and governance of IT development projects that is based on ensuring the involvement of senior management from both the user and the developer organizations. Under this approach, business owners establish IT requirements, business benefits, and priorities and oversee full life-cycle execution of IT programs. The department's CIO organization provides the developers who devise technology solutions for the users. In addition, CIO officials chair a set of IT governance boards that review progress and recommend funding for IT projects; these boards include executive-level representation from business owners.

For the HealtheVet initiative, various levels and types of oversight are currently provided by the business owner (the Veterans Health Administration), the developers (the Office of Enterprise Development within the department's CIO organization), and departmental IT governance boards. However, the business unit has not yet finalized a governance plan or implemented a complete governance structure, several key leadership positions within the developers' organization are either vacant or filled with acting personnel, and the IT governance boards have not yet scheduled critical reviews of HealtheVet projects. Until all elements of governance and oversight are in place, the risk is increased that the HealtheVet initiative may experience cost overruns and continued schedule slippages and may not achieve its intended outcomes.

VHA has not yet established a governance structure for HealtheVet in accordance with the approach endorsed by the department. Under this approach, business unit governance for IT initiatives is provided at several levels. An Executive Steering Committee, chaired by the head of the business unit, provides executive oversight. Reporting to the Executive Steering Committee is an Oversight Board that is responsible for ensuring that all stakeholders are represented in defining requirements, monitoring progress, and determining that the initiative is meeting their needs. Finally, a Program Director is responsible for day-to-day oversight activities to ensure that the technical solution provided by the developers meets business needs (such as requirements development and testing) and for coordinating with the developers' program office.

According to senior management officials, VHA has developed a plan to establish such a governance structure, working with the departmental CIO organization to do so. Officials told us that the plan had been approved by the Under Secretary for Health and was under review and awaiting approval by the Secretary of Veterans Affairs (we anticipate reviewing the plan upon its approval by the Secretary). VHA officials expect the plan to be approved next month; however, they did not provide a schedule for finalizing the plan and implementing the structure. Until the governance structure is implemented, VHA is providing oversight of the HealtheVet initiative through various means. 
For example, according to officials, the former VHA CIO (now the Chief Officer of VHA's Office of Information) briefs the VHA head (the Under Secretary for Health) twice weekly. In addition, the Office of Information holds formal meetings every 2 weeks with the developers (the Office of Enterprise Development in the department's CIO organization) on three or four IT projects (which may include HealtheVet projects). Further, VHA's Office of Information holds meetings with VHA managers who act as business liaisons between VHA and the departmental CIO organization. VHA also has an Information Data Management Committee that establishes priorities for VHA IT investments (including HealtheVet) and makes funding recommendations to the Under Secretary. This committee includes major VHA stakeholders, including headquarters and regional executives, as well as the Chief Officer, who co-chairs the committee.

These means fulfill some of the functions of the governance model endorsed by the department. That is, the Information Data Management Committee performs some of the oversight functions of an Oversight Board, and the Chief Officer coordinates with the developers' program office. However, there is currently no equivalent to an Executive Steering Committee, and there is no Program Director. If the draft governance plan follows the model endorsed by the department, its approval and implementation would provide these elements. Without a complete governance structure in place, the business owners' ability to perform appropriate oversight of the HealtheVet projects may be diminished.

The Office of Enterprise Development within the departmental CIO organization is responsible for development of the HealtheVet projects. This office provides day-to-day oversight and management of the technical development activities. However, several key leadership positions within the Office of Enterprise Development are currently either vacant or filled with acting personnel (see fig. 2 for an organizational chart showing these positions). The position of Assistant Deputy CIO for Program Management is vacant; this position is responsible for activities such as managing a program's portfolio of IT applications during its entire life cycle, as well as for developing and managing project plans and schedules and managing risk. The position of head of Software Engineering is filled by an acting Assistant Deputy CIO; this position has responsibility for overseeing the architecture of an application's technical solution. Another Assistant Deputy CIO position (head of Software Development) is vacant; this position is responsible for ensuring that software deliverables meet their expected requirements.

In commenting on a draft of this report, the department noted that a vacancy announcement for the Assistant Deputy CIO for Program Management position has been posted with a closing date of July 7, 2008. Until these key leadership positions are permanently staffed, the risk is increased that the department's management and control of the HealtheVet initiative will not be efficient and effective. 
In 2007, three VA governance boards for IT investment projects were established; they have the following general responsibilities:

The Business Needs and Investment Board (chaired by the Principal Deputy Assistant Secretary) is to evaluate whether proposed IT investment projects meet business needs.

The Planning, Architecture, Technology, and Services Board (chaired by a Deputy CIO) determines whether IT projects meet technical standards by, among other things, performing milestone reviews.

The Information Technology Leadership Board (chaired by the CIO) uses input from the two other boards to make recommendations to the department's Strategic Management Council for funding the major categories of IT projects.

Although the boards are chaired by officials in the CIO's office, they all include high-level executives from the user organizations. For example, the VHA representative on the Information Technology Leadership Board is the head of VHA—the Under Secretary for Health.

Since being established, the three governance boards have begun providing oversight to ensure that investments align with the department's strategic plan and that business and budget requirements for ongoing and new initiatives meet user demands. In 2007, the three boards evaluated the HealtheVet projects that were proposed for fiscal year 2009, and the Information Technology Leadership Board made funding recommendations to the department's Strategic Management Council. As a result of these deliberations, the department requested about $330 million for HealtheVet development projects for fiscal year 2009.

However, one oversight function has not yet been exercised for the HealtheVet projects: milestone reviews. Milestone reviews, which are a responsibility of the Planning, Architecture, Technology, and Services Board, afford an opportunity for progressive decision making about the program under review and are coupled with authorization for funding. The VA milestone review process includes concept definition, requirements development, system design and prototype, system development and testing, system deployment, and operations and maintenance. Each step in the process has specific and organizationally required exit criteria that must be satisfied before the program can proceed to the next stage.

The Planning, Architecture, Technology, and Services Board has performed one milestone review since being established (a system design and prototype review for another IT development project). However, the board has not yet developed a schedule for any milestone reviews for HealtheVet projects. In particular, although the Enrollment project is scheduled for initial implementation in August 2008, no system deployment milestone review has been scheduled. According to the chair of this board, although no HealtheVet milestone reviews have been scheduled, the board has scheduled an operational test readiness review for a HealtheVet project (the Scheduling project) in June 2008 to verify that the application functions as designed and is ready for initial deployment. Such a review should provide the board with useful information for oversight of this project. Nonetheless, it is important to hold milestone reviews on all projects that are moving from one phase of development to the next. Without milestone reviews of project progress, the governance boards cannot effectively measure progress or determine the funding needed for HealtheVet. 
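Because each phase of this review process must be closed out against required exit criteria before the next may begin, the process can be pictured as a simple phase gate. The sketch below is illustrative only: the phase names follow the VA process described above, but the exit criteria are hypothetical placeholders rather than VA's actual requirements.

    # Illustrative phase-gate model of a milestone review sequence.
    # Phase names follow the VA process described in the report; the
    # exit criteria are hypothetical placeholders, not VA's criteria.
    PHASES = [
        ("concept definition", ["business need documented"]),
        ("requirements development", ["requirements baselined"]),
        ("system design and prototype",
         ["design approved", "prototype demonstrated"]),
        ("system development and testing", ["acceptance tests passed"]),
        ("system deployment", ["deployment review held"]),
        ("operations and maintenance", ["support plan in place"]),
    ]

    def next_gate(completed):
        """Return the first phase whose exit criteria are not all met."""
        for phase, criteria in PHASES:
            unmet = [c for c in criteria if c not in completed]
            if unmet:
                return phase, unmet
        return None, []

    phase, unmet = next_gate({"business need documented",
                              "requirements baselined"})
    print("Blocked at:", phase, "- unmet criteria:", unmet)
    # Blocked at: system design and prototype - unmet criteria:
    # ['design approved', 'prototype demonstrated']

Because funding authorization is coupled to each gate in VA's process, a project that has not passed the review for its current phase should not draw funds for the next one.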
Although VA has made progress on its $11 billion HealtheVet initiative, it has also experienced significant delays, and none of the associated development projects have been completed. Moreover, VA is proceeding with this complex initiative without a project management plan and validated cost estimates to coordinate and guide the effort. At the same time, a governance structure for HealtheVet has not yet been established, and key leadership positions that are responsible for providing day-to-day oversight have not been permanently staffed. Further, the IT governance boards with oversight responsibility for HealtheVet have not yet performed essential reviews of HealtheVet projects to gauge progress and funding requirements, and the department lacks a time frame for doing so. Until the department takes the necessary actions to fully address these matters, it will face the risk that HealtheVet may experience cost overruns and continued schedule slippages and may not achieve its intended outcomes.

To better ensure the success of HealtheVet, we recommend that the Secretary of Veterans Affairs direct the Chief Information Officer to take the following four actions:

Develop a project management plan that encompasses all six blocks of HealtheVet.

Validate cost estimates for all six blocks of HealtheVet.

Expedite efforts to permanently staff the position of the Director of the Program Management office and fill other critical leadership positions in the Office of Enterprise Development.

Develop a schedule for and conduct milestone reviews of the HealtheVet projects.

In addition, to ensure proper oversight of HealtheVet, we recommend that the Secretary of Veterans Affairs direct the Veterans Health Administration Under Secretary to take the following action:

Finalize and implement the plan to establish the HealtheVet governance structure.

In providing written comments on a draft of this report, the Deputy Secretary of Veterans Affairs agreed with our conclusions and concurred with our recommendations. (The department's comments are reproduced in app. II.) The comments described actions planned or being taken that respond to our recommendations. For example, according to the department, the Office of Information and Technology is developing a comprehensive, integrated HealtheVet project management plan, to be completed within 6 months, that is to reflect dependencies between resources and establish a single schedule for all VA medical information technology projects. The department noted that this plan will include the format and schedule for conducting milestone reviews for HealtheVet projects. In addition, the department stated that it has hired a contractor to conduct an independent financial validation of the HealtheVet preliminary cost estimate; this validation includes three phases and is to be completed by February 2009. To address staffing within the Office of Enterprise Development, the department stated that it had posted a vacancy announcement to fill the leadership position for the Program Management Office. Lastly, the department said it expects final review and approval of the HealtheVet governance plan by July 2008. If the actions that the department has planned or undertaken are properly implemented, they should help ensure success with the development and implementation of HealtheVet. The department also provided technical comments on the draft report, which we have incorporated as appropriate. 
As agreed, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and the Secretary of Veterans Affairs. Copies of this report will also be made available to other interested parties on request. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

As requested, the objectives of our review were to determine (1) the status of the HealtheVet initiative, (2) VA's overall plan for completing the initiative, and (3) how VA is providing oversight to ensure the success of the initiative. To address these objectives, we analyzed relevant HealtheVet project and budget documentation and validated our analyses through interviews with knowledgeable VA officials.

To determine the status of the HealtheVet initiative, we reviewed individual HealtheVet documents on system operation and development, time frames, and planned activities. Additionally, we researched the department's expenditures on HealtheVet initiatives through fiscal year 2007 and the department's current estimate of how much it plans to spend in fiscal years 2008 and 2009. We did not assess the accuracy of the cost data provided to us. We supplemented our analyses with interviews of VA personnel involved in the initiative. We also observed demonstrations of scheduling and enrollment prototypes to better understand how HealtheVet initiatives could provide enhanced service to patients and better support VA's medical care providers. Finally, to gain a user perspective on moving from VistA to HealtheVet, we visited the VA Medical Center in Salem, Virginia, because it had recently installed customized enhancements to VistA.

To determine VA's plan for completing HealtheVet, we reviewed the department's strategy and transition plan. We supplemented this review with interviews of responsible officials at the Office of Information and Technology, including the Deputy CIO for Enterprise Development and the Acting Deputy Director of the Program Management Office within the Office of Enterprise Development, to identify the department's current strategy for the completion of HealtheVet. We summarized information obtained through interviews and reviews of HealtheVet documents to illustrate VA's approach to completing the initiative.

To determine how VA is providing oversight for HealtheVet, we reviewed department information technology (IT) governance documents, including the IT Governance Plan, as well as the charters of the three VA IT governance boards, to determine the boards' roles and responsibilities for oversight of VA IT initiatives such as HealtheVet. In addition, we reviewed minutes of the three VA IT governance boards to determine the extent of their oversight of HealtheVet to date. We interviewed the chairman of the Planning, Architecture, Technology, and Services Board to determine that board's plans for conducting future milestone reviews for HealtheVet. We also reviewed the Office of Enterprise Development's organizational structure and responsibilities. 
We interviewed the Chief Officer of VHA's Office of Information and members of his staff to obtain information on the plan under development to provide governance for HealtheVet.

We conducted this performance audit at the Department of Veterans Affairs headquarters in Washington, D.C., and the VA medical center in Salem, Virginia, from July 2007 through June 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

According to VA, all of the functionality delivered by the VistA applications described below will be either rehosted or replaced as part of HealtheVet.

Offers a convenient way for health care providers to view information about multiple patients on a single screen. Users can see at a glance multiple patients for whom they have items that require attention.

Passes final patient results between vendor clinical information systems and VistA.

Enables clinicians to enter, review, and continuously update all order-related information connected with any patient.

Provides a common and consistent data structure for adverse reaction data.

Provides a method for identifying who is authorized to perform various actions on clinical documents.

Assists clinical decision making and educates providers about appropriate care. The primary goal is to provide relevant information to providers at the point of care, to improve care for veterans.

Provides an efficient way for clinicians to order consultations and procedures from other providers or services within the hospital site, at their own facility or another facility.

A clinically oriented, structured report that extracts many kinds of data from VistA and displays them in a standard format.

Provides the clinician with a current and historical view of the patient's health care problems across clinical specialties and allows each identified problem to be traced through the VistA system in terms of treatment, test results, and outcome.

Simplifies the use and management of clinical documents for both clinical and administrative medical facility personnel.

A menu-based system incorporating features necessary for the maintenance of medical center dental records.

Contains important demographic and clinical data on VHA patients identified with Hepatitis C infection.

Designed to allow for the local entry and verification of patient-related data at an individual medical center.

Contains important demographic and clinical data on VHA patients identified with Human Immunodeficiency Virus infection.

Designed to store, in the patient's electronic medical record, all patient intake and output information associated with a hospital stay or outpatient visit.

Supports General Laboratory, Microbiology, Histology, Cytology, Surgical Pathology, Electron Microscopy, Blood Donors, and Blood Bank for managing and automating the workload and reporting process.

Automates record keeping and reporting for all areas of Anatomic Pathology.

Uses data that can be tied primarily to a donor, a patient, or a unit of blood/blood component.

Reduces or eliminates the need for manual ordering and reporting of laboratory results to interface laboratories.

Allows entry, edit, and viewing of data for many medical tests and procedures. 
Provides computer support for both clinical and administrative patient care activities associated with mental health care.

Generates management reports on employees; accumulates daily statistics on the number of patients treated; generates reports on patients by bed section and ward; allows users to enter vital signs, height, and weight for patients; and allows users to generate intake and output reports.

Integrates the automation of many Clinical Nutrition, Food Management, and Management Reports functions.

Automates the tumor registry and supports tumor registrars in abstracting cancer cases, following up on cancer patients, and producing the Hospital Annual Report.

Provides a method to track drug distribution and inventory management within a medical center.

Provides a real-time, point-of-care solution for validating the administration of Unit Dose and intravenous medications to inpatients in medical centers.

Provides a regional system resource to expedite the distribution of mail-out prescriptions to veteran patients.

Provides functionality to monitor and track the receipt, inventory, and dispensing of all controlled substances.

Works toward perpetual inventory for each VA medical facility pharmacy by tracking all drugs through pharmacy locations.

Provides the ability to create and distribute electronic Outpatient Pharmacy claims to insurance companies on behalf of VHA pharmacy prescription beneficiaries in a real-time environment.

Integrates functions from the Intravenous and Unit Dose modules to provide a comprehensive record of medications utilized during hospitalization of the veteran.

Provides pharmacists and their staff with IV labels, manufacturing worksheets, ward lists for order updates, and management reports.

Provides a standard computerized system for dispensing and managing inpatient medications.

Provides standardization of the local drug files in all VA medical facilities.

Provides a way to manage the medication regimen of veterans seen in outpatient clinics and to monitor and manage the workload and costs in the Outpatient Pharmacy.

Makes data extraction reports available at the medical centers and allows local management to use the data to project local drug usage and identify potential drug accountability problem areas.

Provides tools for managing site-configurable data in pharmacy files.

Provides medical centers with the ability to determine whether a patient has been seen at other VA facilities and to request current pharmacy information from those facilities prior to the patient appearing for a scheduled outpatient visit.

In the outpatient setting, patients are assigned a primary care team and provider who are responsible for delivering essential health care, coordinating all health care services, and serving as the point of access for specialty care. This application allows a user to create, set up, and define teams; create and assign positions to the team; assign staff to the positions; assign patients to the team; and assign patients to providers' positions.

Automates purchasing, provides control and auditing of expenditures, and generates management reports.

Used to enter, edit, and retrieve data for each episode of care. 
Automates the entire range of diagnostic functions performed in imaging departments, including order entry of requests, registration of patients for exams, processing of exams, recording of reports/results, verification of reports on-line, displaying/printing results for clinical staff, automatic tracking of requests/exams/reports, and generation of management statistics/reports, both recurring and ad hoc.

Used by clinicians to place orders for certain types of medical products and services that are maintained under contract by the Denver Distribution Center. The most substantial product line is custom hearing aids.

Automates all aspects of the outpatient appointment process.

Designed to facilitate the Social Work Service functions within a medical facility; composed of Case Management, Clinical Assessment, and Community Resources.

Permits the identification and tracking of patients with a spinal cord dysfunction due to trauma or disease and the medical resources utilized during their treatment.

Integrates scheduling surgical cases and tracking clinical patient data to provide a variety of administrative and clinical reports.

Provides medical facilities a mechanism to track information relating to both surgical risk and operative mortality.

Facilitates medical decision making by delivering complete multimedia patient information to the clinician's desktop in an integrated manner. Includes the components used to capture, store, and display all types of images.

Allows scanned and electronically generated documents to be associated with the online patient record and displayed on clinical workstations.

Allows radiology departments to operate without generating X-ray film.

Captures, stores, and displays images for a particular service or specialty.

Enables the Visual Impairment Service Team to easily manage and track activities and services provided to blinded veterans in their service areas.

Designed to store, in the patient's electronic medical record, all vital signs and various measurements associated with a patient's hospital stay or outpatient clinic visit.

Establishes a computerized tracking system that generates aggregate data at the facility level to assist in the assessment of various aspects of care provided to women veterans.

Automates the debt collection process; a billing module is available to create non-medical care debts.

Creates and prints encounter forms that display relevant clinical information, and provides for the entry of clinical encounter data for local and national needs.

Provides the ability to perform the functions involved in issuing beneficiary travel pay.

Provides on-line access to medical data to Veterans Benefits Administration Rating Veteran Service Representatives and Decision Review Officers. It also creates a more efficient means of requesting compensation and pension examinations.

CPT codes are used for reporting medical services and procedures performed by physicians. The software includes all CPT codes to code outpatient services for reimbursement and workload purposes.

Provides a means of exporting data from selected VistA applications and transmitting it to a Decision Support System at the Austin Automation Center.

Based on the Medicare Grouper requirements as defined by the Health Care Financing Administration. Each DRG represents a class of patients who are deemed medically comparable and who require approximately equal amounts of health care resources. 
Facilitates the management of information needed to effectively discharge key operations responsibilities normally assigned to VA engineering organizations.

Provides additional functionality within the Integrated Funds Distribution, Control Point Activity, Accounting and Procurement package.

Provides a mechanism to track and account for procedures and delivered services that are not handled in any other VistA package.

Supports VHA's Fee for Service program, which is care authorized for veterans who are legally eligible and are in need of care that cannot feasibly be provided by a VA facility.

Allows code sheet data to be entered and transmitted electronically from the medical facility service level to the national database.

Provides the medical center the ability to monitor incomplete records, interim summaries, discharge summaries, and both inpatient and outpatient operation reports.

Automates a spectrum of VA financial activities. Provides users the capability to manage budgets, order goods and services, maintain records of available funds, determine the status of a request, compare vendors and items to determine the best purchase, record the receipt of items into the warehouse, and pay vendors.

Automates the mini-banking system that VA provides for patients to manage their personal funds while hospitalized in a VA medical facility.

Contains all the features necessary to create bills for patients and third party insurance carriers.

Captures clinical data resulting from ambulatory care patient encounters.

Automates time and attendance for employees, timekeepers, payroll, and supervisors.

A national-level application replacing the site-based Voluntary Timekeeping System that tracks and manages the hours of service contributed by volunteers and volunteer organizations.

Enhances the ability to associate appropriate data with a single patient identifier. It provides the tools necessary to automatically identify patient records identified as being duplicates.

This package enables M-based VistA applications running on core facility computer systems to exchange health care information with other computer systems. It provides messaging services and a single toolset for M-based VistA applications to create, send, receive, and process HL7 messages.

A portability layer between the underlying operating system and application code. This enables the VistA system to be portable among different computers, operating systems, and M implementations.

Provides Development and Quality Assessment Tools, Capacity Planning Tools, and System Management Utilities.

Provides an efficient way for applications to present a list of items to the user for action.

An electronic messaging system that transmits messages, computer programs, data dictionaries, and data between users and applications located at the same or at different facilities.

This is a suite of applications that provides the ability to uniquely identify a patient and the facilities where that patient receives care. It is a foundation for the CPRS Remote Data Views that allows the clinician to retrieve clinical information from wherever the patient has received care.

A Web-based application that creates a new, on-line environment where veterans, family, and clinicians may come together to optimize veterans' health care.

Provides clinicians quick and easy access to patients' information from any VA medical facility where a patient has received care. 
Electronically requests and receives patient demographics, episodes of care, medications, and diagnostic evaluations from other VA facilities.

Provides functionality so that graphical user interface developers can establish a connection from a client workstation to a VistA server, run remote procedure calls on the VistA M server, and return data to the client workstation.

The majority of VHA clinical data is stored in VA FileMan files and is retrieved and accessed through VA FileMan Application Programmer Interfaces and user interfaces.

Provides a synchronous communication mechanism between M applications and rehosted applications, supporting VHA's ongoing transition to HealtheVet.

Provides a comprehensive range of software dedicated to the support of administrative functions related to patient admission, discharge, transfer, and registration.

Allows the user to design monitors that capture patient data in support of quality management efforts.

Facilitates the processing of an application for health benefits that has been transmitted to the VHA site from the Web-based software.

Provides the capability to request and obtain veteran eligibility data via the VA national telecommunications network.

Extracts patient-reported Means Test data and transmits it to the Health Eligibility Center.

Provides for the maintenance and control of medical records and x-ray films to facilitate availability to a variety of users.

Provides a standardized assessment tool supporting the completion of a comprehensive, accurate, and reproducible patient assessment, and serves as the basis for developing the patient's plan of care.

Replaces the embossed data card as a means of identifying veteran patients entitled to care and service at VA health care facilities.

Facilitates the electronic interchange of veteran information between Veterans Benefits Administration regional offices and VA medical facilities.

Supports VHA policy by compiling data on patient incidents.

Used to express diagnostic clinical problems in easy-to-understand terminology and associate these terms with coding systems such as ICD, DSM, and NANDA.

Supports VHA policy by providing for the identification of events requiring follow-up review.

Tracks and trends compliments and complaints and measures the facility's types of complaints as they relate to the Customer Services Standards and the National Patient Satisfaction Survey.

Designed to manage the data from all employee accidents, create a Report of Accident, and produce the Office of Workers' Compensation Programs Form CA-1 and the Federal Employee's Notice of Occupational Disease and Claim for Compensation Form CA-2.

Automates the entire serials management process in VA Library Services.

Supports the VA Police in their responsibilities of crime prevention, preliminary investigation of crimes, apprehension, legally correct handling of suspected offenders, and the transfer of suspected offenders to appropriate authorities.

In addition to the contact named above, key contributions to this report were made by Barbara Oliver (Assistant Director), Barbara Collier, Neil Doherty, Nancy Glover, Michele Mackin, J. Michael Resser, Amos Tevelow, Eric Trout, and Charles Youman.
The Department of Veterans Affairs (VA), through its Veterans Health Administration (VHA), provides health care for more than 5 million veterans each year. In 2001, VHA began an initiative, HealtheVet, to modernize its current medical information system. GAO's objectives were to determine the status of the modernization, VA's overall plan for completing it, and how VA is providing oversight to ensure the success of the initiative. To conduct this review, GAO analyzed project documentation and interviewed officials responsible for the development and implementation of the new system.

As of June 2008, the HealtheVet initiative has eight major software development projects under way. One project is to further develop the Health Data Repository, a database of standardized health data. This database, which is currently operational, is not yet complete; additional types of health data remain to be standardized and added to the repository. Four application projects are currently in development, and one application project is in the planning stage. Two projects are being pursued to enhance current systems, prepare them for transition to HealtheVet, and develop new applications. From 2001 through fiscal year 2007, VA reported spending almost $600 million for these eight projects. The time frame for completing the projects and the HealtheVet system as a whole was originally 2012, but the projected completion date has now been delayed until 2018.

The department has a high-level strategy for HealtheVet, in which the remainder of the initiative is to be completed incrementally in phases (referred to as "blocks"), but it does not have a comprehensive project management plan to guide the remaining work. This work is considerable: the department plans to replace the 104 applications in its current medical information system with 67 modernized applications (of which 5 are currently in development, as described), 3 databases, and 10 common services (general software functions, such as messaging and security, on which application software can call as needed). This scope increases the importance of developing a comprehensive project management plan that includes, among other things, an integrated schedule that considers all dependencies and defines subtasks to ensure that deadlines are realistic. Another important component of such planning is determining the resources necessary to accomplish tasks throughout the life cycle of the initiative. In April 2008, VA provided an $11 billion cost estimate for completion of HealtheVet; however, it has not yet independently validated this estimate. Having a validated cost estimate is essential to improve the accuracy of cost, schedule, and performance management. Without an integrated plan that includes independently validated cost estimates, VA increases the risk that HealtheVet could incur cost increases and continued schedule slippages and not achieve its intended outcomes.

Various levels and types of oversight are currently being provided for the HealtheVet initiative by business owners, developers, and departmental information technology governance boards. However, the business owners have not yet implemented a complete governance structure, several key leadership positions within the developers' organization are either vacant or filled with acting personnel, and the governance boards have not yet scheduled critical reviews of HealtheVet projects. 
Until all elements of governance and oversight are in place, the risk to the success of the HealtheVet initiative is increased.
GPO was established in 1861 to (1) assist Congress and federal agencies in the production and replication of information products and services and (2) provide the public with government information products and services. GPO provides printing services to all three branches of government—either by producing work in-house or by procuring it from commercial printers. Information dissemination is accomplished through GPO's Superintendent of Documents, who is to provide public access to government information through (1) the sale of publications; (2) distribution of publications to depository and international exchange libraries, to those recipients designated by law, and for agencies on a reimbursable basis; and (3) compilation of catalogs and indexes containing complete and authoritative descriptions of government publications. The public printing and documents chapters of title 44 of the U.S. Code require GPO to fulfill the printing needs of the federal government and distribute government publications to the public.

GPO's activities are financed through a revolving fund, which is reimbursed by payments from client agencies and sales of government publications and by transfers from the Congressional Printing and Binding Appropriation and the Salaries and Expenses Appropriation of the Superintendent of Documents. These annual appropriations are to reimburse GPO for costs incurred while performing congressional work and fulfilling statutory requirements associated with the distribution of government publications. Reimbursements from these appropriations to the revolving fund are recorded as revenues. The sales program operates within the revolving fund and is expected to recover its costs from its sales. According to GPO, the sales program does not receive any direct appropriation.

GPO is headed by the Public Printer, who is nominated by the President and confirmed by the Senate. The sales program is led by the Superintendent of Documents, who reports directly to the Public Printer. The sales program provides the public the opportunity to purchase government publications and subscriptions at GPO's 24 bookstores across the country; through telephone, fax, and mail orders; and through consigned sales agents at other agencies. Within the Superintendent of Documents' staff is the Documents Sales Service, which includes staff in the Sales Management Division and Documents Control Branch. Other key players in the sales program include Customer Services, whose printing specialists serve as liaisons with the issuing agencies until the publications are produced, and the Office of the Comptroller, which keeps the financial records, including inventory. For more detail on GPO's organizational makeup, see appendix II.

Once a publication is printed and enters the sales program, the Documents Control Branch, within the Sales Management Division, maintains inventory control, determines its continued salability, and makes reprinting and disposal decisions. Working with the issuing agency, Sales Management Division staff establish a life cycle for each publication that represents the period during which sales demand is expected. According to GPO, the average life cycle for a publication in its inventory is now about 12 months. As of September 1996, the sales program carried 13,268 publications in its inventory, valued at about $12.8 million based on printing and binding costs. The sales program did not report a loss between fiscal years 1981 and 1995. 
For fiscal year 1996, however, the sales program's expenses of $79.4 million exceeded revenues of $70.5 million, for a net loss of $8.9 million. As of June 1997, the sales program was showing a loss of about $537,000 for fiscal year 1997.

In May 1996, financial projections indicated that the sales program expected a substantial loss for fiscal year 1996, the first such loss in 15 years. These projections were based on information indicating that revenue was down and expenses were up for several reasons, including declining sales, the effect of the government shutdown at the beginning of fiscal year 1996, competition from other government sales programs, increasing use of free electronic publications, and a substantial increase in charges to surplus publications expense (i.e., the printing and binding costs of publications in GPO's inventory that are expected to be unsalable).

As a result of the projected loss for fiscal year 1996, the Superintendent of Documents tasked a management team with developing an action plan to increase revenue and reduce expenses, with the objective of returning the sales program to full cost recovery in fiscal year 1997. The plan, dated September 1996, originally contained 44 individual projects and was later amended to include 2 more. One original project was a special effort, over and above GPO's routine process for removing excess publications, to move aggressively to reduce the inventory of surplus publications before the new fiscal year began on October 1, 1996. Such a reduction would increase the surplus publications expense for fiscal year 1996 but was expected to decrease those expenses in fiscal year 1997 and subsequent years. In other words, the sales program's losses for fiscal year 1996 would be greater, but GPO officials hoped that this would result in the program breaking even or better for fiscal year 1997 and beyond. The inventory reduction began in early September 1996, even before the action plan was issued, with a deadline for completion of September 30, 1996.

The September 1996 inventory reduction involved 2,127 publications that had a printing and binding cost of about $3 million, which was about one-third of the surplus publications expense GPO charged for publications it excessed and disposed of in fiscal year 1996. (See appendix III for examples of the publications disposed of in September 1996.) GPO's records and our discussions with GPO warehouse and contractor personnel indicate that the publications inventory excessed during the reduction was sold (for less than 3 cents per pound) to a scrap contractor, who was required by contractual terms to shred and recycle it rather than resell the individual publications.

The Superintendent of Documents had issued policies and procedures for determining excess, obsolete, damaged, and destroyed information products and for managing inventory. Superintendent of Documents Policy No. 38, dated May 28, 1984, provides that publication inventories are to be reviewed quarterly to determine the quantities that are to be retained and those that are excess. This policy applies to inventories that are managed by headquarters staff. A separate procedure (Superintendent of Documents Policy No. 38.6) applies to inventories in GPO's bookstores. Under the existing policies and procedures, inventory management specialists (IMS) in the Documents Control Branch are to review quarterly the amount of inventory for the publications they manage. 
This review is conducted to identify whether the inventory should be reduced, based on the sales history and projected life cycle of the publication. As part of the Superintendent of Documents' existing policy, which was issued by the current Public Printer when he was the Superintendent of Documents, once an IMS determines the number (if any) of copies of a publication that are excess, he or she is to call the issuing agency to determine whether it wants the extra copies. As part of the inventory review process for publications of high dollar value or with a large number of copies on hand, the IMS then is to complete Form 3880, which includes such information as the estimated printing and binding cost of the publication, anticipated sales, total copies sold, and whether the issuing agency wants any of the excess copies. (The form does not include the holding cost of retaining the copies in inventory.) This completed form is to be sent to a Documents Survey Board consisting of the Director of Documents Sales Service, the Chief of the Sales Management Division, and the Chief of the Documents Control Branch. If the Survey Board approves the form, the IMS then must prepare a notice to be sent to GPO's warehouse in Laurel, Maryland. At the warehouse, the excessed stock (i.e., stock not wanted by the issuing agency) is to be identified and moved to a separate area for periodic pickup by a contractor, who is required by the contract to shred the documents and have them recycled. The contractor is not permitted to resell the documents other than for recycling.

During the major reduction in September 1996, the Superintendent of Documents' staff followed his orders and disregarded policy and normal procedures in order to reduce the inventory of excess publications before October 1, 1996. When the Superintendent of Documents realized that the sales program expected a substantial loss, he told his staff in a June 1996 memorandum that, while developing an action plan to increase revenue and reduce expenses, they should: “Ignore politics and external influences. Disregard current policies and practices that inhibit creativity and impede change.” According to the Superintendent of Documents, his instruction to ignore politics and external influences referred to frequent requests from issuing agencies to have more copies of publications in the sales inventory than GPO believes can be sold. The Superintendent further said that he subsequently orally instructed his staff to begin the inventory reduction before the action plan was approved and told them to disregard policies that would interfere with the removal of as much excess inventory as possible by September 30, 1996.

In order to maximize charges to surplus publications expense in fiscal year 1996, the Superintendent of Documents and GPO's Comptroller advised IMS staff to focus their attention on excessing publications that had high printing and binding costs, large quantities in inventory, and low sales volume. Also, during this major reduction, IMS decisions on what publications to excess did not receive the normal management review, and IMS staff did not call the issuing agencies to see whether they wanted the excess copies of their publications. Superintendent of Documents staff told us that they disregarded policy because they would not have had enough time to contact the issuing agencies and receive answers by September 30. 
According to these staff and Superintendent of Documents management officials, it would have been very difficult to contact all of the agencies involved with the 2,127 publications being excessed and to wait for their various responses concerning whether they wanted the excess copies. According to GPO, this response period usually takes about 4 weeks, and GPO officials did not believe that the agencies would be able to respond appropriately if given only a few days. According to Documents Control Branch staff, this disregard of policy resulted from the Superintendent's June 1996 memorandum and his oral instructions to his staff regarding the formulation of the action plan to increase revenue and reduce expenses.

The IMS responsible for handling congressional publications and the person who was his supervisor in September 1996 acknowledged that they discussed whether they should follow GPO's policy to offer excess publications to issuing agencies. They said that they felt they had the authority to dispose of the publications without notifying the issuing agencies because of time constraints and the Superintendent of Documents' instructions to disregard policies. They said that, given those instructions, they saw no need to tell management officials above them that they were disregarding this policy. The IMS responsible for handling congressional publications told us that he made the decision on which publications to excess based primarily on the criteria he was given—high printing and binding costs, large quantity in inventory, and low projected sales. According to the IMS, his decisions on which publications to dispose of in September 1996 were not reviewed or approved by the Documents Survey Board, as generally would be required. The publications selected for excessing by the IMS were approved by his supervisor, but no one else's approval was noted on the inventory records.

The Superintendent of Documents said that he was responsible for policies not being followed and for the inventory reductions that took place at the end of fiscal year 1996. The Superintendent of Documents said that he wanted to dispose of the excess inventory by September 30, 1996, in order to take the losses in fiscal year 1996. He also said that he wanted to identify and dispose of as much excess inventory as possible in fiscal year 1996 rather than in later years, when it otherwise would have been identified, disposed of, and charged to expense.

According to the Superintendent of Documents, he instructed staff to dispose of the excess inventory by September 30, 1996, because he mistakenly believed that the inventory had to be physically removed from GPO property before surplus publications expense could be charged. However, the inventory identified as excess by the IMS staff did not have to be disposed of by September 30, 1996, for the surplus publications expense to be charged to fiscal year 1996. Neither generally accepted accounting principles nor GPO's own accounting procedures require physically removing excessed publications from GPO property before surplus publications expense can be charged. Surplus publications expense can be charged whenever GPO staff determine that inventory is obsolete or unsalable.
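To make the accounting point concrete, the following is a minimal sketch (our illustration, not GPO's actual system; all names and figures are hypothetical) of how an inventory record could charge surplus publications expense at the time stock is determined to be excess, with physical disposal treated as a separate, later event:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class PublicationStock:
        title: str
        copies_on_hand: int
        unit_printing_cost: float   # printing and binding cost per copy
        excess_copies: int = 0
        expense_charged: float = 0.0
        disposed: bool = False

    def record_excess(stock: PublicationStock, copies: int, determined_on: date) -> float:
        """Charge surplus publications expense in the fiscal year in which the
        copies are determined to be excess, not when they are physically
        removed from the warehouse."""
        stock.excess_copies += copies
        charge = copies * stock.unit_printing_cost
        stock.expense_charged += charge
        print(f"{determined_on}: ${charge:,.2f} charged to surplus publications expense")
        return charge

    def dispose_of_excess(stock: PublicationStock) -> None:
        # Physical disposal (e.g., pickup by the scrap contractor) is a
        # separate event and does not trigger the expense entry.
        stock.disposed = True

    # Hypothetical example: copies identified as excess on September 6, 1996,
    # are expensed in fiscal year 1996 even if disposal occurs months later.
    volume = PublicationStock("Example history volume", copies_on_hand=1000,
                              unit_printing_cost=25.0)
    record_excess(volume, copies=300, determined_on=date(1996, 9, 6))
    dispose_of_excess(volume)   # could occur in fiscal year 1997

Under this treatment, identifying copies as excess by September 30 would have been sufficient for the fiscal year 1996 charge; physical removal could have followed later.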
In fact, GPO had another major inventory reduction in fiscal year 1981, and at that time, according to GPO's Comptroller, certain publications had been identified as excess but had not yet been disposed of when they were shown as an expense in GPO's financial records. Both GPO's Comptroller and the Superintendent of Documents agree that the latter misunderstood how publications expenses were handled in GPO's accounting system at the time of the major inventory reduction in 1996. They both said that, at that time, GPO had no written guidance or instructions stating that excess inventory does not have to be physically removed from GPO before surplus publications expense can be charged.

In July 1997, the Public Printer told us that, while he was notified that a major inventory reduction would be taking place in 1996, he was not made aware of the details of the reduction. He said that he did not know that the policy to offer excess publications to the issuing agency, which he had instituted when he was Superintendent of Documents, was not followed in the September 1996 reduction.

As mentioned earlier, according to the IMS responsible for handling congressional publications, the decisions concerning which publications to excess were based primarily on the criteria of high printing and binding costs, large quantity in inventory, and low projected sales. The IMS said that the Senate history volumes met these criteria for disposal. According to the IMS, he decided how many copies of the Senate history to retain based on an estimate of future sales, using a 10-year estimated life cycle for each of the four volumes. According to GPO's records, the 10-year life cycle was developed when the volumes were first published, as a result of discussions involving the Senate Historian, House Historian, Joint Committee on Printing staff, and staff from GPO's Documents Sales Service group.

GPO records show that the excessed inventory included 3,258 copies of the Senate history, comprising some of each of the four volumes written by Senator Byrd. The 3,258 copies were about 10 percent of the total number originally printed of the four volumes (32,386, at a total cost of $1,572,291). The printing and binding cost of the 3,258 excessed copies was about $83,000. The scrap value received for the shredded copies was about $600. See table 1 for more detail.

According to GPO records, GPO retained 1,134 copies of the Senate history, the quantity its inventory management staff estimated was needed to meet sales demand over the life cycle that GPO had initially agreed upon with representatives from the Senate Historian's Office and others. This life cycle was to be 10 years from the dates the volumes were published; their publication dates were 1988 (volume I), 1991 (volume II), 1994 (volume III), and 1993 (volume IV). Table 2 contains a breakdown of the disposition of the Senate history volumes, including the number on hand as of July 1997.

A representative from GPO's Congressional Printing Management Division in Customer Services told us that, in June 1996, he told the IMS responsible for handling congressional publications that the Senate Historian's Office wanted any excess Senate history volumes that GPO might have.
The responsible IMS said he knew that in the past the Senate Historian's Office had inquired about the status of the Senate history volumes on several occasions and that, while he recalled those previous inquiries, he did not recall being told in June 1996 that the Senate Historian's Office wanted any excess copies. He said that he proceeded with the inventory reduction based on the Superintendent of Documents' instructions to disregard policies and ignore politics. Inventory records showed that he identified the copies as excess on September 6, 1996, and September 9, 1996. Warehouse records show that the copies were removed from the warehouse shelves for pickup by the scrap contractor on September 10, 1996, and September 12, 1996. All of the Superintendent of Documents staff we interviewed who were involved in the September 1996 inventory reduction said that no specific discussion of the Senate history volumes occurred during that reduction. The Public Printer said he did not know at the time that the Senate history volumes were among those being excessed and that, if he had known, those books would not have been disposed of.

GPO has taken actions, or has actions in process, aimed at preventing a recurrence of situations in which excess publications are disposed of without regard to established policies and procedures. While GPO's initial actions could have helped prevent a recurrence, they did not appear to address all of the underlying causes of the problems associated with the September 1996 major inventory reduction. During the course of our review, we identified and brought to GPO's attention several additional actions that we believed would address those causes. As discussed below, GPO officials agreed and took additional steps to prevent a recurrence.

In May 6, 1997, and July 11, 1997, letters to Senator Byrd, the Public Printer said that GPO had made an error in disposing of the Senate history volumes and that all four volumes, because of their historical significance, would remain in print and available through the sales program indefinitely. According to the Superintendent of Documents, this action was carried out through oral instructions to his staff in July 1997. In response to these oral instructions, the IMS responsible for handling congressional publications wrote a note saying not to dispose of these volumes without top management's approval and attached the note to the inventory control cards he maintained for these volumes. At our recommendation, the Superintendent of Documents put his oral instructions in writing in August 1997.

In response to our inquiries, both the Public Printer and the Superintendent said that some publications, such as the Constitution and the Senate history volumes, should be kept indefinitely because of their historical significance. The Superintendent said that GPO did not have a systematic process for identifying or designating such publications but that, in response to our recommendation, GPO would develop a formal system for identifying publications that should remain in inventory indefinitely. In addition, he said that GPO was already developing a new inventory management system that would allow publications that are to be held indefinitely to be designated as such once they have been identified. The Superintendent of Documents also acknowledged that his lack of awareness about the planned disposal of the Senate history volumes contributed to their being excessed.
On July 22, 1997, the Superintendent of Documents sent a memorandum to his staff stating that no further exceptions should be made to the current policy on excess, obsolete, damaged, and destroyed information products and that "excess stocks will be offered to the issuing agency." On July 23, 1997, the Superintendent of Documents asked his staff to revise his formal policy document dated May 28, 1984, to address the problems that arose in connection with the September 1996 inventory reduction. According to the Superintendent of Documents, this revised policy will provide that excessed inventory should be charged to surplus publications expense when it is determined to be excess. The excessed inventory is then to be held in the warehouse for a reasonable period while issuing agencies are contacted to see if they want the excess publications. Under the Superintendent's revised procedures, the policy of offering issuing agencies excess copies before their disposal cannot be waived.

We pointed out that we saw no written statement in GPO's policies, procedures, or guidance that specifically said that excessed inventory does not have to be physically removed from GPO's warehouse before it can be charged to surplus publications expense. Both the Superintendent of Documents and the Comptroller agreed that the lack of such a written statement may have contributed to the misunderstanding that took place in 1996. In August 1997, GPO's Comptroller prepared such a statement.

Another action GPO has had in process for some time that could also help prevent a recurrence of the problems of the September 1996 reduction is the development of a new Integrated Processing System. The Superintendent of Documents expects this new system, which GPO plans to implement in October 1997, to provide his office with more flexibility in tracking inventory and better information for making decisions to excess publications. According to the Superintendent of Documents, the new system will (1) allow GPO to designate inventory as excess without physically relocating it in the warehouse and (2) include a comment box where the IMS can indicate that a publication is not to be excessed or make other appropriate notations about its disposition. Until the new system is implemented, notations concerning holding copies indefinitely must be made on records that are maintained manually.

Finally, another dilemma GPO has faced in disposing of excess inventory is its lack of authority to donate excess publications to schools or similar institutions. Under existing law and policy, GPO's current options for disposing of excess publications are to offer them to issuing agencies at no cost or to dispose of them as scrap. GPO is also precluded by statute and regulation from offering publications to the public at discount prices, except to those who buy 100 or more copies of the same publication or to book dealers who agree to resell the books at GPO's prices; in either case, GPO can offer a maximum discount of only 25 percent. To address this problem, in May 1997, as part of its recommended revision of title 44 of the U.S. Code, GPO forwarded a proposal to the Joint Committee on Printing that would authorize the donation of excess publications to schools or similar institutions if the copies are not wanted by the issuing agency.
In a May 6, 1997, letter to Senator Byrd, the Public Printer said that, on the basis of a study GPO had done, it was more cost-effective to maintain an adequate inventory of sales publications based on their projected life cycle and to reprint if necessary than to hold excess copies of publications in inventory. According to the Superintendent of Documents, who drafted the May 6 letter for the Public Printer, the study cited in the letter referred to data supplied by GPO's Comptroller in 1996. These data showed that, overall, GPO's inventory of excess publications was growing and was contributing to increasing charges to surplus publications expense. These increased charges were, in turn, contributing to a worsening financial situation for the sales program. To help remedy this problem, according to the Superintendent of Documents, the Comptroller recommended that the Superintendent identify as much excess inventory as possible in fiscal year 1996 to improve the sales program's long-term financial situation.

According to the Superintendent of Documents, the statement in the May 6 letter pertained to the typical publication, which he said has a printing and binding cost of about $2 per copy; it did not specifically pertain to the Senate history volumes, which had a printing and binding cost of $19 to $35 per copy. In this regard, we noted that the printing and binding cost of the 3,258 copies of the Senate history volumes disposed of was about $83,000 and that GPO's estimated annual storage costs attributable to these copies were about $2,500. These figures can be compared to GPO's estimated reprinting cost of about $210,000 should GPO reprint the copies disposed of, which it has agreed to do if necessary. At roughly $2,500 per year, the copies could have been stored for more than 80 years before the cumulative holding cost matched the estimated reprinting cost.

During our review of GPO's inventory management records, we also noted that Form 3880, which IMS staff use to make recommendations and supervisory personnel use to review actions on obsolete or excess inventory, does not provide for inclusion of data on storage or holding costs for publications. This omission is inconsistent with a memorandum, dated January 4, 1985, from the Chief, Sales Management Division, to Documents Control Branch staff, which directed that reasonable life cycles should be consistent with economic analysis of the following factors: expected trend, reprint costs, expected revision date, and holding costs. The memorandum also stated that, when reviewing records to identify excess or consider extension of the life cycle, the following factors should be considered: continued marketability, projected revenue, and estimated holding costs. We discussed this inconsistency with the Superintendent of Documents in August 1997. He said storage or holding costs are usually not significant, but he recognized that they should be considered in making decisions on excess inventory. He agreed to modify Form 3880 to incorporate consideration of such costs.

To achieve its financial objective, GPO did not have to disregard policy and procedures for notifying the issuing agencies of excess publications. Because of the erroneous belief of the Superintendent of Documents, who heads GPO's sales program, that GPO had to physically remove excess publications from the GPO warehouse by September 30, 1996, in order to record them as an expense for fiscal year 1996, and because of his express instruction to disregard policies and procedures, GPO staff disposed of about 2,100 different publications without first contacting the issuing agencies of those publications.
As a result, 3,258 copies of the Senate history were destroyed, even though the Senate Historian's Office had told a GPO representative that it wanted any excess copies. GPO has taken or plans to take actions that, if effectively implemented, should prevent this situation from recurring. GPO agreed with the various recommendations we made during the course of our work and has either implemented the corrective actions or is in the process of doing so. Thus, we are making no further recommendations.

On September 5, 1997, we provided the Public Printer with a draft of this report for comment. We received his written comments, included in their entirety in appendix IV, on September 10, 1997. The Public Printer said that the report fairly represents the events as they occurred during the September 1996 inventory reduction. He also said that actions have been and are being taken to ensure that no sales publications will be disposed of in the future without strict adherence to applicable GPO policies and procedures.

We are sending copies of this report to the Public Printer of the Government Printing Office and the Chairman and the Vice Chairman of the Joint Committee on Printing. We will make copies available to others upon request. Major contributors to this report are listed in appendix V. If you have any questions concerning this report, please call me at (202) 512-4232.

As agreed with your offices, our objectives were to determine the facts surrounding the September 1996 inventory reduction; whether it followed existing policies and procedures; and the fate of the 3,258 copies of The Senate 1789-1989, a four-volume set written by Senator Byrd, that were destroyed as part of that reduction.

In order to obtain information on GPO's sales program and the September 1996 inventory reduction, we reviewed pertinent documentation, such as GPO's inventory control records, policies and procedures, memoranda, and financial records and reports. We interviewed the Public Printer; the Superintendent of Documents and his staff who were involved in the reduction; the Comptroller and his staff, who are responsible for the financial records; and staff in the Congressional Printing and Management Division, who serve as the liaison with the Senate Historian's Office for Senate publications, including Senator Byrd's books. We visited GPO's Laurel, Maryland, warehouse, where excessed publications are disposed of; reviewed its inventory disposition records; and interviewed a representative of GPO's contractor that had picked up GPO's excessed publications from the Laurel warehouse in September 1996. In addition, we reviewed GPO's authority to donate surplus books. We coordinated our review with GPO's Office of Inspector General.

The representative from GPO's contractor told us that his company did not maintain any records that would specifically show that the 3,258 Senate history copies were shredded. Therefore, we had to rely on GPO's records and interviews with GPO staff and the contractor's representative to determine what happened to the 3,258 Senate history copies that were excessed. Further, we did not verify GPO's computerized inventory or financial records or do actual counts of the remaining stock inventory of the Senate history volumes at the Laurel warehouse.
In addition to the Senate history volumes, we selected the following examples of excessed publications to provide a mix of publications from both the legislative and executive branches and, in some cases, to reflect publications having high dollar values.

Major contributors to this report:

John S. Baldwin, Sr., Assistant Director
Michael W. Jarvis, Evaluator-in-Charge
Kiki Theodoropoulos, Senior Evaluator (Communications Analyst)
Victor B. Goddard, Senior Attorney
Pursuant to a congressional request, GAO reviewed the Government Printing Office's (GPO) procedures for managing its inventory of excess publications, particularly its management of a major inventory reduction that took place in September 1996, focusing on: (1) whether GPO followed existing policies and procedures; and (2) how 3,258 copies of The Senate 1789-1989, a four-volume set written by Senator Byrd, were destroyed as part of that reduction.

GAO noted that: (1) when for the first time in 15 years a potential financial loss was identified in GPO's sales program in June 1996, the Superintendent of Documents, who heads the sales program, initiated several actions intended to improve the program's long-term financial condition; (2) the Superintendent of Documents said he wanted to dispose of the excess inventory by September 30, 1996, to take the losses in fiscal year (FY) 1996 rather than in later years, when it otherwise would have been identified, disposed of, and charged to expense; (3) the Superintendent of Documents also said he had erroneously believed that it was necessary to physically remove excess publications from inventory storage by September 30, 1996, in order to record them as an expense in the financial records for FY 1996; (4) although the Superintendent of Documents had policies and procedures in place to prevent the disposal of publications that the issuing agency still wanted, in June 1996 he instructed his staff to disregard those policies that would interfere with his goal of disposing of as much excess publications inventory as possible by September 30, 1996; (5) acting under the Superintendent's overall instructions, GPO sales program staff disregarded a policy that has existed since at least 1984, which provides that, before disposing of any excess copies of publications, GPO should offer them to the issuing agencies; (6) in explaining its inventory reduction to Senator Byrd, GPO said that it had found that it was generally more cost-effective to dispose of excess inventory and reprint if necessary than to hold it in storage indefinitely; (7) however, GPO officials said that they knew that the reprint costs would substantially exceed the holding costs for these copies, given their relatively high printing and binding costs; (8) in July 1997, after Senator Byrd inquired about the major inventory reduction, the Superintendent of Documents orally instructed his staff to retain the remaining volumes of the Senate history and, at GAO's recommendation, put this instruction in writing in August 1997; and (9) the Superintendent further said that GPO was developing a new integrated processing system that would help designate publications that should not be excessed and, at GAO's recommendation, agreed to develop a systematic process for identifying publications to be held indefinitely for valid reasons.
DOD sites that require cleanup are often contaminated by many different types of hazardous materials, have contamination in more than one medium (e.g., soil, surface water, or groundwater), and may encompass several acres or even square miles. Groundwater stored in subsurface formations called aquifers can become contaminated in a number of ways. For example, contamination can occur when a liquid hazardous substance soaks down through the soil. Often, groundwater contamination is difficult to address because of the complexity of groundwater systems. The subsurface environment can be composed of numerous layers of diverse types of material—such as sand, gravel, clay, and solid rock—and fractured layers through which groundwater flows. These variations in the subsurface often affect how groundwater flows through a contaminated site and can influence how contaminants spread and accumulate in the subsurface. Chemical properties of the contaminant also influence its distribution in the subsurface. Typically, contaminated sites consist of a source zone, where the bulk of the contaminant is concentrated, and a plume of contamination that develops beyond the source as groundwater flows through the contaminated site. See figure 1 for an illustration of a site with contaminated groundwater.

According to DOD, the Air Force has identified more than 2,500 sites on its active and closing installations with contaminated groundwater; the Navy has identified more than 2,000 sites; the Army has identified about 800 sites; and the Defense Logistics Agency has identified 16 sites. In addition, DOD has identified more than 500 contaminated groundwater sites on formerly used defense sites, for which the U.S. Army Corps of Engineers (Corps) is responsible for cleanup. Contamination on DOD facilities can pose a threat to military personnel, the public, and the sustainability of DOD's training and testing ranges. DOD first initiated its environmental restoration efforts in 1975. Over the last 10 years, DOD has invested approximately $20 billion in the environmental restoration of contaminated sites, including remediation of contaminated groundwater on and around active, closing, and formerly used defense facilities.

DOD's policies for administering cleanup programs are outlined in its guidance for managing its environmental restoration program and generally follow the process established under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) for identifying, investigating, and remediating sites contaminated by hazardous materials. According to DOD's guidance, department officials are required to involve EPA, relevant state and local government officials, and the public, among others, at specified points in the cleanup process. See figure 2 for more information on the phases of DOD's environmental cleanup process.

Once DOD identifies potential contamination on one of its facilities, it initiates a preliminary assessment to gather data on the contaminated site. If DOD finds evidence that the site needs remediation, it consults with EPA to determine whether the site qualifies for inclusion on the National Priorities List. If EPA places a DOD facility on the National Priorities List, CERCLA requires DOD to begin the next phase of cleanup within 6 months. During this next phase, called a remedial investigation/feasibility study, DOD characterizes the nature and extent of contamination and evaluates the technical options available for cleaning up the site.
DOD also pursues a remedial investigation/feasibility study for sites that do not qualify for the National Priorities List but require decontamination. Data collected during the remedial investigation influence DOD's development of cleanup goals and evaluation of remediation alternatives. During the feasibility study, often conducted concurrently with the remedial investigation, DOD identifies applicable regulations and determines cleanup standards that will govern its cleanup efforts. CERCLA requires that sites covered by the statute be cleaned up to the extent necessary to protect both human health and the environment. In addition, cleanups must comply with requirements under federal environmental laws that are legally "applicable" or "relevant and appropriate," as well as with state environmental requirements that are more stringent than the federal standards. Furthermore, CERCLA cleanups must at least attain goals and criteria established under the Safe Drinking Water Act and the Clean Water Act, where such standards are relevant and appropriate under the circumstances.

Once cleanup standards have been established, DOD considers the merits of various actions to attain cleanup goals. Cleanup actions fall into two broad categories: removal actions and remedial actions. Removal actions are usually short term and are designed to stabilize or clean up a hazardous site that poses an immediate threat to human health or the environment. Remedial actions, which are generally longer term and usually costlier, are aimed at implementing a permanent remedy. Such a remedy may, for example, include the use of groundwater remediation technologies. Also during the feasibility study, DOD identifies and screens various groundwater remediation technologies based on their effectiveness, feasibility, and cost.

At the conclusion of the remedial investigation/feasibility study, DOD selects a final plan of action—called a remedial action—and develops a Record of Decision that documents the cleanup objectives, the technologies to be used during cleanup, and the analysis that led to the selection. If EPA and DOD fail to reach mutual agreement on the selection of the remedial action, then EPA selects the remedy. If the selected cleanup leaves any hazardous substances, pollutants, or contaminants at the site, DOD must review the action every 5 years after the initiation of the cleanup. According to DOD policy, this review may include determining whether an alternative technology or approach is more appropriate than the one in place. DOD continues remediation efforts at a site until the cleanup objectives stated in the Record of Decision are met, a milestone referred to as "response complete." Even if DOD meets the cleanup objectives for a site, in some cases the site may require long-term management and monitoring to ensure that it does not become contaminated from residual sources of pollution.

DOD has implemented or field-tested all 15 types of generally accepted technologies currently available to remediate groundwater. These 15 technologies include six ex-situ and nine in-situ technologies, each of which can be used to treat a variety of contaminants. All of these groundwater remediation technologies rely on a variety of biological, chemical, or physical processes to treat or extract the contaminant. DOD guidance directs department officials to consider cost-effectiveness and performance when selecting technologies for cleanup.
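Before turning to the individual technologies, the cleanup sequence described above can be summarized schematically. The sketch below is a simplified rendering of the CERCLA-based phases, not an official DOD data model; phase names are abbreviated, and, as noted above, the remedial investigation and feasibility study often run concurrently:

    from enum import Enum
    from typing import Optional

    class CleanupPhase(Enum):
        # Simplified sequence of DOD's CERCLA-based cleanup process
        PRELIMINARY_ASSESSMENT = 1   # gather data on the contaminated site
        REMEDIAL_INVESTIGATION = 2   # characterize nature and extent of contamination
        FEASIBILITY_STUDY = 3        # screen technologies; set cleanup standards
        RECORD_OF_DECISION = 4       # document objectives and the selected remedy
        REMEDIAL_ACTION = 5          # implement the remedy (5-year reviews apply
                                     # if hazardous substances remain at the site)
        RESPONSE_COMPLETE = 6        # cleanup objectives in the Record of Decision met
        LONG_TERM_MONITORING = 7     # may continue if residual contamination remains

    def next_phase(current: CleanupPhase) -> Optional[CleanupPhase]:
        """Advance to the next phase, or return None when the sequence ends."""
        phases = list(CleanupPhase)
        i = phases.index(current)
        return phases[i + 1] if i + 1 < len(phases) else None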
We identified a range of ex-situ and in-situ technologies that DOD can employ to clean up a contaminated groundwater site. Ex-situ technologies rely on a pump-and-treat system to bring the contaminated water above ground so that it can be treated and the contaminants removed. Some ex-situ technologies destroy the contaminant, while others remove the contaminant from the groundwater, and the removed contaminant is subsequently disposed of in an approved manner. The decontaminated water can be discharged to surface water, used as part of a public drinking water supply, injected back into the ground, or discharged to a municipal sewage plant. We identified six categories of ex-situ technologies:

- Advanced oxidation processes often use ultraviolet radiation with oxidizing agents—such as ozone or hydrogen peroxide—to destroy contaminants in water pumped into an above-ground treatment tank.

- Air stripping separates volatile contaminants from water by exposing the water to large volumes of air, thus forcing the contaminants to undergo a physical transformation from liquid to vapor (volatilization). There is no destruction of the contaminant; therefore, the contaminant must be removed and disposed of properly.

- Bioreactors are above-ground biochemical-processing systems designed to degrade contaminants in water using various microorganisms, an approach similar to that used at a conventional wastewater treatment facility. Contaminated groundwater flows into a tank or basin, where it interacts with microorganisms that degrade the contaminant.

- Constructed wetlands are artificially built wetland ecosystems that contain organic materials, plants, microbial fauna, and algae that filter or degrade contaminants from the water that is pumped into the wetland.

- Ion exchange involves passing contaminated water through a bed of resin media or a membrane that exchanges ions in the contaminants, thus neutralizing them into nonhazardous substances.

- Adsorption (mass transfer) involves circulating contaminated water through an above-ground treatment vessel containing a sorbent material—such as activated carbon—that removes the contaminant from the water.

(See app. II for more information on key characteristics of these ex-situ technologies.)

Similarly, we identified nine in-situ technologies that can be used to remediate contaminated groundwater. In contrast to ex-situ technologies, in-situ technologies treat contaminants within the subsurface. Some in-situ technologies—such as bioremediation and chemical treatment—destroy the contaminant within the subsurface by altering the contaminant's chemical structure and converting the toxic chemical to a nontoxic form (e.g., benzene to carbon dioxide). Other in-situ technologies—such as multiphase extraction and enhanced recovery using surfactant flushing—facilitate the removal of the contaminant from the subsurface for treatment above ground. Still other technologies—such as air sparging—combine in-situ treatments with extraction techniques:

- Air sparging introduces air or other gases into the subsurface to remove the contamination from the groundwater through volatilization (converting a solid or liquid into a gas or vapor that may be treated at the surface) and, in some configurations, may also introduce oxygen into the contaminated area to stimulate in-situ biological breakdown (i.e., bioremediation) or ozone to achieve chemical oxidation of the contaminant.
- Bioremediation relies on microorganisms living in the subsurface to biologically degrade groundwater contaminants through a process called biodegradation. Bioremediation may be engineered and accomplished in two general ways: (1) stimulating native microorganisms by adding nutrients, oxygen, or other electron acceptors (a process called biostimulation) or (2) providing supplementary pregrown microorganisms to the contaminated site to augment naturally occurring microorganisms (a process called bioaugmentation).

- Enhanced recovery using surfactant flushing involves the injection of active agents known as surfactants into contaminated aquifers to flush the contaminated groundwater toward a pump, which removes the contaminated water and surfactant solution to the surface for treatment and disposal of the contaminants.

- Chemical treatments inject various substances into the groundwater that can chemically oxidize or reduce contaminants into less-toxic or nonhazardous materials.

- Monitored natural attenuation involves using wells and monitoring equipment in and around a contaminated site to track the natural physical, chemical, and biological degradation of the contaminants. Although not necessarily considered a treatment technology, this approach is often used to monitor contaminant concentrations to ensure that human health and the environment are not threatened.

- Multiphase extraction uses a series of pumps and vacuums to simultaneously remove from the subsurface combinations of contaminated groundwater, free product (i.e., liquid contaminants floating on top of groundwater), and hazardous vapors. This technology can be used to remove contaminants from above and below the groundwater table, thereby exposing more of the subsurface for treatment.

- Permeable reactive barriers are vertical walls or trenches built into the subsurface that contain a reactive material to intercept and remediate a contaminant plume as the groundwater passes through the barrier.

- Phytoremediation relies on the natural hydraulic and metabolic processes of selected vegetation to remove, contain, or reduce the toxicity of environmental contaminants in the groundwater.

- Thermal treatments involve either pumping steam into the aquifer or heating groundwater to vaporize or destroy groundwater contaminants. Vaporized contaminants are often removed for treatment using a vacuum extraction system.

(See app. II for more information on key characteristics of these in-situ technologies.)

Although most in-situ technologies have the advantage of treating a contaminant in place, these technologies may afford less certainty about the extent and uniformity of treatment in contaminated areas when compared with some ex-situ technologies. For example, enhanced recovery using surfactant flushing has not been used extensively, and limited data exist on its remediation effectiveness, whereas air stripping has been widely used for several decades to remove certain contaminants, and its benefits and limitations as a water treatment technology are well understood. In some cases, a combination of in-situ and ex-situ technologies may be used (either concurrently or successively) to clean up a site if a single technology cannot effectively remediate an entire site with its range of contaminants and subsurface characteristics.
According to the National Research Council, integration of technologies is most effective when the weakness of one technology is mitigated by the strength of another, thus producing a more efficient and cost-effective solution.

As shown in table 1, the DOD components involved in groundwater remediation activities reported using the full range of technologies that we identified as currently available for groundwater remediation. Specifically, the Navy reported that it has used all 15 of the currently available technologies; the Air Force, Army, and Corps reported using 14 each. The Defense Logistics Agency has used 9 of the available technologies for the cleanup of the limited number of contaminated groundwater sites for which it is responsible.

According to department officials, DOD selects the most suitable technology to clean up a contaminated site based on a number of factors, including the type of contaminant, its location and concentration at different levels in the subsurface, and its chemical and physical composition. These officials identified a number of contaminants of concern, such as federally regulated chlorinated solvents (commonly found in metal degreasers) and fuels used for military aircraft and vehicles. DOD officials also consider some other hazardous materials that are not regulated by the federal government—such as the rocket propellant perchlorate—to be contaminants of concern because they are regulated by some states, such as California, where DOD has active, closing, or formerly used defense sites that need groundwater remediation.

According to the groundwater remediation experts we consulted, some of DOD's contaminants of concern, such as chlorinated solvents, can potentially be treated using 14 of the 15 technologies, while others, such as metals, can be treated with only 7 of the 15 technologies. For example, many chlorinated solvents do not readily dissolve in water, and because they are often denser (heavier) than water, they migrate downward and pool at the bottom of aquifers, thereby limiting the number of technologies that can treat them. Alternatively, some contaminants composed of petroleum hydrocarbons (e.g., jet fuel, diesel fuel, and motor gasoline) float on top of the water table because they are less dense (lighter) than water, and technologies such as air sparging or multiphase extraction can often effectively treat or extract them through processes such as volatilization or free product recovery. See table 2 for information on which of the 15 technologies can potentially treat each of DOD's contaminants of concern.

According to DOD guidance on groundwater remediation, department officials should consider the cost-effectiveness and performance of various groundwater remediation options when selecting the most suitable cleanup technology. A number of factors influence total cleanup costs for a given site, such as how long the cleanup is expected to take and the horizontal and vertical extent of the contamination. In addition, according to the National Research Council, actual cleanup costs associated with each technology depend on site-specific hydrogeologic, geochemical, and contaminant conditions. Thus, a particular technology may be the most cost-effective solution for one site and not necessarily for another similarly contaminated site.
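The kind of first-pass screening described above (matching a contaminant to the technologies that can potentially treat it, then narrowing by site conditions and cost) can be illustrated with a short sketch. The mapping below is a hypothetical, truncated subset of table 2, and the depth-based rule is an assumption made purely for illustration:

    # Hypothetical, partial mapping of contaminants of concern to technologies
    # that can potentially treat them (an illustrative subset only; see table 2
    # for the full mapping).
    TREATABLE_BY = {
        "chlorinated solvents": {"bioremediation", "chemical treatment",
                                 "thermal treatment", "permeable reactive barrier",
                                 "advanced oxidation", "air stripping"},
        "petroleum hydrocarbons": {"air sparging", "multiphase extraction",
                                   "bioremediation", "bioreactor"},
        "metals": {"ion exchange", "permeable reactive barrier",
                   "phytoremediation"},
    }

    def candidate_technologies(contaminant: str, plume_depth_ft: float) -> list:
        """First-pass screen: technologies that can potentially treat the
        contaminant, narrowed by one sample site condition. A real feasibility
        study would weigh many more factors, including cost-effectiveness."""
        candidates = set(TREATABLE_BY.get(contaminant, set()))
        # Illustrative assumption: a deep plume rules out trench-built
        # permeable reactive barriers.
        if plume_depth_ft > 50:
            candidates.discard("permeable reactive barrier")
        return sorted(candidates)

    print(candidate_technologies("chlorinated solvents", plume_depth_ft=80))
    # ['advanced oxidation', 'air stripping', 'bioremediation',
    #  'chemical treatment', 'thermal treatment']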
The National Research Council and others have also found that the performance of most technologies, including time for total cleanup, depends on complexities within the site's subsurface (i.e., site heterogeneities) as well as on contaminant characteristics. For example, the effectiveness of certain in-situ technologies—such as air sparging—decreases as site heterogeneity increases because the air will naturally follow certain pathways that may bypass the contaminant. Similarly, the effectiveness of many in-situ technologies may be limited by the presence of some chlorinated solvents that, if heavier than water, can migrate into inaccessible zones in the subsurface. Alternatively, in-situ thermal treatments that use conductors to heat the soil are not as sensitive to heterogeneity in the subsurface and contaminant characteristics, because thermal conductivity varies little with the properties of subsurface materials and certain contaminants are more easily volatilized at elevated temperatures. However, equipment and energy costs may make this approach more costly than other in-situ technologies.

While overall conclusions on the cost-effectiveness of each groundwater remediation technology are difficult to reach, a few groups have attempted to estimate costs for various technologies. For example, EPA has developed a technology cost compendium for several technologies based on cost data from various public and private remediation projects. Similarly, the Federal Remediation Technologies Roundtable—a federal consortium of representatives from DOD, EPA, and other federal agencies—has attempted to evaluate the relative overall cost and performance of selected remediation technologies in general terms. However, according to DOD officials and other experts we consulted, these efforts to compare technologies are of only limited utility because of the site-specific nature of technology decisions.

We did not identify any alternative groundwater remediation technologies being used outside the department that DOD has not already either employed or tested on some scale (laboratory or pilot). However, we did identify a number of new approaches to groundwater remediation being developed by commercial vendors, but these approaches are based on modifications of or enhancements to existing technologies. Most of these new approaches are being used or field-tested by DOD and involve novel materials that are applied to contaminated sites using existing technologies. In addition, we found that DOD is generally aware of new approaches to groundwater remediation, in part through its efforts to develop remediation technologies with the commercial sector. DOD also works with various stakeholders, including the regulatory community, to promote understanding and acceptance of innovative remediation approaches. Some DOD officials and groundwater remediation experts believe additional resources may be needed in order to develop and advance DOD's process for selecting the most appropriate technology at a site.

Most of the new remediation approaches commercial vendors have developed and made available to DOD use existing technologies to apply novel materials to contaminated sites. These materials typically accelerate the breakdown of contaminants through biological or chemical processes. In particular, multiple commercial vendors have developed proprietary compounds used during bioremediation to stimulate microorganisms in the subsurface to biodegrade contaminants.
Some of these compounds are designed to slowly release oxygen or other nutrients, which microorganisms need to biodegrade the contaminants, into the subsurface in an effort to prolong their availability. DOD has also field-tested several novel compounds for bioremediation that are derived from food-grade materials such as molasses or vegetable oils. These compounds can be injected into the contaminated site using pre-existing wells or other existing techniques, such as direct push injection.

For example, the Army used a compound developed by a commercial vendor to stimulate the bioremediation of chlorinated solvents at a contaminated site at its Rocky Mountain Arsenal. This compound reacted with the contaminated groundwater to produce lactic acid, which native microorganisms used to produce the hydrogen that ultimately led to the biological degradation of the contaminants. In addition, the Air Force reported using oxygen-releasing compounds to stimulate aerobic biodegradation at several of its cleanup sites, including a site in Florida contaminated by spilled fuel. DOD has also field-tested the use of molasses during bioremediation to treat chlorinated solvents at Vandenberg and Hanscom Air Force bases. In addition, DOD reported using vegetable oils to stimulate microorganisms in order to treat groundwater contaminated by chlorinated solvents and perchlorate at a variety of locations, including naval facilities in Massachusetts, Rhode Island, and South Carolina.

Commercial vendors have also developed innovative approaches for chemically treating contaminants in the subsurface. For example, several vendors have developed proprietary approaches for delivering oxidants, such as molecular oxygen and ozone with or without hydrogen peroxide, into the subsurface to achieve in-situ chemical oxidation of a variety of contaminants, including fuels and chlorinated solvents. These oxidants are often delivered underground using variations of existing air sparging technologies and a variety of injection technologies. In addition to achieving in-situ chemical oxidation of target contaminants, the use of ozone with or without hydrogen peroxide can enhance the aerobic biodegradation of contaminants because it increases oxygen levels in the subsurface. Commercial vendors have also developed approaches to directly injecting other chemicals that are oxidizing agents, such as persulfate and permanganate, into the subsurface using existing technologies such as injection wells and direct push-probe technologies.

DOD is exploring with the commercial sector other innovative approaches to groundwater remediation that involve modifying the engineering, design, or application of existing technologies. For example, DOD is currently working with the commercial sector to explore innovative uses of nanoscale metallic materials—such as zero-valent iron and palladium-impregnated iron—to improve the efficacy of in-situ chemical treatments of chlorinated solvents commonly found on DOD facilities. In the past, DOD used metallic materials, such as zero-valent iron in granular form, to fill trenches dug into the ground (a form of permeable reactive barrier) to chemically reduce chlorinated solvent plumes. The iron reacts with chlorinated solvents, transforming them into benign products, such as ethane and ethene. However, treating contaminant plumes located deep within the subsurface is often difficult and costly, and in some cases technically impossible, using this approach.
Because of their size, nanoscale particles can be mixed with other materials—such as vegetable oil and water—and injected deep into the subsurface using existing technologies to treat contaminant sources or plumes. Furthermore, nanoscale particles have high surface areas relative to their volume (i.e., more metal is available to contact and react with the contaminants), which can lead to increased rates of reaction and more effective treatment.

We found that DOD is actively involved in researching and testing new approaches to groundwater remediation, largely through its efforts to develop and promote the acceptance of innovative groundwater remediation technologies. According to the National Research Council, research on innovative remediation technologies is sponsored almost exclusively by federal agencies such as DOD and, in some circumstances, by individual companies and industry groups that have joined with federal agencies in seeking more cost-effective solutions to common problems.

In particular, the DOD-funded Strategic Environmental Research and Development Program (SERDP) supports public and private research on contaminants of concern to DOD and innovative methods for their treatment, among other activities. Created in 1990, the program primarily focuses on issues of concern to DOD, although it is jointly managed by DOD, EPA, and the Department of Energy. In fiscal year 2004, SERDP spent about $49 million to fund and manage projects in a variety of areas, including 27 projects related to groundwater remediation. In response to technology needs and requirements generated by each of the DOD components, SERDP funds research projects in private, public, and academic settings on the fundamentals of contaminant behavior, environmental toxicity, and the advanced development of cost-effective innovative groundwater remediation technologies, among other things. For example, SERDP has funded research projects to examine such issues as the innovative use of vegetable oils for bioremediation; zero-valent-iron-based bioremediation of explosives; and the behavior of, and treatment options for, several emerging groundwater contaminants not yet regulated by the federal government, such as 1,4-dioxane (found in solvents), N-nitrosodimethylamine (found in rocket fuel), and trichloropropane (used as a degreaser and paint stripper). In addition, SERDP holds workshops with the scientific, engineering, academic, regulatory, and DOD-user communities to discuss DOD's issues and identify needs for future research, development, and testing of groundwater remediation techniques.

DOD also pursues innovative solutions to groundwater remediation through its Environmental Security Technology Certification Program (ESTCP). This program, founded in 1995, field-tests and validates promising innovative environmental technologies that attempt to address DOD's highest-priority environmental requirements, including groundwater remediation. Using a process similar to SERDP's, ESTCP solicits proposals from public and private researchers to field-test laboratory-proven remediation technologies that have broad DOD and market application. Once ESTCP accepts a proposal, it identifies a military partner, which provides a site on a DOD installation where the researcher can field-test the technology and document the technology's cost, performance, and reliability. In fiscal year 2004, ESTCP spent about $35 million to fund and manage its program, including 36 projects on groundwater remediation.
These projects include demonstrations of an enhanced recovery technology using innovative surfactants, of emulsified zero-valent nanoscale iron to treat chlorinated solvents, and of an ion exchange technology for the removal and destruction of perchlorate. ESTCP and SERDP have co-located offices, and, according to DOD officials, the two programs work together to pursue the development of innovative groundwater remediation technologies from basic research through advanced field-testing and validation. ESTCP often funds the demonstration of technologies that were developed by private or public researchers with financial support from SERDP.

At Dover Air Force Base, DOD has constructed three double-walled underground test areas (referred to as cells) that enable researchers to inject common soil and groundwater pollutants into a natural geologic setting as test constituents, without allowing the test constituents to come into contact with the surrounding environment. These test cells, known as the Groundwater Remediation Field Laboratory, include one large test cell and several smaller ones, all sharing the same outer containment cell area. The cells are constructed of interlocking steel sheet pilings with sealed grouted joints that extend from the ground's surface to a depth of 40 feet. This safe testing area is in an area with "ideal geology," according to the site program manager, because it has a shallow aquifer contained by a clay layer, which prevents the migration of contaminants. This laboratory is the only place in the United States that offers such a test setting. A variety of technologies have been tested here for cleaning up a range of contaminants. For example, tests for cleanup of trichloroethylene (TCE) are under way using a combination of three technologies: soil vapor extraction, bioremediation, and air stripping.

In addition to funding the development of innovative technologies, DOD works with various stakeholders, including the regulatory community, to promote the understanding and acceptance of these technologies. For example, DOD participates in the Interstate Technology and Regulatory Council (ITRC), a state-led coalition that works with the private sector, regulators, and other stakeholders to increase the regulatory acceptance of new environmental technologies. ITRC develops guidance on innovative environmental technologies and sponsors training for regulators and others on technical and regulatory issues related to environmental cleanup technologies and innovative groundwater remediation approaches. According to ITRC, these efforts are designed to help regulators streamline their review processes and enable wider acceptance of innovative environmental technologies across state boundaries. In 2004, ITRC and DOD signed a memorandum of understanding on the relationship between the two organizations. As a result of the agreement, DOD now provides several liaisons to ITRC's board of advisers and helps the group develop materials and training courses on innovative groundwater remediation technologies. According to a DOD official, the department's partnership with ITRC has led to enhanced cooperation among state regulators, DOD personnel, and community stakeholders and has increased the deployment of innovative technologies at DOD cleanup sites.
Although DOD is actively involved in the research and development of innovative technologies, some groundwater remediation experts and some DOD officials with whom we consulted believe that additional resources may be needed to develop and advance DOD's process for selecting the most appropriate technology at a site. These individuals believe that a better understanding of the nature and extent of contamination at a site is critical for selecting appropriate cleanup technologies. Furthermore, these experts and some DOD officials believe that additional resources may be appropriate for examining and improving methods and engineering approaches for optimizing the performance of the 15 types of groundwater remediation technologies that are currently available. Other groundwater remediation experts and some DOD officials suggested that more resources may be needed to further develop innovative approaches to emerging groundwater remediation issues and to educate DOD personnel and regulators on these approaches.

DOD generally agreed with the content of the report, stating that it is an accurate summary of DOD's use and field tests of remedial technologies; DOD also provided technical clarifications that we have incorporated, as appropriate.

We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Administrator of EPA; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

This report (1) describes the groundwater remediation technologies that the Department of Defense (DOD) is currently using or field-testing and (2) examines whether any new groundwater remediation technologies are being used outside the department or are being developed by commercial vendors that may have potential for DOD's use, and the extent to which DOD is researching and developing new approaches to groundwater remediation. In addition, this report provides limited information on the key characteristics, benefits, and limitations of selected groundwater remediation technologies.

To address the first objective, we developed a questionnaire that we sent to the DOD components responsible for DOD's groundwater cleanup efforts—the Air Force, Army, U.S. Army Corps of Engineers, Defense Logistics Agency, and Navy. In the questionnaire, we listed groundwater remediation technologies and asked these DOD components to indicate which technologies they have implemented and still currently use. We also asked the components to provide examples of specific groundwater remediation projects. We developed the list of technologies based on a review of reports and existing lists developed by the National Research Council, Environmental Protection Agency (EPA), Federal Remediation Technologies Roundtable, and others, as well as through discussions with a groundwater remediation consulting firm and several nationally recognized groundwater remediation experts.
To better understand DOD's processes for environmental cleanup and technology development, we met with officials from the offices of the Deputy Undersecretaries of Defense for Installations and Environment and for Science and Technology. We also reviewed documents, reports, and guidance on groundwater remediation from the Office of the Secretary of Defense and the various DOD components involved in groundwater remediation. To obtain information on how DOD uses groundwater remediation technologies to treat contaminants of concern, we toured several bioremediation projects at Dover Air Force Base and spoke with a groundwater remediation program manager for the Air Force. To address our second objective, we contracted with consultants from the Washington, D.C., office of Malcolm Pirnie Inc. to gather information from commercial vendors on the range of currently available groundwater remediation technologies. We also attended a national groundwater remediation conference, where we spoke with a number of vendors of groundwater remediation technologies about their products, efforts to develop innovative approaches to groundwater remediation, and remediation work they may have performed for DOD. In addition, we collected and reviewed reports and studies from these vendors to better understand the range of technologies available to DOD. We also consulted with four nationally recognized groundwater remediation experts—two from academia and two from industry—to obtain information on innovative remediation technologies currently available or under development by the commercial sector. We selected these experts on the basis of their independence, knowledge of and experience with groundwater remediation technologies, and recommendations from the National Academy of Sciences and others. In addition, we consulted with a senior groundwater remediation official from EPA's Groundwater and Ecosystem Restoration Division, who is an expert on technologies used for groundwater remediation. Through these sources, we identified 15 technologies that are currently available commercially for the treatment of contaminated groundwater. For the purposes of this report, we defined a technology as a distinct technical method or approach for treating or removing contaminants found in groundwater. We did not consider any modifications or enhancements to a technology, such as variations in the material or equipment used during treatment, to be a separate technology. To determine whether there were any technologies currently being used outside of DOD, we compared the list of 15 currently available technologies with information provided to us by DOD officials on technologies currently used by DOD for groundwater remediation. To identify the extent to which DOD supports the research and development of new approaches to groundwater remediation, we interviewed officials from the Strategic Environmental Research and Development Program and the Environmental Security Technology Certification Program. We reviewed reports, project portfolios, and other documents developed by these two programs. To gain a better understanding of DOD's efforts to field-test innovative approaches to groundwater remediation, we visited a DOD National Environmental Technology Test Site, located in Delaware, where private and public researchers can test innovative groundwater remediation technologies. We observed several ongoing research projects and interviewed an official responsible for managing the test facility.
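The comparison method described above amounts to a set-difference test. A minimal sketch of that logic follows; the technology names mirror the 15 categories detailed later in this report, but the set of technologies DOD reported using is a placeholder, not GAO's actual questionnaire results.

```python
# Illustrative sketch of the gap analysis: which commercially available
# technologies, if any, has DOD not considered or used? The DOD-reported
# set below is a placeholder, not the actual questionnaire results.
commercially_available = {
    "advanced oxidation", "air stripping", "bioreactors",
    "constructed wetlands", "ion exchange", "adsorption",
    "air sparging", "bioremediation", "enhanced recovery (surfactants)",
    "chemical treatment", "monitored natural attenuation",
    "multiphase extraction", "permeable reactive barriers",
    "phytoremediation", "thermal treatment",
}
dod_reported = set(commercially_available)  # placeholder: all 15 reported in use

gap = commercially_available - dod_reported  # set difference
print(sorted(gap) if gap else "none identified")
```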
To gain a better understanding of DOD's relationship with the Interstate Technology and Regulatory Council, we reviewed a memorandum of understanding between the two organizations and interviewed an official who serves as DOD's liaison to the council. Information presented in this report is based on publicly available documents and information provided by government officials, independent consultants, and experts. We did not review nonpublic research and development activities that may be under way in private laboratories. We reviewed data for accuracy and consistency, and corroborated DOD-provided data to the extent possible. We assessed the reliability of the DOD-provided data by reviewing related documentation, including DOD's annual reports to Congress on its Defense Environmental Restoration Program and information provided by consultants. We performed our work from January 2005 through May 2005, in accordance with generally accepted government auditing standards. 1. Advanced oxidation processes often use ultraviolet light irradiation with oxidizers such as ozone or hydrogen peroxide to produce free radicals, which break down and destroy chlorinated solvents, fuels, and explosive contaminants as water flows through a treatment reactor tank. Depending on the design of the system, the final products of this treatment can be carbon dioxide, water, and salts. An advantage of advanced oxidation processes is that they destroy the contaminant, unlike some other technologies, which only shift the contaminant into a phase that is more easily handled and removed. There are some limitations to these processes; for instance, maintenance of the treatment equipment can be a problem if certain substances—such as insoluble oil or grease—are allowed into the system. Also, the handling and storage of oxidizers can require special safety precautions. The cost of this type of remediation is largely dependent on the volume and flow rate of groundwater to be treated, energy requirements, and chemicals utilized. Operations and maintenance costs are also a factor in the overall cost of this approach. For the purposes of this report, advanced oxidation processes also include the related technologies of photolysis and photocatalysis. 2. Air stripping involves the mass transfer of volatile contaminants from water to air by exposing contaminated water to large volumes of air, so that the contaminants, such as chemical solvents, undergo a physical transformation from liquid to vapor. In a typical air stripper setup, called a packed tower, a spray nozzle at the top of a tower pours contaminated water over packing media or perforated trays within the tower. At the bottom of the tower, a fan forces air up through the tower countercurrent to the water flow, thus stripping the contaminants from the water. The contaminants in the air leaving the tower must then be removed and disposed of properly. Air strippers can be combined with other technologies for treatment of groundwater. Advantages of this technology include its potential to effectively remove the majority of the volatile organic contaminants of concern. Moreover, this mature technology is relatively simple, its design practices are standardized and well documented, and, in comparison with other approaches, it is often less expensive. However, maintenance can be an issue with this technology if inorganic or biological material clogs or fouls the equipment, and process energy costs can be high.
3. Bioreactors are biochemical-processing systems designed to degrade contaminants in groundwater using microorganisms, through a process similar to that used at a conventional wastewater treatment facility. Contaminated groundwater flows into a tank or basin, where it interacts with microorganisms that grow and reproduce while degrading the contaminant. The excess biomass produced is then separated from the treated water and disposed of as a biosolids waste. This technology can be used to treat, among other things, chlorinated solvents, propellants, and fuels. Potential advantages of bioreactors include relatively low operations and maintenance costs and the destruction, rather than mere mass transfer, of the contaminants. Moreover, regulators and other stakeholders generally accept bioreactor technology as a proven approach for remediation. Nonetheless, there are some limitations to the use of bioreactors, including decreases in effectiveness if contaminant concentrations in the influent water are too high or too low to support microorganism growth and if nuisance microorganisms enter the system. Additionally, the sludge produced at the end of the process may need further treatment or specialized disposal. Bioreactor cost is influenced by the upfront capital needed for installation, setup, and start-up, as well as the operations and maintenance costs associated with longer-term treatment. 4. Constructed wetlands use artificial wetland ecosystems (organic materials, microbial fauna, and algae) to remove metals, explosives, and other contaminants from inflowing water. The contaminated water flows into the wetland and is processed by wetland plants and microorganisms to break down and remove the contaminants. Wetlands, intended to be a long-term remediation approach, can be created with readily available equipment and generally can operate with low maintenance costs. Furthermore, because this technology provides a new ecosystem for plant and animal life, it is generally popular with the public. However, this approach is often more suitable for groundwater that is ultimately discharged to the surface rather than reinjected into the ground. Also, the long-term effectiveness of this treatment is not well known, as aging wetlands may lose their ability to process certain contaminants over time. Temperature, climate, and water flow rate may adversely affect the processes that break down the contaminants. Applicability and costs associated with constructed wetlands vary depending on site conditions, such as groundwater flow rate, contaminant properties, landscape, topography, soil permeability, and climate. 5. Ion exchange involves passing contaminated water through a bed of resin media or a membrane (specific to the particular contaminant) that exchanges benign ions for contaminant ions, thus removing the contaminants from the water. This approach can be useful for dissolved metals (e.g., hexavalent chromium) and can be used to treat propellants such as perchlorate. Once the ion exchange resin has been filled to capacity, it can be cleaned and reused (following a process called resin regeneration). Ion exchange is usually a short- to medium-term remediation technology. This technology allows contaminated water to be treated at a high flow rate and can completely remove the contaminants from the water. However, some substances—such as oxidants or suspended solids—in the incoming water may diminish the effectiveness of the ion exchange resins.
Furthermore, different resin types can be needed for different contaminants. Among the factors influencing costs are discharge requirements, the volume of water to be treated, contaminant concentration (as well as the presence of other contaminants), and resin regeneration. For the purposes of this report, ion exchange includes technologies that use ion exchange resins or reverse osmosis membranes to remove contaminants from groundwater, including dissolved metals and nitrates. 6. Adsorption (mass transfer) technologies involve passing contaminated water through a sorbent material—such as activated carbon—that will capture the contaminants (through either adsorption or absorption), thus removing or lessening the level of contaminants in the water. The contaminated water is pumped from the aquifer and passed through the treatment vessel containing the sorbent material. As the contaminated water comes into contact with the sorbent, the contaminants attach to its surface and are removed from the water. Benefits of this technology include its ability to treat contaminated water to nondetectable levels and its potential for treating low to high groundwater flow rates as well as multiple contaminants simultaneously. However, some contaminants may not be sorbed well or the sorbent unit may require disposal as hazardous waste. Furthermore, this approach is impractical if the contaminant levels are high due to higher costs resulting from frequent changing of the sorbent unit. If the concentrations of contaminants are low or flow rates for treatment can be kept low, then adsorption technology may be a cost-effective approach. 1. Air sparging introduces air or other gases into a contaminated aquifer to reduce concentrations of contaminants such as fuel or chlorinated solvents. The injected air creates an underground air stripper that removes contaminants by volatilization (a process similar to evaporation that converts a liquid or solid into a gas or vapor). This injected air helps to transport the contaminants up into the unsaturated zone (the soil above the water table, where pores are partially filled with air), where a soil vapor extraction system is usually implemented to collect the vapors produced through this process. This technology has the added benefit of often stimulating aerobic biodegradation (bioremediation) of certain contaminants because of the increased amount of oxygen introduced into the subsurface. Typically, air sparging equipment is readily available and easily installed with minimal disturbance to site operations. However, this technology cannot be used if the contaminated site contains contaminants that do not vaporize or are not biodegradable. In some cases, this technology may not be suitable for sites with free product (e.g., a pool of fuel floating on the water table) because air sparging may cause the free product to migrate and spread contamination. Also, this technology is less effective in highly stratified or heterogeneous soils since injected air tends to travel along paths of least resistance in the subsurface, potentially bypassing areas of contamination. This technology can be less costly than ex-situ technologies because it does not require the removal, treatment, storage, or discharge of groundwater. For the purposes of this report, air sparging includes the related remedial approaches of co-metabolic sparging, sparging using other gases, and in-well air stripping.
2. Bioremediation relies on microorganisms to biologically degrade groundwater contaminants through a process called biodegradation. It may be engineered and accomplished in two general ways: (1) stimulating native microorganisms by adding nutrients, oxygen, or other electron acceptors (a process called biostimulation); or (2) providing supplementary pregrown microorganisms to the contaminated site to augment naturally occurring microorganisms (a process called bioaugmentation). This technology mainly focuses on remediating organic chemicals such as fuels and chlorinated solvents. One approach, aerobic bioremediation, involves the delivery of oxygen (and potentially other nutrients) to the aquifer to help native microorganisms reproduce and degrade the contaminant. Another approach, anaerobic bioremediation, circulates electron donor materials—for example, food-grade carbohydrates such as edible oils, molasses, lactic acid, and cheese whey—in the absence of oxygen throughout the contaminated zone to stimulate microorganisms to consume the contaminant. In some cases, pregrown microbes may be injected into the contaminated area to supplement existing microorganisms and enhance the degradation of the contaminant (the bioaugmentation approach described above). A potential advantage of bioremediation is its ability to treat the contaminated groundwater in place with naturally occurring microorganisms, rather than bringing contaminants to the surface. By using native microorganisms, rather than injecting additional ones, cleanup can be more cost-effective at some sites. However, heterogeneous subsurfaces can make delivering nutrient/oxygen solutions to the contaminated zone difficult by trapping or affecting movement of both contaminants and groundwater. Also, nutrients to stimulate the microorganisms can be consumed rapidly near the injection well, thereby limiting the microorganisms' contact with the contaminants or stimulating unwanted biological growth at the injection site. In summary, this technology avoids the costs associated with bringing water to the surface for treatment; instead, the main costs associated with bioremediation include delivery of the amendments to the subsurface (which varies depending on the depth of contamination), the cost of the amendments themselves, and monitoring of the treatment. For the purposes of this report, bioremediation includes the related bioremedial approaches of bioaugmentation, biostimulation, co-metabolic treatment, enhanced aerobic biodegradation, enhanced anaerobic biodegradation, and biobarriers. 3. Enhanced recovery using surfactant flushing speeds contaminant removal in conventional pump-and-treat systems by injecting surfactants into contaminated aquifers or soil to flush the contaminant toward a pump in the subsurface (some distance away from the injection point); this pump removes the contaminated water and surfactant solution to the surface for treatment and disposal of contaminants. Surfactants are substances that associate with organic compounds such as fuels and chlorinated solvents and significantly increase their solubility, which aids cleanup of contaminated aquifers with less flushing water and pumping time. This technology is applicable to both dense and light nonaqueous phase liquids (DNAPL and LNAPL). Benefits of enhanced recovery approaches include the rapid removal of contaminants, which may significantly reduce cleanup times.
However, regulatory issues may require special attention because injecting surfactant solutions draws extra scrutiny during the approval process; a greater degree of site characterization is often required to satisfy both technical and regulatory requirements. In addition, subsurface heterogeneities and low permeability can interfere with the effective delivery and recovery of the surfactant solution. Furthermore, to the extent that mobilization of organic liquid contaminants is achieved, this approach may be better for LNAPLs than DNAPLs, as LNAPLs tend to migrate upward and DNAPLs downward, possibly trapping them in previously uncontaminated subsurface areas. In addition to the high cost of surfactant solutions, another factor influencing the overall cost of this approach may be the treatment of the surfactant solution that is pumped out of the aquifer. For the purposes of this report, this technology includes related remedial approaches that use co-solvents such as ethanol to improve the solubility of surfactants in the subsurface. 4. Chemical treatments include remediation technologies that chemically oxidize or reduce contaminants when reactive chemicals are injected into the groundwater. This approach converts contaminants such as fuels and explosives into nonhazardous or less-toxic compounds. Depending on the extent of contamination, this process involves injecting chemicals into the groundwater and, because the chemicals react rapidly and extensively with various contaminants of concern, generally yields observable results within a few days to a few months. Additionally, this technology can be tailored to the site and does not require rare or complex equipment, which may help reduce costs. Generally, there are no unusual operations and maintenance costs; however, in-situ chemical treatment may require intensive capital investment for large contaminant plumes or zones where repeated applications or large volumes of reactive chemicals may be required; major costs are associated with injection-well installation (cost influenced by well depth), procurement of the reactive chemicals, and monitoring. Additionally, site characterization is important for the effective delivery of reactive chemicals, as subsurface heterogeneities may result in uneven distribution of the reactive chemicals. For the purposes of this report, chemical treatment also includes various remedial approaches and technologies that chemically oxidize or reduce contaminants in-situ, as well as those that result in the in-situ immobilization and stabilization of soluble metals. 5. Monitored natural attenuation is a relatively passive strategy for in-situ remediation that relies on the naturally occurring physical, chemical, and biological processes that can lessen concentrations of certain contaminants in groundwater sufficiently to protect human health and the environment. The changes in contaminant concentrations are observed through various wells that are placed throughout the contaminated groundwater zone to monitor the level of contamination over time and its migration from its initial location in the subsurface. Some chlorinated solvents and explosives may be resistant to natural attenuation; however, it can still be used in cases of nonhalogenated solvents and some inorganic compounds. If appropriate for a given site, natural attenuation can often be less costly than other forms of remediation because it requires less infrastructure, construction, and maintenance.
Furthermore, it is less intrusive because fewer surface structures are necessary and it may be used in all or selected parts of a contaminated site, alone or in conjunction with other types of remediation. However, compared with active techniques, natural attenuation often requires longer time frames to achieve remediation objectives. 6. Multiphase extraction uses a series of pumps and vacuums to remove free product, contaminated groundwater, and vapors from the subsurface, treat them, and then either dispose of or reinject the treated groundwater. Specifically, one or more vacuum extraction wells are installed at the contaminated site to simultaneously pull liquid and gas from the groundwater and unsaturated soil directly above it. This type of vacuum extraction well removes contaminants from above and below the groundwater table, and can expose more of the subsurface for treatment, notably in low permeability or heterogeneous formations. The contaminant vapors are collected in the extraction wells and taken above ground for treatment. This approach can be used to treat organic contaminants—such as chlorinated solvents and fuels—and can be combined with other technologies, particularly above-ground liquid/vapor treatment, as well as other methods of in-situ remediation such as bioremediation, air sparging, or bioventing. Potential advantages of this technology include its applicability to groundwater cleanup in low permeability and heterogeneous formations and its minimal disturbance to site-specific conditions. However, the system requires complex monitoring and specialized equipment, and it may be difficult to determine and implement the most effective number of pumps. A major contributor to this technology's cost is operations and maintenance, which may run from 6 months to 5 years, depending on site-specific factors. For the purposes of this report, multiphase extraction includes the related technologies of bioslurping and dual-phase extraction. 7. Permeable reactive barriers are vertical walls or trenches built into the subsurface that contain a reactive material to intercept and remediate a contaminant plume as the groundwater passes through the barrier. This technology can be used to treat a wide range of contaminants and is commonly used to treat chlorinated solvents and heavy metals. Reactive barriers usually do not require above-ground structures or treatment, allowing the site to be used while it is being treated. However, their use is limited by the size of the plume since larger contaminant plumes are often more difficult to intercept for treatment. Moreover, the barrier may lose effectiveness over time as microorganisms or chemicals build up on it, making rehabilitation or media replacement necessary. The depth of the contaminated groundwater zone, and thus of the required barrier, may also present some technical challenges. Underground utility lines, rocks, or other obstacles can increase the difficulty of installing a barrier and drive up capital costs. Additionally, because permeable reactive barriers do not treat the contaminant source, but simply the plume, treatment may be required for extended time periods, thus increasing overall cleanup costs. For the purposes of this report, permeable reactive barriers include biotic and abiotic, as well as passive and active treatment barriers. 8. Phytoremediation is the use of selected vegetation to reduce, remove, and contain the toxicity of environmental contaminants, such as metals and chlorinated solvents.
There are several approaches to phytoremediation that rely on different plant system processes and interactions with groundwater and contaminants. One approach to phytoremediation is phytostabilization, which uses plants to reduce contaminant mobility by binding contaminants into the soil or incorporating contaminants into plant roots. Another approach is phytoaccumulation, where specific species of plants are used to absorb unusually large amounts of metals from the soil; the plants are later harvested from the growing area and disposed of in an approved manner. A similar process, called rhizofiltration, occurs when contaminated water moves into mature root systems and circulates through the plants' water supply, allowing the roots to filter out contaminants. Another process, phytovolatilization, removes contaminants by evaporating or volatilizing them from the leaf surface once they have traveled through the plant's system. Phytoremediation offers the benefit of only minimally disturbing the environment and can be used for the treatment of a wide range of contaminants. However, specific plant species required for particular contaminants may be unable to adapt to site conditions due to weather and climate, and phytoremediation may not be an effective approach for deep contamination. While maintenance costs, including cultivation, harvesting, and disposal of the plants, are substantial for this technology, phytoremediation typically has lower costs than alternative approaches. For the purposes of this report, phytoremediation includes phytostabilization, phytoaccumulation, phytoextraction, rhizofiltration, phytodegradation, rhizosphere degradation, organic pumps, and phytovolatilization. 9. Thermal treatments involve either pumping steam into the aquifer or heating groundwater in order to vaporize chlorinated solvents or fuels from the groundwater. The vaporized contaminant then rises into the unsaturated zone and can be removed via vacuum extraction for treatment. There are three main approaches for heating the groundwater in-situ. The first, radio frequency heating, uses the electromagnetic energy found in radio frequencies to rapidly heat the soil in a process analogous to microwave cooking. The second, electromagnetic heating, uses an alternating current to heat the soil and may include hot water or steam flushing to mobilize contaminants. The third uses heating elements in wells to heat the soil. Thermal treatments may be applied to a wide range of organic contaminants and sites with larger volumes of LNAPLs or DNAPLs as well as sites with low permeability and heterogeneous formations. However, the presence of metal and subsurface heterogeneities in the contaminated site may interfere with this process. The heating and vapor collection systems must be designed and operated to contain mobilized contaminants, to avoid their spread to clean areas. The major costs incurred for thermal treatments are for moving specialized equipment to the site, developing infrastructure to provide power, and providing energy to run the system. For the purposes of this report, thermal treatments include related soil-heating technologies, such as steam flushing, conductive heating, and electrical resistance heating. In addition to the contact above, Richard Hung, Lynn Musser, Jonathan G. Nash, Omari Norman, and Diane B. Raynes made key contributions to this report. Jessica A. Evans, Katherine M. Raheb, and Carol Herrnstadt Shulman also made important contributions to this report.
To date, the Department of Defense (DOD) has identified nearly 6,000 sites at its facilities that require groundwater remediation and has invested $20 billion over the past 10 years to clean up these sites. In the past, DOD primarily used "pump-and-treat" technologies to contain or eliminate hazardous contaminants in groundwater. However, long cleanup times and high costs often make pump-and-treat technologies ineffective and uneconomical for groundwater remediation. As directed by Public Law 108-375 and as agreed, GAO (1) described current DOD groundwater remediation technologies and (2) examined whether any new technologies are being used or developed outside the department that may have potential for DOD's use and the extent to which DOD is researching and developing new approaches to groundwater remediation. GAO provided the Department of Defense with a draft copy of the report for its review and comment. DOD generally agreed with the contents, stating that the report is an accurate summary of DOD's use and field tests of remedial technologies. DOD also provided technical clarifications that have been incorporated, as appropriate. DOD has implemented or field-tested all of the 15 types of generally accepted technologies currently available to remediate contaminated groundwater, including several alternatives to pump-and-treat technologies. Some of these technologies, such as bioremediation, introduce nutrients or other materials into the subsurface to stimulate microorganisms in the soil; these microorganisms consume the contaminant or produce byproducts that help break down contaminants into nontoxic or less-hazardous materials. DOD selects the most suitable technology for a given site on the basis of several factors, such as the type of contaminant and its location in the subsurface, and the relative cost-effectiveness of a technology at that site. DOD has identified a number of contaminants of concern at its facilities, each of which varies in its susceptibility to treatment. GAO did not identify any alternative groundwater remediation technologies being used or developed outside DOD that the department has not considered or used. Most of the new approaches developed by commercial vendors and available to DOD generally use novel materials applied to contaminated sites with existing technologies. DOD actively researches and tests new approaches to groundwater remediation largely by developing and promoting the acceptance of innovative remediation technologies. For example, DOD's Strategic Environmental Research and Development Program supports public and private research on contaminants of concern to DOD and innovative methods for their treatment.
The private sector, driven by today's globally competitive business environment, is faced with the challenge of improving its service while lowering costs. As a result, many companies have adopted innovative business practices to meet customer needs and retain profitability. Since DOD is facing a similar challenge of providing better service at a lower cost, it has begun to reexamine its business practices. With the end of the Cold War, the DOD logistics system must support a smaller, highly mobile, high-technology force with fewer resources. Also, due to the pressures of budgetary limits and base closures, DOD must seek new and innovative ways to make logistics processes as efficient and effective as possible. To supply reparable parts for its approximately 4,900 aircraft, the Navy uses an extensive logistics system based on management concepts largely developed decades ago. The Navy's system, commonly called a "pipeline," consists of many activities that play a key role in providing aircraft parts to end-users when and where needed. This pipeline encompasses several functions, including the purchase, storage, distribution, and repair of parts. Another important function of this pipeline is to provide consumable parts (e.g., nuts, bearings, and fuses) that are used extensively to fix reparable parts and aircraft. The Defense Logistics Agency (DLA) provides most of the consumable parts that Navy repair activities need and handles a large part of the warehousing and distribution of reparable parts. Although commercial airlines are not as large as the Navy, their operating characteristics are similar. They maintain fleets of aircraft that use reparable parts and operate logistics pipelines whose activities are similar. For both the Navy and commercial airlines, time plays a crucial role in the responsiveness of logistics operations and the amount of inventory needed. Pipeline complexity also adds to logistics costs by increasing overhead and adding to pipeline times. Condensing and simplifying pipeline operations, therefore, simultaneously improves responsiveness and decreases costs by reducing inventory requirements and eliminating infrastructure (warehouses, people, etc.) needed to manage unnecessary material. The Navy's overall inventory management philosophy is one of maintaining large inventory levels at many different locations to ensure parts are readily available to meet customers' needs. As of September 1995, the Navy had reparable inventory valued at $10.4 billion. However, a portion of this inventory is not needed to support daily operations and war reserves. Of the $10.4 billion inventory, the Navy classifies $1.9 billion (18 percent) as long supply—a term denoting that more stock is on hand than is needed to meet daily operations and war reserve requirements. The $10.4-billion and the $1.9-billion inventories were valued using DOD's standard valuation methodology—reparables requiring repair were reduced by the estimated cost of repair and excess inventory was valued at salvage prices (2.5 percent of latest acquisition cost). Figure 1 details the Navy's allocation of its inventory to daily operations, war reserves, and long supply. The inventory turnover rate is a measure of how efficiently a business uses its inventory investment and can be expressed as the ratio of the dollar value of repairs to the average inventory value. One commercial airline we visited calculated that, using this ratio, it would turn its reparable inventory over once every 5 months.
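To make the arithmetic behind that figure concrete, a minimal sketch follows; the dollar amounts are hypothetical round numbers chosen only to reproduce the airline's rate, not values drawn from airline or Navy records.

```python
def months_per_turn(annual_repair_value: float, avg_inventory_value: float) -> float:
    """Months needed for one full inventory turn, where the turnover rate is
    the ratio of the dollar value of repairs to the average inventory value."""
    turns_per_year = annual_repair_value / avg_inventory_value
    return 12.0 / turns_per_year

# Hypothetical figures: $240 million of repairs per year against a
# $100 million average inventory gives 2.4 turns per year.
print(months_per_turn(240e6, 100e6))  # -> 5.0 months per turn
```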
In comparison, we calculate that, based on fiscal year 1995 repairs, the Navy's wholesale-level inventory of reparable parts would turn over once every 2 years. The Navy incurs significant costs to manage this large inventory investment. At the wholesale level alone, the Navy estimates it spent almost $1.8 billion to repair, buy, and manage reparable parts during fiscal year 1995 (see table 1). This amount does not include the costs to store and maintain parts at operating locations, such as bases and aircraft carriers. Despite the billions of dollars invested in inventory, the Navy's logistics system is still often unable to provide spare parts when and where needed. During fiscal year 1995, Navy aircraft were not mission capable 11.9 percent of the time because spare parts were not available to repair the aircraft (see fig. 2). One reason parts were not available was that the Navy's system often does not provide timely deliveries of parts. The Navy reported that, between October 1994 and June 1995, parts were not immediately available to mechanics at operating locations 25 percent of the time for reparable parts and 43 percent of the time for consumable parts. When a part is not available, an end-user requisitions the part from the wholesale supply system. According to the Navy's data, the time from requisition to delivery of a part averages 16 days for operating bases and 32 days for aircraft carriers. If the Navy's wholesale system does not have the item in stock (32 percent of the time for reparable parts), the Navy places the item on backorder. According to the Navy's data, customers wait over 2.5 months, on average, to receive backordered items. The Navy reported that, as of June 1995, it had more than 31,000 backorders for reparable parts, worth about $831 million. The delay in receiving parts often forces mechanics to cannibalize parts (removing parts from one aircraft to make repairs on another). Between July 1994 and June 1995, the Navy reported that its mechanics at operating bases and on aircraft carriers cannibalized parts at least 70,500 times. This practice is inefficient because the mechanics have to remove a working part from one aircraft and then install it on a different aircraft. According to Navy guidance, cannibalization is a symptom of a failure somewhere in the logistics system, but, in some instances, can be a viable management tool in keeping aircraft operational. Aircraft squadron officials at several locations we visited, however, told us that cannibalizing parts is a routine practice because the Navy's system does not consistently provide replacement parts on a dependable basis. The Navy's large inventory costs and slow customer service are the result of several factors, but the largest contributor is a slow and complex repair pipeline. According to Navy officials, about 75 percent of component repairs are relatively minor in nature and can be done by maintenance personnel at the operating bases. They also stated that, when a part requires more complex and extensive repair (about 25 percent of the time), the process can involve as many as 16 time-consuming steps as parts move through the repair pipeline (see fig. 3). Component parts can accumulate at each step in the process, which increases the total number of parts that are needed to meet customer demands and to ensure a continuous flow of parts.
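Because pipeline time is additive across steps, even modest delays at each of the 16 steps compound into months. The sketch below illustrates this; apart from the 73-day receiving-and-workshop figure cited in the next paragraph, the per-step times are hypothetical placeholders chosen only to sum to the roughly 4-month total estimated there.

```python
# Hypothetical per-step flow times (days). The report cites only aggregates:
# about 73 days for repair facility receiving plus repair workshops, and a
# measured pipeline of roughly 4 months overall. Other entries are placeholders.
step_days = {
    "remove part and attempt base-level repair": 20,   # placeholder
    "pack and ship to wholesale storage": 12,          # placeholder
    "repair facility receiving and workshops": 73,     # cited average
    "return to storage or issue to a customer": 15,    # placeholder
}
total = sum(step_days.values())
print(f"{total} days, or about {total / 30:.1f} months")  # -> 120 days, ~4.0 months

# Not included: the unmeasured wait in wholesale storage before repair is
# scheduled, which quarterly repair scheduling could stretch by months.
```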
Tracking parts through each of the 16 steps listed in figure 3, we estimate, using the Navy's flow time data, that it can take about 4 months, on average, from the time a broken part is removed from an aircraft until the time it is ready for reissue. As figure 3 illustrates, a broken part can pass through a number of base- and wholesale-level steps. At the base level, after a mechanic removes a broken part from an aircraft, the item is routed through base maintenance. If the part cannot be repaired at the base, it is then sent to a wholesale storage location, where it sits until scheduled for repair. Once scheduled, it is inducted into repair workshops and fixed, then sent to storage or used to fill a customer's order. The Navy reported that over 190,000 parts were fixed through this process during fiscal year 1995 at a cost of about $957 million. While the measured repair pipeline averages about 4 months, actual times could be significantly longer because this figure does not include the time parts sit in wholesale storage awaiting repair. The Navy does not measure this step in the process; however, this time could be substantial. For example, the Navy does not promptly forward items to repair workshops after they break. Also, because the Navy schedules most repairs quarterly, many broken items could sit in storage for several months before being repaired. Parts may also sit in storage because many broken items in the Navy's system are not needed to support daily operations or war reserves. Of the portions of the pipeline that are measured, the time spent receiving and repairing items at repair facilities accounts for the largest amount of pipeline time. Shown in figure 3 as "repair facility receiving" and "repair workshops," these activities take an average of 73 days to complete. In examining the repair process at two repair facilities, we found that parts can be routed through several different workshops, thereby increasing the time to complete repairs. Functions such as testing, cleaning, machining, and final assembly are sometimes done at different locations at the repair facility. As a result, parts could be handled, packaged, and transported several times throughout the repair process. According to Navy officials, this is a common practice at the Navy's repair facilities. At one repair facility, we examined 10 frequently repaired pneumatic and hydraulic components and found that about 85 percent of the repair time needed for these parts involved activities such as unpacking, handling, and routing the part to different workshops. The remaining 15 percent of the time was spent on the actual repair of the items. One item we examined had a repair time of 232 hours. However, only 20 hours was needed to actually repair the item; the remaining 212 hours involved time to handle and move the part to different locations. In addition to delays caused by routing parts to different locations, mechanics often do not have the necessary consumable parts (nuts, bolts, bearings, fuses, etc.) that are used in large quantities to repair parts. According to Navy officials, having the necessary consumable parts is another important factor affecting the timely repair of components. The Navy calculates that the lack of parts adds as much as 4 weeks to the average repair time. As of February 1996, the Navy had 11,753 reparable aircraft parts, valued at $486 million, in storage because the parts needed to complete their repairs were not available.
These items, which had been packaged and moved to a warehouse next to the repair facility, had been in storage for an average of 9 months. Figure 4 shows aircraft components awaiting parts in a warehouse at the Navy’s repair depot at Cherry Point, North Carolina. The Navy’s data indicates that DOD’s distribution and transportation system is slow in moving material among storage, repair, and end-user facilities and is another factor adding to the length of the repair pipeline. For example, with the current system, it takes an average of 16 days for a customer to receive a part at an operating base after a requisition is placed. As of June 1995, the Navy estimated that over one-half of this time involved DLA’s retrieval of the part from the warehouse and shipment of the part to the customer. In recognition of a changing global threat, increasing budgetary pressures, and the need for improvements to logistics system responsiveness, the Navy has recently undertaken three primary initiatives aimed at streamlining logistics operations. These initiatives are the regionalization of supply management and maintenance functions, privatization and outsourcing, and logistics response time reductions. The Navy is in the early stages of developing these initiatives and has not yet identified many of the specific business practices that it will use to achieve its goals. We have not reviewed the feasibility of these initiatives. However, we believe the initiatives provide a framework for improvements by focusing on the speed and complexity of the logistics pipeline. Under its regional supply initiative, the Navy is consolidating certain supply operations that are managed by a number of organizations under regionally managed supply centers. For example, naval bases, aviation repair depots, and shipyards each have supply organizations to manage their parts needs. These activities often use different information systems and business practices and their own personnel and facilities. Under the new process, one supply center in each of seven geographic regions will centrally manage the spare parts for these individual operations, with the objective of improving parts’ visibility and reducing the overhead expenses associated with separate management functions. The Navy also hopes this approach will lead to better sharing of inventory between locations, thus allowing it to reduce inventories. The Navy is not consolidating inventories into fewer storage locations; however, it is transferring data and management functions to the centers. Similarly, maintenance activities, such as base-level repair operations and depot-level repair operations, are managed by different organizations. As a result, maintenance capabilities, personnel, and facilities may be unnecessarily duplicated. Under the regional maintenance initiative, the Navy is identifying these redundant maintenance capabilities and consolidating these operations into regionally based repair facilities. For example, in one region, the Navy is consolidating 32 locations used to calibrate maintenance test equipment into 4 locations. The Navy believes that, by eliminating the fragmented management approach to supply management and maintenance, it can decrease infrastructure costs by reducing redundancies and eliminating excess capacity. The Navy also believes that by moving away from highly decentralized operations, it will be better positioned to improve and streamline operations Navy-wide. 
Both initiatives are in the early phases, however, so broad-based improvements have not yet occurred. The Navy also has an initiative to outsource and privatize functions. This initiative encompasses a broad spectrum of Navy activities, and possible outsourcing of functions within the reparable parts pipeline is only one aspect of this effort. Within the pipeline, the Navy has identified several material management functions, such as cataloging of items and overseas warehousing operations, as potential candidates for outsourcing. In January 1996, the Navy began developing cost analyses to determine whether contracting these functions out would be beneficial. Navy officials told us that they did not know when analyses on all candidates would be completed. One official said, however, that some candidates may be outsourced in 1997 at the earliest. The Navy expects other activities to be targeted for outsourcing in the future. According to Navy officials, those candidates will be identified as the Navy's initiatives to streamline and improve operations progress. The objective of the third initiative, logistics response time reduction, is to reduce the amount of time it takes a customer, such as a mechanic, to receive a part after placing an order. This initiative takes into account the series of processes that contribute to ensuring customers get the parts they need. These processes include placing and processing orders; storing, transporting, and distributing inventory; and repairing broken items. The Office of the Secretary of Defense (OSD) has established responsiveness goals that the Navy and other services are encouraged to meet. OSD wants to reduce the time it takes to fill a customer's order from wholesale stock to 5 days by September 1996 and to 3 days by September 1998. OSD also wants to reduce the average backorder age to 30 days by October 2001. The Navy hopes to achieve these goals by looking at the pipeline as a whole and improving processes where needed. To identify and carry out improvements, the Navy has established a Logistics Response Time team, consisting of representatives from across the Navy and from DLA. Thus far, the team has focused primarily on collecting the data needed to accurately measure pipeline performance. In the spring of 1996, the team expects to begin identifying areas where process improvements should be applied to achieve the biggest gains in performance. This work will then be used to identify specific practices for carrying out these improvements. The airline industry has developed leading-edge practices that focus on reducing the time and complexity associated with logistics operations. We identified four best practices in the airline industry that have the potential for use in the Navy's system. These practices have resulted in significant improvements and reduced logistics costs, especially for British Airways. These practices are the prompt repair of items, the reorganization of the repair process, the establishment of partnerships with key suppliers, and the use of third-party logistics services. When used together, they can help maximize a company's inventory investment, decrease inventory levels, and provide a more flexible repair capability. In our opinion, they address many of the same problems the Navy faces and represent practices that could be applied to Navy operations. These practices appear particularly suited to Navy facilities that repair aircraft and components, such as repair depots and operating bases.
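One way to see why these four time-focused practices also shrink inventories is Little's Law, the standard relationship tying the stock sitting in a pipeline to the rate of demand and the pipeline's flow time. The sketch below uses a hypothetical demand rate, not a Navy statistic, together with the roughly 4-month flow time estimated earlier.

```python
def pipeline_stock(demand_per_day: float, flow_time_days: float) -> float:
    """Little's Law: average items in the pipeline = demand rate x flow time."""
    return demand_per_day * flow_time_days

demand = 100.0  # hypothetical broken parts entering the pipeline per day

slow = pipeline_stock(demand, 120)  # ~4-month flow time -> 12,000 parts in the pipeline
fast = pipeline_stock(demand, 30)   # a 1-month flow time -> 3,000 parts in the pipeline
print(f"Cutting flow time fourfold frees roughly {slow - fast:,.0f} parts of stock")
```

In other words, every day trimmed from the pipeline directly reduces the number of spare parts that must be owned to keep aircraft flying.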
Certain airlines begin repairing items as quickly as possible, which prevents the broken items from sitting idle for extended periods. Minimizing idle time helps reduce inventories because it lessens the need for extra "cushions" of inventory to cover operations while parts are out of service. In addition, repairing items promptly promotes flexible scheduling and production practices, enabling maintenance operations to respond more quickly as repair needs arise. Prompt repair involves inducting parts into maintenance shops soon after broken items arrive at repair facilities. Prompt repair does not mean that all parts are fixed, however. The goal is to quickly fix only those parts that are needed. One airline that uses this approach routes broken items directly to holding areas next to repair shops, rather than to stand-alone warehouses, so that mechanics can quickly access broken parts when it comes time for repair. These holding areas also give mechanics better visibility of any backlog. It is difficult to specifically quantify the benefits of repairing items promptly because the practice is often combined with others to speed up pipeline processes. One airline official said, however, that his airline has kept inventory investment down partly because it does not allow broken parts to sit idle. In addition, the Air Force found through a series of demonstration projects that prompt repair, when used with other practices, could enable operations to be sustained with significantly fewer parts. For example, the Air Force reported in February 1995 that after the new practices were put in place at one location, 52 percent ($56.3 million) of the items involved in the test were potentially excess. The Air Force tested the new practices as part of its Lean Logistics program, which aims to improve Air Force logistics operations. One approach to simplify the repair process is the "cellular" concept. This concept brings all the resources, such as tooling and support equipment, personnel, and inventory, that are needed to repair a broken part into one location, or one "cell." This approach simplifies the flow of parts by eliminating the time-consuming exercise of routing parts to workshops in different locations. It also ensures that mechanics have the technical support they need so that operations run smoothly. In addition, because inventory is placed near workshops, mechanics have quick access to the parts they need to complete repairs more quickly. British Airways adopted the cellular approach after determining that parts could be repaired as much as 10 times faster using this concept. Another airline that adopted this approach in its engine-blade repair shop was able to reduce repair time by 50 to 60 percent and decrease work-in-process inventory by 60 percent. Figure 5 shows a repair cell used in British Airways' maintenance center at Heathrow Airport. Several airlines and manufacturers have worked with suppliers to improve parts support while reducing overall inventory. Two approaches—the use of local distribution centers and integrated supplier programs—specifically seek to improve the management and distribution of consumable items. These approaches help ensure that the consumable parts for repair and manufacturing operations are readily available, which prevents items from stalling in the repair process and is crucial in speeding up repair time.
In addition, by improving management and distribution methods, such as using streamlined ordering and fast deliveries, these approaches enable firms to delay the purchase of inventory until a point that is closer to the time it is needed. Firms, therefore, can reduce their stocks of “just-in-case” inventory. Local distribution centers are supplier-operated facilities that are established near a customer’s operations and provide deliveries of parts within 24 hours. One airline that used this approach has worked with key suppliers to establish more than 30 centers near its major repair operations. These centers receive orders electronically and, in some cases, handle up to eight deliveries a day. Airline officials said that the ability to get parts quickly has contributed to repair time reductions. In addition, the officials said that the centers have helped the airline cut its on-hand supply of consumable items nearly in half. Integrated supplier programs involve shifting inventory management functions to suppliers. Under this arrangement, a supplier is responsible for monitoring parts usage and determining how much inventory is needed to maintain a sufficient supply. The supplier’s services are tailored to the customer’s requirements and can include placing a supplier representative in customer facilities to monitor supply bins at end-user locations, place orders, manage receipts, and restock bins. Other services can include 24-hour order-to-delivery times, quality inspection, parts kits, establishment of data interchange links and inventory bar coding, and vendor selection management. One manufacturer that used this approach received parts from its supplier within 24 hours of placing an order 98 percent of the time, which enabled it to reduce inventories for these items by $7.4 million—an 84-percent reduction. We have issued a series of reports on similar private sector practices that could be applied to DOD’s consumable inventories. These reports recommended new techniques that would minimize DOD’s role in storing and distributing consumable inventories. Companies, such as PPG Industries and Bethlehem Steel, have reduced consumable inventories by as much as 80 percent and saved millions in associated costs by using “supplier parks” and other techniques that give established commercial distribution networks the responsibility to manage, store, and distribute inventory on a frequent and regular basis to end-users. The airlines we contacted provided examples of how third-party logistics providers can be used to reduce costs and improve performance. Third-party firms take on responsibility for managing and carrying out certain logistics functions, such as storage and distribution. Outsourcing these tasks enables companies to reduce overhead costs because it eliminates the need to maintain personnel, facilities, and other resources that are required to do these functions in-house. It also helps companies improve various aspects of their operations because third-party providers can offer expertise that companies often do not have the time or the resources to develop. For example, one airline contracts with a third-party logistics provider to handle deliveries and pickups from suppliers and repair vendors, which has improved the reliability and speed of deliveries and reduced overall administrative costs. The airline receives most items within 5 days, which includes time-consuming customs delays, and is able to deliver most items to repair vendors in 3 days. 
In the past, deliveries took as long as 3 weeks. Third-party providers can also assume other functions. One third-party firm that we visited, for example, can assume warehousing and shipping responsibilities and provide rapid transportation to speed parts to end-users. The company can also pick up any broken parts from a customer and deliver them to the source of repair within 48 hours. In addition, this company maintains the data associated with warehousing and in-transit activities, offering real-time visibility of assets. The best practices that we observed in the airline industry can prove particularly beneficial when used in an integrated fashion. One airline, British Airways, used all of these practices as part of an overall reengineering effort, and its experience illustrates the benefits of using such an integrated approach. These efforts have helped transform British Airways from a financially troubled, state-owned airline into a successful private sector enterprise. British Airways today is considered among the most profitable airlines in the world and has posted profits every year since 1983. Table 2 shows several key logistics performance measures of British Airways and the Navy. In addition to implementing the four practices discussed earlier, British Airways took a number of other steps to successfully reengineer its logistics operations. One of the first steps was to undertake a fundamental shift in corporate philosophy, in which British Airways placed top priority on customer service and cost containment. This philosophy directed all improvement efforts, and specific practices were assessed on how well they furthered these overall goals. Also, British Airways approached the process of change as a long-term effort that requires a steady vision and a focus on continual improvement. Although the airline has reaped significant gains to date, it continues to reexamine and improve its operations. Additional steps taken by British Airways to reengineer its operations include (1) reorienting the workforce toward the new philosophy; (2) providing managers and employees with adequate information systems to control, track, and assess operations; and (3) refurbishing existing facilities and constructing new ones to accommodate the new practices. As part of the Navy's current efforts to improve the logistics system's responsiveness and reduce its complexity, we recommend that the Secretary of Defense direct the Secretary of the Navy, working with DLA, to develop a demonstration project to determine the extent to which the Navy can apply best practices to its logistics operations. We recommend that the Secretary of the Navy identify several naval facilities to participate in the project and test specific practices highlighted in this report. The practices should be tested in an integrated manner, where feasible, to take advantage of the interrelationships among these practices.
The specific practices that should be tested are the following:
- inducting parts at repair depots soon after they break, consistent with repair requirements, to prevent parts from sitting idle;
- reorganizing repair workshops using the cellular concept to reduce the time it takes to repair parts;
- using integrated supplier programs to shift the management responsibilities for consumable inventories to suppliers;
- using local supplier distribution centers near repair facilities for quick shipments of parts to mechanics; and
- expanding the use of third-party logistics services to store and distribute spare parts between the depots and end-users to improve delivery times.

We recommend that this demonstration project be used to quantify the costs and benefits of these practices and to serve as a means to identify and alleviate barriers or obstacles (such as strong internal resistance to change and unique operational requirements) that may inhibit the expansion of these practices. After these practices have been tested, the Navy should consider expanding and tailoring the use of these practices, where feasible, so they can be applied to other locations.

In its comments on a draft of this report, DOD agreed with the findings and recommendations. DOD stated that by September 30, 1996, the Deputy Under Secretary of Defense (Logistics) will issue a memorandum to the Secretary of the Navy and the Director of DLA, requesting that a demonstration project be initiated. According to DOD, this project should be started by the first quarter of fiscal year 1997. The Navy will conduct a business case analysis and assess the leading-edge practices highlighted in this report for their applicability in a Navy setting and, where appropriate, will tailor and adopt a version of these practices for use in its repair process. DOD also stated that it will ask the Navy to submit an in-process review not later than 6 months after the inception of the business case analysis. Finally, DOD agreed that after the practices have been tested, the Navy should consider expanding and tailoring the use of these practices so they can be applied to other locations. DOD’s comments are included in appendix I.

We reviewed detailed documents and interviewed officials about the Navy’s inventory policies, practices, and efforts to improve its logistics operations. We contacted officials at the Office of the Chief of Naval Operations, Washington, D.C.; U.S. Naval Supply Systems Command, Arlington, Virginia; U.S. Naval Air Systems Command, Arlington, Virginia; U.S. Atlantic Fleet Command, Norfolk, Virginia; and the Naval Inventory Control Point, Philadelphia, Pennsylvania. Also at these locations, we discussed the potential applications of private sector logistics practices to the Navy’s operations. To examine Navy logistics operations and improvement efforts, we visited the following locations: Naval Aviation Depot, Cherry Point, North Carolina; Naval Aviation Depot, Jacksonville, Florida; Oceana Naval Air Station, Virginia Beach, Virginia; Jacksonville Naval Air Station, Jacksonville, Florida; Norfolk Naval Air Station, Norfolk, Virginia; Fleet and Industrial Supply Center, Norfolk, Virginia; Fleet and Industrial Supply Center, Jacksonville, Florida; Defense Distribution Depot, Cherry Point, North Carolina; Defense Distribution Depot, Jacksonville, Florida; and U.S.S. Enterprise.
At these locations, we discussed the operations of the current logistics system, customer satisfaction, and the potential application of private sector logistics practices with supply, maintenance, and aircraft squadron personnel. Also, we reviewed and analyzed detailed information on inventory levels and usage; repair times; supply effectiveness and response times; and other related logistics performance measures. Except where noted, our data reflect inventory valued by the Navy at latest acquisition costs. We did not test or otherwise validate the Navy’s data.

To identify leading commercial practices, we used information from our February 1996 report that compared Air Force logistics practices to those of commercial airlines. This information included an extensive literature search to identify leading inventory management concepts and detailed examinations and discussions of logistics practices used by British Airways, United Airlines, Southwest Airlines, American Airlines, Federal Express, Boeing, and Tri-Star Aerospace. We also participated in roundtables and symposiums with recognized leaders in the logistics field to obtain information on how companies are applying integrated approaches to their logistics operations and establishing supplier partnerships to eliminate unnecessary functions and reduce costs. Finally, to gain a better understanding of how companies are making breakthroughs in logistics operations, we attended and participated in the Council of Logistics Management’s Annual Conference in San Diego, California. We did not independently verify the accuracy of logistics costs and performance measures provided by private sector organizations. We conducted our review from June 1995 to April 1996 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense and the Navy; the Directors of DLA and the Office of Management and Budget; and other interested parties. We will make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix II.

Charles I. (Bud) Patton, Jr.
Kenneth R. Knouse, Jr.

Best Management Practices: Reengineering the Air Force’s Logistics System Can Yield Substantial Savings (GAO/NSIAD-96-5, Feb. 21, 1996).
Inventory Management: DOD Can Build on Progress in Using Best Practices to Achieve Substantial Savings (GAO/NSIAD-95-142, Aug. 4, 1995).
Commercial Practices: DOD Could Reduce Electronics Inventories by Using Private Sector Techniques (GAO/NSIAD-94-110, June 29, 1994).
Commercial Practices: Leading-Edge Practices Can Help DOD Better Manage Clothing and Textile Stocks (GAO/NSIAD-94-64, Apr. 13, 1994).
Commercial Practices: DOD Could Save Millions by Reducing Maintenance and Repair Inventories (GAO/NSIAD-93-155, June 7, 1993).
DOD Food Inventory: Using Private Sector Practices Can Reduce Costs and Eliminate Problems (GAO/NSIAD-93-110, June 4, 1993).
DOD Medical Inventory: Reductions Can Be Made Through the Use of Commercial Practices (GAO/NSIAD-92-58, Dec. 5, 1991).
Commercial Practices: Opportunities Exist to Reduce Aircraft Engine Support Costs (GAO/NSIAD-91-240, June 28, 1991).
Pursuant to a congressional request, GAO examined the Navy's aircraft logistics system, focusing on the Navy's efforts to improve and reduce the cost of the system. GAO found that: (1) the best practices identified in the airline industry could improve the responsiveness of the Navy's logistics system and save millions of dollars; (2) the Navy's logistics system is complex and often does not respond quickly to customer needs; (3) the factors contributing to this situation include the lack of spare parts, slow distribution, and inefficient repair practices; (4) some customers wait as long as four months for available parts; (5) the Navy is centralizing its supply management and repair activities, outsourcing certain management functions, and analyzing the effectiveness of its repair pipeline; (6) the best practices employed by the private sector show promise for the Navy because these firms hold minimum levels of inventory, have readily accessible spare parts, and achieve quick repair times; (7) it takes an average of 11 days to repair a broken part in the private sector, as opposed to 37 days in the Navy's repair process; (8) the private-sector average is a result of repairing items immediately after they break, using local distribution centers and integrated supplier programs, and relying on third-party logistics providers; and (9) many of the airline industry's best practices are compatible with the Navy's logistics system.
Under the Federal Land Policy and Management Act of 1976, as amended (FLPMA), BLM manages about 250 million acres of federal land for multiple uses, including recreation; range; timber; minerals; watershed; wildlife and fish; and natural scenic, scientific, and historical values, as well as for the sustained yield of renewable resources. In addition, the Mineral Leasing Act of 1920 charges Interior with responsibility for oil and gas leasing on federal and private lands where the federal government has retained mineral rights. BLM is responsible for managing approximately 700 million onshore mineral acres, which include the acreage leased for oil and gas development. To manage its responsibilities, BLM administers its programs through its headquarters office in Washington, D.C.; 12 state offices; 45 district offices; and 128 field offices. BLM headquarters develops guidance and regulations for the agency, while the state, district, and field offices manage and implement the agency’s programs. Thirty BLM field offices, located primarily in the mountain West, were involved in oil and gas development.

To drill for oil or natural gas on leased lands, a company must submit an APD to BLM. APDs are used to approve drilling and all related activities on land leased by a company, including road building; digging pits to store drilling effluent; placing pipelines to carry oil and gas to market; and building roads to transport equipment, personnel, and other production-related materials. After an APD is approved, operators can submit proposals to BLM, in the form of a sundry notice, for modifications to their approved APD. Sundry notices may involve activities like changing the location of a well, adding an additional pipeline, or adding remote communications equipment.

Interior and BLM have administrative categorical exclusions in place for numerous types of activities, such as constructing nesting platforms for wild birds and constructing snow fences for safety. To use such an administrative categorical exclusion in approving a project on BLM land, the agency screens each proposed project for extraordinary circumstances, such as significant impacts to threatened and endangered species, historic or cultural resources, or human health and safety or potentially significant cumulative environmental effects when coupled with other actions. When one or more extraordinary circumstances exist, BLM guidance precludes staff from using an administrative categorical exclusion for the project.

Section 390 of the Energy Policy Act of 2005 established five categorical exclusions specific to oil and gas development:

“(1) Individual surface disturbances of less than 5 acres so long as the total surface disturbance on the lease is not greater than 150 acres and site-specific analysis in a document prepared pursuant to NEPA has been previously completed.

(2) Drilling an oil or gas well at a location or well pad site at which drilling has occurred previously within 5 years prior to the date of spudding the well.

(3) Drilling an oil or gas well within a developed field for which an approved land use plan or any environmental document prepared pursuant to NEPA analyzed such drilling as a reasonably foreseeable activity, so long as such plan or document was approved within 5 years prior to the date of spudding the well.

(4) Placement of a pipeline in an approved right-of-way corridor, so long as the corridor was approved within 5 years prior to the date of placement of the pipeline.
(5) Maintenance of a minor activity, other than any construction or major renovation or [sic] a building or facility.”

In its process for approving oil or gas projects, BLM’s original guidance provided that the agency can use a section 390 categorical exclusion when a project meets the conditions set forth for any of the five types of section 390 categorical exclusions. BLM guidance still directs staff to document their decision and rationale for using a specific section 390 categorical exclusion. Furthermore, BLM guidance directed its staff when using section 390 categorical exclusions to comply with the Endangered Species Act and the National Historic Preservation Act; to conduct on-site reviews for all APDs; and to add site-specific restrictions or conditions of approval if deemed necessary to protect the environment or cultural resources.

In September 2009, we reported that 26 of the 30 field offices with oil and gas activities used almost 6,900 section 390 categorical exclusions to approve oil-and-gas-related activities from fiscal year 2006 through fiscal year 2008. Of these, BLM field offices used section 390 categorical exclusions to approve nearly 6,100 APDs (about 28 percent of approximately 22,000 federal wells approved by BLM) during this period. Three BLM field offices (Pinedale, Wyoming; Farmington, New Mexico; and Vernal, Utah) accounted for almost two-thirds of section 390 categorical exclusions used to approve APDs. Section 390 CX3 accounted for more than 60 percent of the section 390 categorical exclusions used to approve APDs. BLM also used section 390 categorical exclusions to approve more than 800 nondrilling projects from fiscal year 2006 through fiscal year 2008. These approvals were for a wide range of activities, such as changing a well location, adding new pipelines, and doing road maintenance. The Buffalo, Wyoming, field office was the most prominent user of section 390 categorical exclusions for these purposes, approving more than 250 nondrilling projects with section 390 categorical exclusions.

The vast majority of BLM officials we spoke with told us that using section 390 categorical exclusions expedited the application review and approval process, but the amount of time saved by field offices depended on a variety of factors and circumstances influencing the extent to which field offices used the exclusions. A frequently cited factor contributing to these efficiency gains was the extent to which proposed projects fit the specific conditions set forth in each section 390 categorical exclusion. BLM officials also identified other factors that contributed to their ability to use section 390 categorical exclusions, including the field office resource specialists’ familiarity with the area of the proposed action, the area’s environmental sensitivity, the extent of the area’s cultural resources, and the proposed action’s extent of surface disturbance. Specifically, BLM officials told us that section 390 categorical exclusions were regularly used to approve projects in areas where sensitive environmental or cultural concerns were few (e.g., no threatened or endangered species, or limited cultural resources in the area), where the resource specialists were familiar with the location of the proposed action, or where the proposed project was not unusual or was likely to have minimal impact on the local environment. Additionally, field office policies could contribute to how often section 390 categorical exclusions were used. The differences in office policies result from field office managers’ comfort with the use of section 390 categorical exclusions and their interpretations of appropriate use.
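To illustrate the kind of screening decision described above, the following sketch encodes the CX1 conditions quoted earlier together with an extraordinary-circumstances check. It is purely illustrative: the data structure and field names are hypothetical, it is not BLM's actual procedure, and whether the extraordinary-circumstances screen applies to section 390 exclusions was itself the disputed legal question discussed later in this statement.

```python
# Purely illustrative encoding of the screening flow described above;
# field names, the project structure, and the decision strings are
# hypothetical, and this is not BLM's actual decision procedure.
from dataclasses import dataclass, field

@dataclass
class Project:
    surface_disturbance_acres: float
    lease_total_disturbance_acres: float
    prior_nepa_site_analysis: bool          # site-specific NEPA doc exists
    extraordinary_circumstances: list = field(default_factory=list)

def qualifies_for_cx1(p: Project) -> bool:
    """Mirrors the statutory CX1 conditions quoted earlier: individual
    disturbance < 5 acres, lease total not over 150 acres, and a
    previously completed site-specific NEPA analysis."""
    return (p.surface_disturbance_acres < 5
            and p.lease_total_disturbance_acres <= 150
            and p.prior_nepa_site_analysis)

def screening_decision(p: Project) -> str:
    # Under BLM's administrative-exclusion practice, any extraordinary
    # circumstance (e.g., endangered species, cultural resources)
    # precludes a categorical exclusion; whether that screen applies
    # to section 390 exclusions was the disputed question.
    if p.extraordinary_circumstances:
        return "full NEPA review: " + ", ".join(p.extraordinary_circumstances)
    if qualifies_for_cx1(p):
        return "document rationale and approve under section 390 CX1"
    return "does not fit CX1; evaluate CX2-CX5 or prepare NEPA analysis"

print(screening_decision(Project(3.0, 120.0, True)))
print(screening_decision(Project(3.0, 120.0, True, ["critical habitat"])))
```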
Because it is not always clear how oil and gas development would have proceeded in the absence of section 390 categorical exclusions, BLM officials told us that estimating the amount of time saved by using the exclusions was difficult. In field offices where section 390 categorical exclusions were seldom used to approve APDs or nondrilling actions, officials told us that a typical section 390 categorical exclusion approval document saved a few hours of total staff time. In contrast, in field offices where section 390 categorical exclusions were used more often, the time savings were cumulatively more significant, although officials could not quantify them. Officials in these field offices told us that while the savings for a single APD did not by itself mean that the APD was approved in fewer calendar days, the total number of APDs processed in the office in a given period was probably larger because of the cumulative time saved by using section 390 categorical exclusions.

Industry officials with whom we spoke also agreed that BLM’s use of section 390 categorical exclusions had generally decreased APD-processing times and that this increased efficiency was more pronounced in some field offices than in others. Acknowledging that the type of development and the availability of NEPA documents were both critical factors, they also stressed that differences in field office policies, field office operations, and field management personalities generally influenced how readily a given BLM field office used section 390 categorical exclusions. For example, according to industry officials, some field offices were conservative and cautious and therefore reluctant to use section 390 categorical exclusions if even minimal environmental or cultural resource concerns existed. This tendency ran counter to what some industry officials told us was their interpretation of the law—namely, that section 390 categorical exclusions should be used whenever a project meets the required conditions. Industry officials told us that in some cases BLM was overly cautious in applying section 390 categorical exclusions, in part because BLM feared litigation from environmental groups. Industry officials commented on the lack of consistency among BLM field offices in how section 390 categorical exclusions were used but overall told us that section 390 categorical exclusions were a useful tool and have contributed to expedited application processing. They applauded the exclusions for reducing redundant and time-consuming NEPA documentation and making APD application processing more predictable and flexible.

In September 2009, we reported that BLM’s field offices used section 390 categorical exclusions to approve oil and gas activities in violation of the law and also failed to follow agency guidance. Specifically, we found six types of violations of the Energy Policy Act of 2005 and five types of noncompliance with BLM guidance (see table 1). Overall, we found many more examples of noncompliance with guidance than violations of the law. We did not find intentional actions on the part of BLM staff to circumvent the law; rather, our findings reflected what appear to be honest mistakes stemming from confusion in implementing a new law with evolving guidance.
Nevertheless, even though some of the violations of law—such as approving multiple wells with one decision document—were technical in nature, they must be taken seriously. In some instances, violations we found may have thwarted NEPA’s twin aims of ensuring that both BLM and the public were fully informed of the environmental consequences of BLM’s actions. For example, approval of multiple wells on one or more well pads could have required an environmental assessment or environmental impact statement, which would likely have provided additional information on the environmental impacts of approving multiple wells. According to BLM officials, the outcome of the NEPA process likely would have yielded the same result. Nevertheless, the purpose of NEPA is to provide better information for decision making, not necessarily to alter the decisions ultimately made. The projects would likely have been approved, but the specific location and conditions of approval might have differed, and BLM and the public might have had more detailed information on the environmental impacts of the approvals.

A lack of definitive and clear guidance from BLM, as well as lack of oversight of field offices’ actions, contributed to the violations of law and noncompliance with BLM’s existing guidance. At the time of our report, BLM had provided several key guidance documents; we found, however, that this guidance did not contain the specificity and examples needed to clearly direct staff in the appropriate use and limits of section 390 categorical exclusions. Specifically, BLM’s guidance at the time said little, if anything, about (1) the documentation needed to support a decision to use a section 390 categorical exclusion or (2) the proper circumstances for using section 390 categorical exclusions to approve modifications to existing APDs through “sundry notices.” Furthermore, BLM headquarters and state offices we spoke with had generally not provided any oversight or review of the field offices’ actions in using section 390 categorical exclusions that could have ensured compliance with the law or BLM guidance.

We reported in September 2009 that the lack of clarity in section 390 of the Energy Policy Act of 2005 and in BLM’s implementing guidance led to serious concerns on the part of industry, environmental groups, BLM officials, and others about when and how section 390 categorical exclusions should be used to approve oil and gas development. Specifically, these concerns included the following:

- Key elements of section 390 of the Energy Policy Act of 2005 were undefined, leading to fundamental questions about what section 390 categorical exclusions were and how they should be used. This lack of direction left these elements open to differing interpretations, debate, and litigation, leading to serious concerns that BLM was using section 390 categorical exclusions in too many—or too few—instances. BLM officials, environmental groups, industry groups, and others raised serious concerns with the law as a whole. These concerns related to four key elements: (1) the definition of “categorical exclusion” and whether the screening for extraordinary circumstances was required, (2) whether the use of section 390 categorical exclusions was mandatory or discretionary, (3) the meaning of the phrase “rebuttable presumption,” and (4) the level of public disclosure required for section 390 categorical exclusions.
- The law’s descriptions of the five types of section 390 categorical exclusions prompted more specific concerns about how to appropriately use one or more of the five types of section 390 categorical exclusions. These concerns related to (1) the adequacy of NEPA documents supporting the use of a particular section 390 categorical exclusion, (2) consistency with existing NEPA documents, (3) the rationale for the 5-year time frame used in some but not all types of section 390 categorical exclusions, and (4) the piecemeal approach to development fostered by using section 390 categorical exclusions.

- Key terms describing the conditions that must be met when using a section 390 categorical exclusion raised concerns about how they should be interpreted and applied. In particular, each of the five types of section 390 categorical exclusions contains terminology that is undefined in the law and for which BLM had not provided clear or complete guidance. Specifically, the ambiguous terms included (1) “individual surface disturbances” under section 390 CX1, (2) “maintenance of a minor activity” under section 390 CX5, (3) “construction or major renovation or [sic] a building or facility” under section 390 CX5, (4) “location” under section 390 CX2, and (5) “right-of-way corridor” under section 390 CX4.

Vague or nonexistent definitions of key terms in the law and BLM guidance led to varied interpretations among field offices and concerns about misuse and a lack of transparency. In September 2009, we reported that the failure of both the law and BLM guidance to clearly define key conditions that projects must meet to be eligible for approval with a section 390 categorical exclusion caused confusion among BLM officials, industry, and the public over what activities qualified for section 390 categorical exclusions. As a result, we suggested that Congress consider amending section 390 to clarify and resolve some of the key issues that we identified, including but not limited to (1) clearly specifying whether section 390 categorical exclusions apply even in the presence of extraordinary circumstances and (2) clarifying what the phrase “rebuttable presumption” means and how BLM must implement it in the context of section 390.

In addition, to improve BLM field offices’ implementation of section 390 categorical exclusions, we recommended that BLM take the following three actions:
- issue detailed and explicit guidance addressing the gaps and shortcomings in its guidance;
- provide standardized templates or checklists for each of the five types of section 390 categorical exclusions, which would specify, at minimum, what documentation is required to justify their use; and
- develop and implement a plan for overseeing the use of section 390 categorical exclusions to ensure compliance with both law and guidance.

While we were working on our September 2009 report, the exact meaning of the phrase “shall be subject to a rebuttable presumption that the use of a categorical exclusion under the National Environmental Policy Act of 1969 (NEPA) would apply” was in dispute in a lawsuit in federal court. In Nine Mile Coalition v. Stiewig, environmental groups sued BLM, alleging that the phrase meant that BLM was required to avoid using a section 390 categorical exclusion in approving a project where extraordinary circumstances were present.
BLM settled the case in March 2010, agreeing, among other things, to issue a new instruction memorandum stating that the agency would not use section 390 categorical exclusions where extraordinary circumstances were present.

In May 2010, BLM issued “Instruction Memorandum No. 2010-118,” which was the first in a series of guidance documents BLM planned to issue to address the recommendations in our September 2009 report. BLM’s May 2010 instruction memorandum announced several key reforms to the way BLM staff can use section 390 categorical exclusions. These reforms substantially addressed the gaps and shortcomings in BLM’s guidance that we identified in our report, directing, for example, that section 390 CX2 or CX3 no longer be used to approve drilling wells after the law’s allowed 5-year time frame or that section 390 CX3 not be used to approve drilling a well without sufficient supporting NEPA documentation. The memorandum explicitly identified the types of NEPA documents needed to adequately support the use of section 390 categorical exclusions to approve new wells and directed that any supporting NEPA analysis must be specific to the proposed drilling site. The memorandum also directs BLM field offices to ensure that all oil and gas development approved with a section 390 categorical exclusion conform to the analysis conducted in the supporting land use plan and come within the range of environmental effects analyzed in the plan and associated NEPA documents. In addition, the May 2010 instruction memorandum implemented the settlement in Nine Mile Coalition v. Stiewig by requiring BLM field offices to screen for the presence of extraordinary circumstances—such as for cumulative impacts on air quality or critical habitat—whenever considering the use of a section 390 categorical exclusion.

According to BLM officials, the agency developed a second instruction memorandum in 2011 to address our recommendation that it standardize templates and checklists its field offices use in approving each of the five types of section 390 categorical exclusions to specify, at a minimum, the documentation required to justify their use. This draft second instruction memorandum was undergoing review by the department when, on August 12, 2011, a decision was reached in Western Energy Alliance v. Salazar. In this case, an oil and gas trade association sued BLM, alleging, among other things, that the agency issued its May 2010 instruction memorandum without following proper rule-making procedures and that the instruction memorandum’s provision concerning extraordinary circumstances violated section 390. The court held that the instruction memorandum constituted a regulation that BLM adopted without following proper rule-making procedures, and the court issued a nationwide injunction blocking implementation of the memorandum. The court did not address whether the instruction memorandum was consistent with section 390; neither did it address the meaning of the phrase “rebuttable presumption” in section 390. According to a BLM official, the ruling has prevented BLM from implementing the parts of the May 2010 instruction memorandum directly related to extraordinary circumstances and the use of section 390 CX2 and CX3 and has also called into question the issuance of the second instruction memorandum aimed at further addressing our recommendations.

In conclusion, it is now uncertain what actions BLM may take in response to the most recent court decision.
These actions could include, but are not limited to, moving forward and issuing the May 2010 instruction memorandum as a regulation or possibly appealing the decision. Chairman Lamborn, Ranking Member Holt, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information about this testimony, please contact Mark Gaffigan or Anu K. Mittal at (202) 512-3841 or [email protected] and [email protected], respectively. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition to the contact named above, Jeffery D. Malcolm (Assistant Director), Mark A. Braza, Ellen W. Chu, Heather E. Dowey, Richard P. Johnson, Michael L. Krafve, and Tama R. Weinberg made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Energy Policy Act of 2005 was enacted in part to expedite domestic oil and gas development. Section 390 of the act authorized the Department of the Interior's Bureau of Land Management (BLM) to use categorical exclusions to streamline the environmental analysis required under the National Environmental Policy Act of 1969 (NEPA) when approving certain oil and gas activities. Numerous questions have been raised about how and when BLM should use these section 390 categorical exclusions. In September 2009, GAO reported on BLM's first 3 years of experience—fiscal years 2006 through 2008—using section 390 categorical exclusions. This testimony is based on GAO's September 2009 report (GAO-09-872) and updated with information on court decisions that have been reached since the report was issued. The testimony focuses on (1) the extent to which BLM used section 390 categorical exclusions and the benefits, if any, associated with their use; (2) the extent to which BLM complied with the Energy Policy Act of 2005 and agency guidance; (3) key concerns, if any, associated with section 390 categorical exclusions; and (4) how BLM has responded to GAO's recommendations and other recent developments. For its September 2009 report, GAO analyzed a nongeneralizable random sample of 215 section 390 categorical exclusion decision documents from all BLM field offices that used section 390 categorical exclusions and interviewed agency officials and others.

GAO's analysis of BLM field office data showed that section 390 categorical exclusions were used to approve almost 6,900 oil-and-gas-related activities from fiscal year 2006 through fiscal year 2008. Nearly 6,100 of these categorical exclusions were used for drilling permits and the rest for other nondrilling activities. Most BLM officials GAO spoke with said that section 390 categorical exclusions increased the efficiency of certain field office operations, but it was not possible to quantify these benefits.

GAO reported that BLM's use of section 390 categorical exclusions through fiscal year 2008 often did not comply with either the law or BLM's guidance. First, GAO found several types of violations of the law, including approving projects inconsistent with the law's criteria and drilling a new well after mandated time frames had lapsed. Second, GAO found numerous examples where officials did not correctly follow agency guidance, most often by failing to adequately justify the use of a categorical exclusion. A lack of clear guidance and oversight contributed to the violations and noncompliance. Many instances of noncompliance were technical in nature, whereas others were more significant and may have thwarted NEPA's twin aims of ensuring that BLM and the public are fully informed of the environmental consequences of BLM's actions.

In September 2009, GAO reported that a lack of clarity in section 390 and BLM's guidance had caused industry, environmental groups, BLM officials, and others to raise serious concerns about the use of section 390 categorical exclusions. First, fundamental questions about what section 390 categorical exclusions were and how they should be used led to concerns that BLM might have been using these categorical exclusions in too many—or too few—instances. Second, specific concerns were raised about key concepts underlying the law's description of certain section 390 categorical exclusions.
Third, vague or nonexistent definitions of key terms in the law and BLM guidance that describe the conditions to be met when using a section 390 categorical exclusion led to varied interpretations among field offices and concerns about misuse and a lack of transparency. As a result, GAO suggested that Congress may want to consider amending the act to clarify section 390, and GAO recommended that BLM clarify its guidance, standardize decision documents, and ensure compliance through more oversight. The Department of the Interior concurred with GAO's recommendations. In May 2010, in response to a court settlement and GAO's recommendations, BLM issued a new instruction memorandum substantially addressing the gaps and shortcomings in BLM's guidance that GAO had identified. In addition, BLM was developing a second instruction memorandum to address GAO's recommendation that it standardize decision documents when, on August 12, 2011, a decision was reached in Western Energy Alliance v. Salazar. The court held that the May 2010 instruction memorandum constituted a regulation that BLM adopted without using proper rule-making procedures and issued a nationwide injunction blocking the memorandum's implementation. According to a BLM official, the ruling has prevented BLM from implementing key parts of the memorandum and called into question the issuance of the second memorandum aimed at further addressing GAO's recommendations. GAO is making no new recommendations at this time.
Because of rising costs, a fundamental reexamination of how the Bureau conducts the decennial census—including improving the cost-effectiveness of the actual count, or enumeration—is needed. In response to external pressures to contain costs, including our 2010 report calling for such a reexamination, the Bureau is researching and testing innovations and improvements (as necessary) in an effort to conduct the 2020 Census at a lower cost per housing unit than the cost estimate of the 2010 Census, while still maintaining high quality. (The 2010 Census cost estimate was approximately $94 per housing unit, in constant 2010 dollars.) Census costs have risen over the years: the cost of the 2010 Census represents a 38 percent increase in the cost per housing unit over costs for the 2000 Census; this in turn was a 76 percent increase over 1990 Census costs. According to Bureau officials, without substantial and bold innovation, the cost of conducting the 2020 Census likely will continue this trend, and may become prohibitive. (Figure 1 illustrates the increase in cost per housing unit from 1970 through 2010.)

According to the Bureau’s 2020 Census Business Plan, the rising costs of the 2010 Census were largely driven by several factors, including substantial investments in a major national update of its address list during 2009, just prior to the enumeration in 2010. The address list—referred to by the Bureau as its Master Address File (MAF)—is a data file that contains a list of all known living quarters in the United States and Puerto Rico. Since 2000, the Bureau has used addresses provided by the U.S. Postal Service (USPS) Delivery Sequence File (DSF) as a starting point to update the MAF. The Bureau uses the MAF to support the decennial census as well as the American Community Survey and other ongoing demographic surveys. In conjunction with the MAF, the Bureau maintains its Topologically Integrated Geographic Encoding and Referencing system (TIGER): this system contains spatial geographical information that associates MAF address data with TIGER geography data on the Bureau’s maps.

To determine whether and how to reengineer address canvassing for the 2020 Census, the Bureau is conducting ongoing efforts to improve its map and address databases. For example, the Bureau’s Geography Division is working with USPS, other federal agencies, and state, local, and tribal governments on a new program called the Geographic Support System Initiative (GSS-I). This initiative allows government agencies at all levels to regularly share and continuously update their address lists and road data with the Bureau. According to Bureau documents, GSS-I is relying on partnering with federal, state, tribal, and local government entities—as well as the private sector—to meet two Census-related data needs: (1) obtaining accurate, complete, and timely information about where people live (such as address data), coordinates of residential structures, and other map features (such as street centerlines); and (2) detecting information changes, so that the Bureau can identify such things as new roads and structures and update the MAF/TIGER database in response. According to the Bureau’s current plans, state, local, and tribal governments (which maintain address lists for purposes such as emergency response and property assessment) would have the opportunity to share addresses with the Bureau throughout the decade, rather than only during the 2 years prior to the census, as was done for the 2010 Census.
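The per-decade growth figures above imply the following rough per-housing-unit costs. The sketch below is simple arithmetic on the percentages cited (in constant 2010 dollars), not a separate estimate.

```python
# Back-of-the-envelope check of the per-housing-unit cost trend cited
# above (constant 2010 dollars): 2010 was about $94, a 38 percent
# increase over 2000, which in turn was 76 percent above 1990.
cost_2010 = 94.00
cost_2000 = cost_2010 / 1.38   # implied 2000 cost, about $68
cost_1990 = cost_2000 / 1.76   # implied 1990 cost, about $39

for year, cost in ((1990, cost_1990), (2000, cost_2000), (2010, cost_2010)):
    print(f"{year}: ~${cost:,.0f} per housing unit")

# If the ~38 percent per-decade growth simply continued, 2020 would run
# roughly 94 * 1.38, or about $130 per housing unit, illustrating why
# the Bureau aims for a lower cost per housing unit than in 2010.
```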
One of the Bureau’s efforts to improve its address and mapping database involves two projects within the Bureau’s 2020 Research and Testing Program. These projects are using modeling to predict where changes (e.g., address additions and deletions) are likely to occur in the MAF. These models may be used to identify areas where update activities are required to assure the MAF is as complete and accurate as possible. If the Bureau decides to limit its address canvassing, this information is intended to help determine which areas meet acceptable quality thresholds for address coverage, as well as to identify areas in which address canvassing would be more effective in assuring a complete and accurate address list. In another, related effort—also integrated with the 2020 Research and Testing Program—the Bureau’s Geography Division has a team in place to interactively review address and mapping information (including imagery and other source materials) in order to identify areas in which counts of addresses in the MAF are consistent with numbers of housing units on the ground, as well as to identify areas in need of updating. Bureau officials said that they are planning to test the modeling projects in 2014 and 2015, and to compare them to the interactive review effort, in order to establish evidence of what mix of modeling and imagery-based reviews might best identify areas most in need of updating. Details on how the comparison will be made and tested are not yet available for our review.

Additionally, in a test beginning in September 2014, the Bureau will conduct a Partial Block Canvassing Test—a component of its larger Address Validation Test. For this test, Census staff will canvass areas even smaller than the usual blocks of geography used for canvassing, according to Bureau officials. This is being done under the belief that if the Bureau can demonstrate operational success at canvassing such small areas—what it refers to as “partial block canvassing”—it may be able to reengineer its canvassing operation by targeting efforts to similarly small areas. Doing so would eliminate the expense of canvassing an entire geographic block when only a part of it is in need of update. In another effort, the Bureau is investigating the role and possible contributions the private sector can make in improving its address and mapping databases. According to Bureau officials, reliance on the private sector is necessary in order to maintain the major upgrades that were made to its address and mapping databases for the last decennial.

As we have previously reported, it is important for the Bureau to remain on schedule to keep downstream activities on track. As part of its 2020 Census schedule development, the Bureau divided the 14-year life cycle of the 2020 Census into five phases. The life cycle began in fiscal year 2009 with the Options Analysis phase. The second phase, Early Research and Testing, comprises work being done through the Bureau’s GSS-I program and the 2020 Research and Testing Program. This work is intended, in part, to explore how MAF/TIGER updates could be modified to control costs or improve quality. Figure 2 illustrates the sequencing of the five 2020 Census phases.

The Bureau is taking a number of actions to help it develop a better understanding of the different sources available for updating address and mapping data and to better position it for cost reduction opportunities in data acquisition while increasing the quality of the MAF/TIGER database.
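The modeling projects described above are not publicly specified in detail. As a rough illustration of the general approach (scoring areas by their likelihood of MAF change so that canvassing can be targeted), the following sketch fits a simple classifier to synthetic block-level data; all features, parameters, and data are hypothetical.

```python
# A minimal sketch of the kind of modeling described above: predicting
# which geographic blocks are likely to have MAF address changes so
# that canvassing can be targeted. Features, data, and model choice
# are all hypothetical, not the Bureau's actual models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_blocks = 1_000

# Hypothetical block-level predictors: recent building permits, new road
# segments detected in imagery, and years since the block's last update.
X = np.column_stack([
    rng.poisson(2.0, n_blocks),      # building permits
    rng.poisson(0.5, n_blocks),      # new road segments
    rng.uniform(0, 10, n_blocks),    # years since last MAF update
])
# Synthetic "truth": change is more likely where growth signals are high.
logit = 0.8 * X[:, 0] + 1.2 * X[:, 1] + 0.2 * X[:, 2] - 3.0
y = rng.random(n_blocks) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
p_change = model.predict_proba(X)[:, 1]

# Send field canvassers only to blocks above a chosen risk threshold.
targeted = np.flatnonzero(p_change > 0.5)
print(f"{targeted.size} of {n_blocks} blocks flagged for targeted canvassing")
```

A model of this general form would let the Bureau canvass only high-risk blocks while accepting the existing MAF in low-risk areas, which is the cost-saving logic behind the targeted (and partial block) canvassing tests described above.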
According to Bureau officials, while more significant decisions on data sources remain, they have decided to initially use the following data sources to help improve address and mapping information in the 2020 Census MAF/TIGER database:
- sufficiently reliable address and geospatial data through GSS-I, provided by state, local, and tribal governments with active address and mapping efforts underway;
- address, aerial, and spatial data from other federal agencies; and
- imagery via a commercial source.

GSS-I partnerships. As the Bureau continuously updates and maintains data in the MAF/TIGER database, Bureau officials have decided that state, local, and tribal governments that have data of sufficient quality—and that participate in GSS-I partnerships—will be the primary source of address and geospatial data in their geographic areas. For governments whose data pass a series of Bureau content and quality checks, the GSS-I partnership program data will be collected throughout the decade. According to current plans, state, local, and tribal governments that reliably maintain address lists (for purposes such as emergency response and property assessment) would be invited to share addresses with the Bureau throughout the decade, rather than only during the 2 years prior to the census, as was done for the 2010 Census.

Federal agencies. Bureau officials have decided that some address, aerial, and spatial data will be collected from other federal agencies. Thus far, Bureau officials indicated that the federal agency data being used to update the MAF/TIGER address and spatial data for 2020 include address data from the USPS Delivery Sequence File, and satellite imagery provided by the U.S. Department of Agriculture’s (USDA) National Agriculture Imagery Program and the National Geospatial-Intelligence Agency (NGA). These address, aerial, and spatial data are used to identify areas with growth or reduction in the number of housing units.

Commercial sources. Bureau officials have decided to use a commercial imagery service, giving the Bureau the capacity to store and manage imagery it has already collected from other sources—such as local governments—and the ability to use imagery provided from the commercial vendor for areas where the Bureau lacks imagery data. The Bureau is exploring other options as well, including in-house use of imagery from other federal agencies or from additional commercial vendors.

Additional data sources. In addition to these data source decisions, Bureau officials have been conducting ongoing research and outreach with a variety of entities to identify additional data source options the Bureau could use to meet its address and mapping needs. Table 1 describes data sources the Bureau has already identified that might possibly meet its address and mapping needs. As an example of work with entities outside the Bureau, during fiscal year 2011, the Bureau invited officials from USPS and the U.S. Geological Survey to participate in GSS-I research and development working groups to help identify sources for address and feature updates for the MAF/TIGER database. In September 2011, the Bureau hosted a meeting with state and local governments, nonprofit organizations, and various national associations to discuss techniques for address list development and maintenance, as well as potential pilot programs for data sharing among these groups.
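The precedence the Bureau describes (partner data of sufficient quality first, then federal sources such as the USPS DSF, then commercial data) can be sketched as a simple fallback lookup. The example below is illustrative only; the record layout, FIPS keys, and sample addresses are invented.

```python
# Illustrative sketch of the source hierarchy described above: where a
# GSS-I partner government supplies data of sufficient quality, it is
# the primary source for its area; otherwise fall back to federal data
# (e.g., the USPS DSF) and then to a commercial source. The record
# structure and quality flags here are hypothetical.
from typing import Optional

# Hypothetical per-county address submissions keyed by county FIPS code.
gssi_partner = {"08031": ["1600 Broadway, Denver, CO"],
                "08013": None}                # partner data failed checks
usps_dsf     = {"08031": ["(duplicate coverage)"],
                "08013": ["4750 Walnut St, Boulder, CO"]}
commercial   = {"56001": ["100 Grand Ave, Laramie, WY"]}

def addresses_for_county(fips: str) -> tuple[str, Optional[list]]:
    """Return (source used, addresses) following the precedence order."""
    for source_name, source in (("GSS-I partner", gssi_partner),
                                ("USPS DSF", usps_dsf),
                                ("commercial", commercial)):
        records = source.get(fips)
        if records:                  # present and passed quality checks
            return source_name, records
    return "none", None

for fips in ("08031", "08013", "56001"):
    print(fips, "->", addresses_for_county(fips)[0])
# 08031 -> GSS-I partner; 08013 -> USPS DSF; 56001 -> commercial
```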
The Bureau has also taken steps to work with commercial vendors to review potential sources of commercial data that the Bureau could use to augment or verify data collected through GSS-I. For example, in July 2013, the Bureau completed a market research project with one vendor to evaluate the prospects for using commercially available source data in the event a future need arose for supplemental data from areas of the country where local, state, or tribal governments lack spatial data to provide to the Bureau. During the project, the Bureau obtained information on quality assurance and quality control processes it can use to maintain accuracy of spatial data within the MAF/TIGER database. Additionally, in July 2014, the Census Bureau issued a request for information as part of market research to identify available sources for street centerline data and address data. The solicitation stated that the Bureau was engaging industry in order to identify potential vendors and their ability to provide spatially accurate road networks, addresses, and other geospatial data to supplement the information already available in the MAF/TIGER database. As of August 2014, Bureau officials were continuing to visit vendors, attend conferences with geospatial industry leaders to learn about their address and mapping technologies, and interact with Census Scientific Advisory Committee and National Academy of Sciences panels.

Officials at the Bureau explained the rationale for the data source decisions that have been made thus far and provided us with background documents and testimonial evidence about each of the decisions. However, they provided inconsistent support for the decisions themselves. Federal internal control standards and Office of Management and Budget guidance on geospatial data indicate that the Bureau should support its significant data source decisions—such as for its address and mapping needs—in terms of both cost and quality, as well as with clear management approval for the decisions. Further, these documents should be readily available for examination.

Cost considerations. According to leading practices for data sourcing, decisions should be based on consideration of cost. Such considerations could be documented in a variety of ways, including market research, minutes of meetings with relevant discussion, summaries of data cost research, transaction costs, or pricing schedules. The decisions on data sources the Bureau has made thus far—using local, tribal, and state governments; other federal agencies; and a commercial imagery server—involved acquiring free data. For each of those decisions there is no additional charge to the Bureau for obtaining data from the respective source, largely obviating the need to justify cost. For instance, the Bureau established a memorandum of understanding with state, local, and tribal governments participating in GSS-I, indicating that no funds will be exchanged between the Bureau and participants for sharing address and map data. Bureau officials told us that getting such updates at no charge is a key to reducing expensive broader canvassing costs later in the census cycle. The Bureau’s recent congressional budget justification documents echo this argument as well. Additionally, Bureau officials have stated that they are able to obtain data from other federal agencies at no cost and can rely on a vendor-provided imagery service that the Bureau already had access to under other paid licensing arrangements. No additional charges are associated with such access.
Moving forward, by improving its use of leading practices to determine the relative cost-effectiveness of using data sources in updating the MAF/TIGER database, the Bureau can better ensure sufficient consideration of costs. When deciding on each additional source, such consideration should extend to indirect costs, such as the incremental costs of data processing or quality assurance.

Quality considerations. According to leading practices for data sourcing, decisions should be based on consideration of data quality, including data accuracy, completeness, and timeliness. Such considerations can be documented by market research, minutes of meetings with relevant discussion, summaries of data quality research, relevant test or evaluation results, schedules of data updates and availability, reporting on quality measures, and evidence of successful historical use of the data. Thus far, the Bureau has partially supported consideration of quality across its data sourcing decisions. For example, its decision to rely on address and mapping data submitted from state, tribal, and local governments was based on expectations that many of the governments would be able to provide data of sufficient quality. Bureau officials stated they are relying on procedures to assess the content of submitted data files on a case-by-case basis. Bureau timelines indicate the quality reviews will likely extend over many years, and Bureau officials do not know for sure how many (or which) government sources will be good enough to use. For their decision to rely on USPS, USDA, and NGA for address and mapping data, Bureau officials provided evidence that they reviewed research on the quality of data sources from each of the agencies. For example, since the 2000 Census, the Bureau has successfully relied on USPS address data, and—leading up to the 2010 Census—on USDA aerial imagery as well. Over the years, several evaluations have helped document limitations of the USPS data in particular, so that the Bureau targets its use of them. In addition, the Bureau documented a review of how the quality of NGA imagery sources can help meet Bureau needs. The Bureau summarized detailed market research for several alternative sources of imagery from commercial vendors and other federal agencies related to the positional accuracy and current geographic coverage of the imagery data and the rate at which imagery data were updated. However, it is not clear how the Bureau used that information or whether the quality of the imagery data source exceeded quality standards (or had limitations) compared to others. Bureau officials stated that they selected the commercial imagery service during meetings assessing and comparing options, because it met their needs and its use was included in other licensing arrangements that had already been paid for. However, Bureau officials could provide no contemporaneous documentation of the results of those comparative assessments. By having a systematic process to consider the quality of the data that the Bureau relies on from various sources, the Bureau can help ensure that effective choices are being made and that possible limitations in data that might affect their use are better understood.
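A systematic quality review of the kind described above could, for example, score each submitted file against explicit completeness and timeliness thresholds. The sketch below is a hedged illustration: the required fields, the 95 percent completeness threshold, and the record layout are invented, although the 2-year road-centerline update window echoes a criterion mentioned later in this statement.

```python
# Hedged illustration of systematic quality review for a partner data
# file, along the accuracy/completeness/timeliness dimensions discussed
# above. The required fields, thresholds, and record layout are
# hypothetical stand-ins, not actual Census Bureau standards.
from datetime import date

REQUIRED_FIELDS = ("house_number", "street_name", "zip_code")  # assumption
MAX_AGE_YEARS = 2          # e.g., centerline data updated within 2 years
MIN_COMPLETE_SHARE = 0.95  # hypothetical completeness threshold

def review_partner_file(records: list, last_updated: date, as_of: date) -> dict:
    """Score a submitted file for completeness and timeliness."""
    complete = sum(all(r.get(f) for f in REQUIRED_FIELDS) for r in records)
    share_complete = complete / len(records) if records else 0.0
    age_years = (as_of - last_updated).days / 365.25
    return {
        "share_complete": round(share_complete, 3),
        "timely": age_years <= MAX_AGE_YEARS,
        "passes": (share_complete >= MIN_COMPLETE_SHARE
                   and age_years <= MAX_AGE_YEARS),
    }

sample = [{"house_number": "12", "street_name": "Main St", "zip_code": "20001"},
          {"house_number": "14", "street_name": "Main St", "zip_code": None}]
print(review_partner_file(sample, date(2014, 1, 15), as_of=date(2014, 8, 1)))
# Fails: only half the records carry every required field.
```

Recording results like these for every source, rather than assessing files ad hoc, would give the Bureau the contemporaneous documentation of comparative quality that GAO found lacking.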
Documenting decisions. Internal control standards require documentation of significant management decisions. Such documentation can take the form of decision memorandums, memorandums of understanding, meeting minutes indicating acceptance of data source recommendations, or other evidence of senior management approval for deciding upon a specific data source. We found that at the time decisions were made, the Bureau did not document management approval in support of sourcing decisions for any of the data sources discussed above. Bureau officials provided extensive documentation on GSS-I, which indicated that governments across the country would be playing a major role as partners with the Bureau by providing data; however, there was no formal record documenting (as an accountability check) that senior Bureau leadership had agreed that the net benefits of these sources are greater than those of alternative data sources. Regarding the decision to rely on several federal agencies for various types of geospatial data, the Bureau similarly lacks a formal record establishing the Bureau’s approval of these sources as official inputs to the 2020 Census. The extensive public interactions documented between Bureau and other agency officials leave little doubt that the decision to rely on data from these other federal agencies is known and acceptable to senior Bureau officials, yet evidentiary support for the Bureau’s decision to rely on some agencies and not other sources is absent. For the decision to rely on a commercial imagery source, Bureau officials told us that the chief of the Geography Division approved the use of commercial imagery server software (to host freely available imagery acquired from federal, state, and local sources) during a January 2012 meeting, although no record of the meeting was produced and there was no separate documentation of the decision.

Developing evidence of management approval for data sourcing decisions can help ensure that as the Bureau moves forward, stakeholders—such as Congress and commercial vendors—have greater transparency regarding the data sources being considered to meet the Bureau’s key address and mapping needs, how decisions are being made, who made them, and on what basis. Such evidence would enhance the accountability of senior Bureau officials for decisions at the time those decisions are made.

Cost and quality are two key traits linked to the goals and objectives of the Bureau’s agency-wide strategic plan and feature prominently in other broad strategic documents for the 2020 Census. Yet we found the Bureau does not have guidance outlining the need or the process for ensuring (1) systematic justification for decisions related to using specific data sources in terms of cost, quality, or other important considerations, and (2) documentation of management approval for such decisions. Although the Bureau’s initial data sourcing decisions involved acquiring data at little cost to the Bureau, according to Bureau officials, they have much more to resolve about meeting their address and mapping data needs, with future decisions potentially involving other stakeholders and significant cost. By developing more rigorous support and evidence of such decisions, the Bureau can ensure transparency to Congress, commercial vendors, and other stakeholders. In turn, these efforts could lead to increased stakeholder support for the Bureau’s plans for the 2020 Census and could enable the Bureau to consistently support why the data sources it selected are better than alternatives.
Rigor in decision making can also help reduce the Bureau’s risk of failing to select the most cost-effective, accurate, complete, and timely address and mapping data for updating the MAF/TIGER database.

The Bureau’s efforts to design a cost-effective enumeration (starting with complete and accurate address and mapping data) present a significant project management challenge, one that demands meticulous planning. However, we found that the Bureau’s approach to meeting its address and mapping needs is missing key elements that comprise a rigorous, integrated plan, including clear and measurable goals, decision milestones at a level where decisions on GSS-I data sources might be tracked, and performance data that management could use to track progress. Without consolidating these elements in a plan that lays out how Bureau officials should monitor progress, it will be difficult for Bureau management (and others) to know that the Bureau is on track to meet its address and mapping needs; it will also be difficult to pinpoint improvement opportunities.

To identify key elements of effective project management, we reviewed a number of guides for project management and business process reengineering. Although there is no one best approach to project planning, we found that the guides contained many elements in common, including the following: the project plan should consider all phases of the project and should have clear and measurable goals, or targets; schedules, milestones, and deadlines should be clearly stated; and performance data should be gathered and reported to determine and monitor progress toward goals. Additionally, OMB Circular A-11 specifies that an agency’s general goals should be sufficiently precise to direct and guide agency staff in actions that carry out the agency’s mission and aid the agency in developing annual performance goals. Without these elements of effective project management, the Bureau cannot ensure it will make informed decisions about its data needs in a timely manner.

Measurable goals. As discussed earlier, to help address its data needs, the Bureau decided to rely on state, local, and tribal government address and geospatial data that met its quality standards. However, Bureau officials have not set measurable goals for that effort. For example, the chief of the Geography Division estimated that these government sources may be able to provide data coverage for about two-thirds of the over 3,200 counties in the country. However, Bureau officials could not provide any detailed support for that estimate, and said that they recognize the need for better information in order to make decisions on these and remaining needs, and are working to gather it. Bureau officials acknowledge that they need to determine and report more precisely the extent of the coverage gaps in addresses and mapping data obtained from reliable government sources. Knowing what percentage of the country—and perhaps, what different regions and different types of geographies, such as rural and urban—may be covered by sufficiently reliable sources would help the Bureau (and others) better know both the nature and magnitude of the remaining data gaps. Whether in terms of estimated numbers of addresses, numbers of housing units, or in terms of area, a measurable goal—such as the amount of address and mapping data expected to be sufficiently reliable—could help inform estimates of the level of effort needed to achieve goals: in turn, this can help inform decisions about resource allocations and other elements of project planning. Measurable goals can also help improve communication with other stakeholders who might be able to help the Bureau fill the data gap.
Bureau officials acknowledge that they need to determine and report more precisely the extent of the coverage gaps in addresses and mapping data obtained from reliable government sources. Similarly, the Bureau’s strategic plan for GSS-I includes high-level goals. However, the plan, its related management plan, and other planning documents lack the needed specificity of clear and measurable goals or targets representing the address and mapping results GSS-I needs to achieve. Such goals or targets can help Bureau staff ensure they are effectively taking steps toward meeting address and mapping needs and might help in the identification of helpful sources. For example, the Bureau’s GSS-I Strategic Plan has as one of its goals that the Geography Division be efficient, effective, and adaptable, with a strategic objective of efficient and effective source data acquisition. The plan goes on to list potentially useful strategies, such as expanding the use of available imagery for source data assessments and embracing the most efficient data acquisition methods using viable technologies. However, the GSS-I planning documents do not describe measurable goals or targets (such as the estimated numbers of addresses) expected or needed from state, local, and tribal governments under GSS-I for the efforts to be successful or to have met stated needs, goals, or related objectives. In addition, the documents do not identify what coverage of the housing in any given area is sufficient to meet the Bureau’s needs.

The Bureau’s GSS-I Strategic Plan includes one example of the type of detailed, measurable goal that could better guide planning and decision making. That data need, or target, is described as “maintain a standard of 7.6 meters or better positional accuracy for existing streets and newly acquired streets.” However, this is the only instance of such detail in the document. Bureau documents elsewhere provide examples of what could serve as measurable goals, if they were formally stated as quality standards required of all potential sources of address data. For example, a description of the Bureau’s process for reviewing government partner data files describes assessment of, among other characteristics, whether addresses include each of several different required elements and whether road centerline data are updated within 2 years; however, it does not describe measures of how complete the data must be to meet the Bureau’s needs. A detailed integrated plan that includes complete descriptions of what the Bureau deems sufficient quality data for its address and mapping needs can help focus the efforts of the Bureau and its private and public partners on meeting those needs.
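Quality standards of this kind lend themselves to automated, repeatable checks, and each standard then doubles as a measurable goal against which partner files can be scored. The following is a minimal sketch, in Python, of how such checks might be expressed. The record field names, the list of required address elements, and the summary rates are illustrative assumptions; only the 7.6-meter positional accuracy standard and the 2-year road centerline currency criterion are taken from the Bureau documents discussed above.

```python
# Hypothetical checks against a partner address file. Field names and the
# required-elements list are illustrative assumptions; the 7.6 m accuracy
# standard and 2-year centerline currency criterion come from Bureau documents.
from datetime import date

REQUIRED_ADDRESS_ELEMENTS = ("house_number", "street_name", "city", "state", "zip_code")
MAX_POSITIONAL_ERROR_M = 7.6       # "7.6 meters or better positional accuracy"
MAX_CENTERLINE_AGE_DAYS = 2 * 365  # road centerlines updated within 2 years

def address_is_complete(record):
    """True only if every required address element is present and nonempty."""
    return all(record.get(element) for element in REQUIRED_ADDRESS_ELEMENTS)

def assess_partner_file(addresses, centerlines, as_of):
    """Score a file so reviewers see pass rates, not just raw record counts."""
    complete = sum(address_is_complete(a) for a in addresses)
    current = sum((as_of - c["last_updated"]).days <= MAX_CENTERLINE_AGE_DAYS
                  for c in centerlines)
    accurate = sum(c["positional_error_m"] <= MAX_POSITIONAL_ERROR_M
                   for c in centerlines)
    return {
        "address_completeness": complete / len(addresses) if addresses else 0.0,
        "centerline_currency": current / len(centerlines) if centerlines else 0.0,
        "positional_accuracy": accurate / len(centerlines) if centerlines else 0.0,
    }

# Example: one complete address, one stale but positionally accurate centerline.
print(assess_partner_file(
    [{"house_number": "12", "street_name": "Main St", "city": "Springfield",
      "state": "VA", "zip_code": "22150"}],
    [{"last_updated": date(2011, 6, 1), "positional_error_m": 3.2}],
    as_of=date(2014, 6, 1),
))
# {'address_completeness': 1.0, 'centerline_currency': 0.0, 'positional_accuracy': 1.0}
```

Stated as pass rates, the standards also make gaps visible: a file can be complete in its address elements yet fail the currency criterion, as in the example above.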
Decision milestones. Bureau officials provided a high-level master activity schedule through the 2020 Census that includes some decision points, such as go/no-go decisions on whether the Bureau will reengineer its address canvassing operation (limiting it to only targeted areas) and the deadline for creating and finalizing the address canvassing workload. Additionally, officials provided a version with much more detail through September 2015, when the Bureau plans to announce its preliminary design decisions. However, Bureau staff indicated that they have not yet refined the schedule at the lower level beyond September 2015, such as at a level where decisions about GSS-I data sources might be tracked. Bureau officials also provided a schedule that describes in detail the implementation of a workflow management system used to track the handling of address and other files received from partners, but the system was implemented in 2013, and the schedule does not include any future milestones.

Whether immediately included in the master activity schedule or not, without schedules moving toward near-term goals and decisions included in a more rigorous plan, the Bureau and Congress cannot determine whether the Bureau is on track to meet its key address and mapping needs for the 2020 Census. Specifically, while work is under way in these areas, we found that the Bureau does not have milestones or deadlines for remaining decisions in the following areas: to what extent state, tribal, and local governments can be relied on as acceptably reliable data sources; how the Bureau will obtain address, boundary, and feature data for areas not covered by state, tribal, and local governments; and how good its various data sources are, or need to be, to be acceptably reliable—that is, whether address data obtained from the sources meet minimum standards for timeliness, accuracy, and completeness (sufficient coverage of the nation’s housing).

Bureau officials indicated that they do not yet have milestones such as these because, among other reasons, those decisions need to be informed by the 2014 Census Test, occurring in summer 2014, and the results of other tests in 2015. While this creates some uncertainty regarding what the future milestone dates might need to be, without at least preliminary milestones pending the results of census tests, the Bureau may not have sufficient time left to complete remaining activities, especially given that some of the activities may require long lead times for completion (see figure 3). For example, it may take significant time to research and select which data sources are needed, depending on how the Bureau decides to address its remaining needs. The Bureau has separately set a deadline of December 31, 2016, for finishing the development and awarding of major contracts for systems that will support the 2020 Census, recognizing the lead time as necessary. Furthermore, the Bureau’s 2020 Lifecycle Risk Register states that, “many of the decisions on the final 2020 Census design may not be finalized until late in the research time frame, which could include decisions to outsource some of the development effort. However, acquisitions require established lead times that include set processes and review milestones, both at the agency and at the department level. If 2020 Census design decision milestones do not allow the requisite lead times for acquisition processes and reviews, then the Census Bureau may not be able to procure the necessary products and services in sufficient time to align with the 2020 Census development life cycle.” The Bureau identified this risk and rated it as medium on its scale of low to high, underscoring that it values managing to a timeline of key milestones and deadlines.

Monitoring progress. Bureau officials could not provide us with the performance reports they are using to monitor progress toward ensuring that key 2020 address and mapping needs will be met. However, in August 2014, Bureau officials provided us with copies of presentations made to external groups reporting on progress made in reviewing data from state, tribal, and local government partners under GSS-I.
For example, one presentation indicated that as of February 17, 2014, as part of the GSS-I effort, the Bureau had contacted 375 partners, 247 of which had provided files. A Bureau report provided to the Office of Management and Budget on August 4, 2014, justifies resource requests for GSS-I and contains performance metrics, but the metrics on address data from state, local, and tribal governments are expressed in numbers of partner files acquired and processed in 2013 and 2014. No context is given, such as how many potential partners there are, how many governments the Bureau anticipates will participate or needs to participate, what level of participation the Bureau seeks to obtain year by year, or how complete the coverage of addresses needs to be. Without such context, these numbers do not provide complete information on the extent to which the Bureau is making progress (a simple arithmetic illustration appears at the end of this discussion). Furthermore, the metrics the Bureau provided do not address the extent to which the MAF/TIGER data are being updated or improved, such as in numbers of addresses, housing unit structures, linear miles of roads, or geographic area. If fewer state, tribal, and local partners than expected (or needed) are agreeing to participate, if less data are being updated than expected, or if the resulting database is not complete enough, then management needs to know in real time so that it can prioritize efforts to access other data sources that may be needed to meet its key address and mapping needs. By developing more accurate and timely documentation of progress on obtaining or updating address and mapping data, the Bureau can also better illustrate to stakeholders that it is effectively managing its data source decisions.

Our previous work has found that the Bureau has inconsistently followed key planning practices; for example, in 2012, we found that the Bureau’s high-level schedule for the 2020 Census did not include milestones or deadlines for key decisions needed to support the transition between the planning phases for 2020. While the Bureau has taken some positive steps, such as preparing a series of planning documents that provide high-level examples of measurable goals, schedules, and deadlines, it has not put in place all elements integral to large, complex projects. The absence of detailed goals, schedules, deadlines, metrics, or data on monitoring progress toward outcomes, and the absence of a detailed integrated plan that incorporates these elements, mean any limitations of the GSS-I strategy may not be fully known or apparent until late in the decade. Without these elements, it will be difficult for the Bureau to ensure that it evaluates the costs and benefits of alternative data sources and measures and reports the Bureau’s progress. It will also be difficult to hold managers accountable for results. Additionally, the Bureau may lack the information necessary to make its remaining 2020 address and mapping decisions. As a result, the Bureau is at risk of experiencing increased costs to obtain data for remaining gaps in address and mapping data. Bureau efforts to update its MAF/TIGER database with accurate, complete, and timely address and mapping data are critical for carrying out a precise population count in a fiscally constrained environment.
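To make concrete the kind of context missing from the partner-file metrics above, the following minimal sketch pairs the reported counts with a denominator and a target. The 375 partners contacted and 247 providing files come from the Bureau presentation cited above; the total number of potential partners and the target participation rate are illustrative assumptions, not Bureau figures.

```python
# Counts reported in the February 2014 presentation.
partners_contacted = 375
partners_providing_files = 247

# The missing context -- these denominators and targets are illustrative
# assumptions for this sketch, not Bureau figures.
total_potential_partners = 3200    # e.g., roughly one partner per county
target_participation_rate = 0.66   # e.g., the two-thirds coverage estimate

response_rate = partners_providing_files / partners_contacted
participation_rate = partners_providing_files / total_potential_partners
progress_toward_target = participation_rate / target_participation_rate

print(f"Response rate among contacted partners: {response_rate:.1%}")             # ~65.9%
print(f"Participation across all potential partners: {participation_rate:.1%}")   # ~7.7%
print(f"Progress toward the participation target: {progress_toward_target:.1%}")  # ~11.7%
```

The same raw count of 247 files reads very differently once a denominator and target are attached: a 65.9 percent response rate among contacted partners, but only about 12 percent of the way toward an assumed two-thirds participation target.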
The Bureau has taken initial steps to plan and implement data collection efforts for meeting its key address and mapping needs, such as developing planning documents for its GSS-I program and identifying a series of potential data sources in coordination with internal and external stakeholders. Thus far, these efforts have enabled the Bureau to consider an array of government and commercial sources for updating its address and mapping data through GSS-I as it has made its initial data source decisions. Continued Bureau efforts to prepare for remaining data source decisions would benefit from more systematic efforts to document cost and quality considerations for government and commercial sources. While the Bureau has decided thus far to use data sources that have little or no incremental acquisition cost, enhanced cost and quality assessments of alternative data sources considered for future decisions will position the Bureau to demonstrate transparency in its decision making to Congress, to potential commercial vendors, and to other stakeholders. Implementing processes for supporting data source decisions that meet key address and mapping needs—particularly for assessments of cost and quality—will reduce the risk that the Bureau selects data sources that are not cost-effective or high quality. In addition, when decisions are being made about how to meet key address and mapping needs, it is also important that the Bureau document these decisions, such as through the use of decision memorandums or minutes of meetings where decisions are made. Implementing a process that ensures management approval is documented for key decisions on data sources in the future will help demonstrate accountability for those decisions.

As the Bureau moves forward in preparing and refining its design approaches for the 2020 Census, it can take additional steps to improve planning for data sourcing decisions. Currently, the Bureau does not have a detailed integrated plan that incorporates measurable goals. Such a plan would help ensure that the Bureau is collecting sufficient address and mapping data from its private and public partners. The Bureau has also not set a timeline for making remaining data source decisions; setting one would reduce the risk of not having enough time to adequately evaluate alternative data sources that, if needed, could reduce cost or increase quality. Additionally, the Bureau has not established a process to monitor and report on progress in the management of GSS-I; such a process would enable it to better identify gaps in its data collection efforts and to ensure and track the actions needed to fill such gaps in time for the 2020 Census.

To help ensure that the Bureau more rigorously considers data sources and remains on schedule to meet its address and mapping needs, the Secretary of Commerce and the Under Secretary of Economic Affairs should direct the Census Bureau to take the following three actions: In order to ensure transparency of future decision making, implement a process for documenting the support for data source decisions intended to meet key address and mapping needs and the support for assessing the cost and quality of data sources the Bureau is considering. In order to ensure accountability for key decisions moving forward, implement a process for documenting management approval of key address and mapping data source decisions, such as through decision memorandums or minutes of meetings where decisions occurred.
In order to better ensure the Bureau meets its address and mapping needs for 2020 and stays on schedule, develop a detailed integrated plan that includes items such as measurable goals (e.g., estimated numbers of addresses expected or needed from state, local, and tribal governments under GSS-I), schedules and deadlines, and progress monitoring and reporting, and establish a timeline identifying when remaining data source decisions need to be made.

We provided a draft of this report to the Department of Commerce and received the department’s written comments on September 23, 2014. The comments are reprinted in appendix II. The Department of Commerce generally agreed with our findings and recommendations and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Commerce, the Under Secretary of Economic Affairs, the Director of the U.S. Census Bureau, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff that made major contributions to this report are listed in appendix III.

This report reviewed (1) the extent to which the Bureau is considering non-Bureau sources of data to meet its key address and mapping needs for the 2020 Census, and (2) the status of the Bureau’s plans for meeting those needs, paying particular attention to leading practices for project management. To categorize Bureau-identified address and mapping needs, we obtained Bureau documentation of such needs. Because address and mapping needs vary in their definition and level of specificity according to their different uses and purposes, they may remain dynamic until the Bureau makes sourcing decisions to meet them, and some needs were not yet fully identified at the time of our review. To address this issue, the Bureau provided revised lists of key address and mapping needs during the course of our review. For this report, we include key address and mapping needs identified as of June 2014. To review the Bureau’s approaches for developing its address and mapping needs for 2020 Census operations and preparing for data sourcing decisions, we compared the Bureau’s organizational documents for its approaches (such as strategic, program management, and operational plans) to elements of project planning we identified from industry guides for project management and business process reengineering, as well as other leading management practices we identified in our prior work on establishing a coherent agency mission and integrated strategic goals and on adopting leading practices for results-oriented strategic planning and reporting. To identify data sources the Bureau considered for meeting key address and mapping needs, we reviewed documents provided by the Bureau for specific data sources used to obtain data for addresses, coordinates of residential structures, and other map features, and specific data sources used to perform change detection related to identifying new roads and structures for updating the MAF/TIGER database.
We did not seek to identify an exhaustive list of data sources; rather, we identified those readily attributable to documents (1) that the Bureau created to analyze potential data sources or to explain its consideration of such sources, or (2) that the Bureau solicited from commercial vendors and other sources, even if such documents were more promotional than informative with respect to the Bureau’s needs. We did not consider unsolicited documents sent to the Bureau from commercial vendors proposing the use or adoption of particular address or mapping data sources or solutions. To review the extent to which the Bureau supported its decisions to use data sources to meet its key address and mapping needs, we obtained a Bureau-provided list of data sources it selected, along with documentary and testimonial evidence the Bureau identified as justifying its decision for each source. We also reviewed our prior work on internal controls and Office of Management and Budget guidance on geospatial data, and we determined that the Bureau should be supporting its significant data source decisions—such as those for its address and mapping needs—in terms of cost, quality, and clear management approval of decisions. For both objectives, we interviewed Bureau officials in the Geography Division, 2020 Research and Planning Office, and Decennial Systems and Contracts Management Office to discuss planning and decision-making efforts for meeting key address and mapping needs for 2020 Census operations. We conducted this performance audit from February 2014 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Key contributors to this report include Ty Mitchell, Assistant Director; Tom Beall; Rob Gebhart; Andrea Levine; Mark Ryan; and Timothy Wexler.
A complete and accurate address list is a key building block of a successful census, but developing such a list is costly and labor intensive. For the 2020 Census, the Bureau is reexamining approaches to control cost and maintain accuracy, including approaches to meet its address and mapping needs. GAO was asked to examine potential private sector roles in 2020 Census address list and map development. GAO (1) evaluated the extent to which the Bureau is considering non-Bureau data source opportunities to meet such 2020 needs, and (2) reviewed the status of the Bureau's plans for meeting its key 2020 address and mapping needs. GAO compared Bureau documentation to leading practices for planning, management, and scheduling from industry guides for project management, reviewed relevant documentation, and interviewed Bureau officials familiar with decennial census needs and data source decisions. The U.S. Census Bureau (Bureau) is working with stakeholders to identify various data sources to meet its address and mapping needs. For example, the Bureau has worked with state, local, and tribal governments and with commercial vendors to identify potential data sources to augment or verify data collected through its Geographic Support System Initiative (GSS-I) program. GSS-I allows government agencies at all levels to regularly share and continuously update their address lists and road data with the Bureau. Federal internal control standards and Office of Management and Budget guidance on geospatial data suggest that the Bureau should support significant data source decisions in terms of both data cost and quality. However, the Bureau has inconsistently documented cost and quality support for decisions already made to use address and mapping data from state, tribal, and local governments, other federal agencies, and a commercial vendor. Without a systematic consideration of the quality of the variously sourced data that the Bureau plans to rely on, it cannot ensure that effective choices are being made and that possible data limitations that might affect their use are fully understood. Further, the Bureau did not document management approval in support of its data source decisions at the time that the decisions were made; without such documentation, the Bureau lacks accountability and transparency for future sourcing decisions. The Bureau does not have guidance clearly outlining the need or process for ensuring consideration of cost and quality—primary concerns of the Bureau's reexamination—or documentation of management approval for those data sources selected. By implementing a process for documenting such steps, the Bureau can ensure that data source decisions are transparent to Congress, commercial vendors, and other stakeholders. The Bureau's approach for meeting its address and mapping needs lacks key elements of effective project management outlined in guidance GAO reviewed. Specifically, while the Bureau prepared planning documents to guide GSS-I, it did not include clear and measurable performance goals to help it effectively meet its address and mapping needs; milestones detailed at a level where decisions on GSS-I data sources might be tracked; and performance measures, data, and reporting to help guide planning and track progress toward filling gaps in the Bureau's data needs. 
While the Bureau has taken some positive steps—such as preparing a series of planning documents that provide high-level examples of measurable goals, schedules, and deadlines—the absence of detailed goals, schedules, deadlines, metrics, or data on monitoring progress toward outcomes, as well as the absence of a detailed integrated plan that incorporates these elements, means any limitations of the GSS-I strategy may not be fully known or apparent until late in the decade. Without these elements, it will be difficult for the Bureau to ensure that it is adequately evaluating the costs and benefits of alternative data sources, measuring and reporting its progress, or holding managers accountable for results. GAO recommends the Bureau implement processes for reviewing the cost and quality of data source selections and for documenting support for those decisions; document management approval of key data source decisions; and—for remaining data source decisions—develop a detailed plan with measurable goals, track performance against these goals, and set a timeline. The Department of Commerce generally agreed with GAO's findings and recommendations.
Commercial space transportation is carried out using launch vehicles operated by private companies. In February 1984, Executive Order 12465 designated the Department of Transportation (DOT) as the lead federal agency for enabling private-sector launch capabilities. In October 1984, the Commercial Space Launch Act (CSLA) gave DOT the authority to, among other things, license and monitor the safety of commercial space launches and promote the commercial space industry. Regulatory oversight of the commercial sector was delegated to the Office of Commercial Space Transportation, within FAA, whose primary means of authorizing space launch activities is its licensing process. Specifically, FAA’s Office of Commercial Space Transportation is responsible for licensing launch and reentry vehicles and spaceport operations carried out by U.S. citizens or within the United States, except for operations carried out exclusively by and for the federal government. In informal guidance, FAA defines commercial space launches as those that are licensed by FAA, among other characteristics.

During an FAA-licensed launch, several key parties are involved: The spaceport operator is the entity that hosts the launch (or reentry, or both) of the launch vehicle from its launch site. Almost all spaceport operators currently licensed by FAA are state or municipal government entities. The launch company is the entity that conducts the launch of a vehicle and any payload, such as a satellite, a probe, or a spacecraft carrying humans or cargo. The customer is the entity that pays the launch company to carry a payload into space.

CSLA and its subsequent amendments require launch companies and spaceport operators to obtain licenses. To obtain a launch or reentry license, a launch company must meet safety and financial responsibility requirements, among other things. FAA finalized its regulations related to financial responsibility and allocation of risk requirements for launch companies for licensed launches in August 1998. Similarly, a spaceport operator must also meet safety requirements to receive an FAA launch site license. FAA’s regulations related to spaceport licensing, which were finalized in October 2000, require spaceport operators to demonstrate the level of safety of the spaceport, including information on trajectory, debris dispersion area, and flight corridor and, if necessary, a risk analysis for populated areas. In the following year, FAA issued its first spaceport operator license under these regulations. Although commercial launch activity has traditionally taken place at federal ranges, as of July 2016 there were 10 FAA-licensed spaceports to support private-sector involvement in space-related activity. Three of the 10 were colocated with federal ranges at an Air Force base or a NASA facility: California Spaceport at Vandenberg Air Force Base, Cape Canaveral Spaceport at Cape Canaveral Air Force Station, and MARS at NASA’s Wallops Flight Facility (see table 1). FAA licenses the operation of spaceports for vertical takeoffs or landings, horizontal takeoffs or landings, or both. Four sites are dedicated to vertical launches, four conduct horizontal launches only, and two can host both types of operations. Figure 1 illustrates examples of vertical and horizontal spaceports. In addition to these sites, there are three private spaceports where individual companies may conduct FAA-licensed or permitted launches.
Because the companies own and operate these sites using their own vehicles exclusively, a launch site license is not required. Also, as of July 2016, FAA had conducted pre-application consultations for seven additional sites. Figure 2 illustrates the location of existing and proposed commercial spaceports as of July 2016.

CSLAA created a three-tiered approach for sharing liability between the federal government and the private sector for damages to third parties—known as the indemnification regime—to encourage the development of the U.S. commercial space launch industry and promote a competitive environment (see fig. 3). All FAA-licensed commercial space launches and reentries by U.S. companies, whether unmanned or manned and whether from the United States or overseas, are covered by the indemnification regime for third-party damage that results from launch or reentry. Third parties include persons that are not involved in launch or reentry services—those other than the federal government, the launch company, contractors and subcontractors of the federal government or the launch company, and customers of the launch company. The regime covers third-party claims arising from the time the launch vehicle arrives at the spaceport to the end of a launch.

The first tier of coverage is an insurance policy the launch company is required to purchase for an individual launch or set of launches. As part of FAA’s process for issuing a license for a commercial space launch or reentry, the agency determines the amount of insurance a launch company is required to purchase so the launch company can compensate third parties for claims and the federal government for any damage to its property that occurs as a result of activities carried out under the license. FAA calculates the insurance amount to reflect the maximum probable loss that is likely to occur because of a mishap that results in (1) third-party damage, including deaths and injuries on the ground and damage to property caused by anything that resulted from a launch or reentry, and (2) damage to government property. The liability insurance obtained by the launch company also protects its customer(s) (i.e., the entity(ies) paying the launch company to bring a payload into space), the federal government, and their respective contractors and subcontractors from claims by a third party. Launch companies must purchase coverage to meet FAA’s maximum probable loss amount, up to the maximum amount of coverage available in the world market at a reasonable cost, as determined by FAA. This first tier of required insurance coverage is capped at a maximum of $500 million for third-party damages. Additionally, the required insurance coverage is capped at a maximum of $100 million for any potential damage to government property.

The second tier of coverage—which is adjusted for inflation and is capped at $3.06 billion in fiscal year 2016 dollars—may be provided by the U.S. government and covers any third-party claims in excess of the specific first-tier amount. For the federal government to be able to make payments for these claims, Congress would need to appropriate funds. The second tier of coverage has never been invoked because, to date, no mishaps have resulted in third-party claims in excess of the first tier. The third tier of coverage is for third-party claims in excess of the second tier. Like the first tier, this third tier is the responsibility of the launch company, which may seek insurance above the required first-tier amount for this coverage. Unlike the first tier, no insurance for this third tier is required under federal law.
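The tier structure described above reduces to a simple allocation rule, illustrated in the following minimal sketch. The caps reflect the figures in the text ($500 million for required first-tier coverage and $3.06 billion, in fiscal year 2016 dollars, for the second tier); the maximum probable loss amount and the claim size in the example are hypothetical inputs. The sketch omits real-world complications, such as the separate $100 million cap for damage to government property and the requirement that Congress appropriate funds before any second-tier payment could be made.

```python
# Illustrative allocation of a hypothetical third-party claim across the
# three tiers of the indemnification regime; not an FAA methodology.
TIER1_STATUTORY_CAP = 500_000_000   # cap on required first-tier insurance
TIER2_CEILING = 3_060_000_000       # cap on government share (FY2016 dollars)

def allocate_claim(claim, max_probable_loss):
    """Split a third-party claim among the three tiers described in the text."""
    # Tier 1: the launch company's required insurance, set by FAA at the
    # maximum probable loss and never exceeding the statutory cap.
    tier1 = min(claim, max_probable_loss, TIER1_STATUTORY_CAP)
    # Tier 2: the government may cover the excess, up to the ceiling,
    # subject to congressional appropriations.
    tier2 = min(claim - tier1, TIER2_CEILING)
    # Tier 3: anything beyond that falls back on the launch company.
    tier3 = claim - tier1 - tier2
    return {"tier1_insurance": tier1, "tier2_government": tier2, "tier3_company": tier3}

# Example: a $1.2 billion claim where FAA set required insurance at $200 million.
print(allocate_claim(1_200_000_000, 200_000_000))
# {'tier1_insurance': 200000000, 'tier2_government': 1000000000, 'tier3_company': 0}
```

In the example, the launch company’s required insurance absorbs the first $200 million, the remaining $1 billion falls within the second tier’s ceiling, and nothing reaches the third tier.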
Another component of the U.S. indemnification regime for commercial space launches and reentries is cross waivers. A cross waiver provides that each party involved in a launch agrees not to bring claims against the other parties and assumes financial responsibility for damage to its own property or loss or injury sustained by its own employees. FAA officials we interviewed said that the launch company, its customer(s), and FAA (on behalf of the federal government) must sign each cross waiver, and FAA verifies these parties’ signatures as part of the licensing process. The waivers extend to the parties’ respective contractors and subcontractors, who must sign them as well.

FAA can also issue permits—rather than licenses—for certain launch activities, such as launch or reentry of a reusable suborbital rocket. Launch companies operating under an FAA-issued permit must purchase insurance under the first tier of the indemnification regime, but they do not gain coverage under the second tier; permitted activities are therefore excluded from federal indemnification. As with launch companies operating under launch licenses, launch companies operating under permits are not required to purchase insurance under the third tier of the indemnification regime. Launch companies that receive permits for launch activities, rather than licenses, are also required to sign cross waivers as part of the permitting process.

According to FAA, no FAA-licensed commercial space launch since CSLAA was enacted has resulted in casualties or substantial property damage to third parties that exceeded the amount of insurance coverage FAA required the launch company to provide. According to FAA officials, in the event of a third-party claim that exceeded the launch provider’s first-tier coverage, FAA would be involved in any negotiation of the federal government’s potential payment, and the Secretary of Transportation would have to approve any settlement to be paid out of congressional appropriations.

Federal statute does not give FAA the authority to require spaceport operators to obtain insurance, but spaceports colocated with federal ranges may be required under federal contractual agreements to insure their property against damage resulting from space launch mishaps. Spaceport insurance coverage varies among the spaceport operators we interviewed, and the stakeholders we spoke with had differing views on the affordability of insurance. Several spaceport operators we interviewed found the financial responsibility regulations for commercial space launches confusing, which could potentially result in their failure to obtain adequate insurance protection. Unlike launch companies, spaceport operators face no federal statutory or regulatory insurance requirements. Specifically, FAA’s spaceport licensing process includes safety requirements for spaceport operators but does not include any insurance requirements. FAA officials stated that their focus is on ensuring public safety and that spaceport operators are responsible for protecting their own property against the risks associated with spaceflight.
Operators of spaceports that are located on federal government properties, however, could have federal contracts or agreements that require them to have insurance to protect their own property from damage resulting from space launch mishaps. For example, through Space Act Agreements, which NASA signs with other organizations to formalize partnerships that help NASA achieve its mission, NASA has imposed insurance requirements on an FAA-licensed spaceport that is located on NASA property. As of July 2016, FAA had licensed just one spaceport colocated with a NASA facility—MARS at NASA’s Wallops Flight Facility. A Space Act Agreement between NASA and VCSFA, the spaceport operator at MARS, requires, among other things, that VCSFA certify that sufficient insurance is in place to cover MARS property from damage resulting from space launch mishaps. Similarly, the Air Force could also require spaceport operators to hold insurance to protect their own property from damage resulting from space launch mishaps, but the Air Force does not always do so. For example, a Space Operations Support Agreement between the Air Force and Space Florida for Cape Canaveral Spaceport sets some insurance requirements for Space Florida, although the agreement does not explicitly mention insurance to cover Space Florida property against damage resulting from space launch mishaps.

Three of the 10 spaceports FAA has licensed for commercial activity—MARS, Mojave Air and Space Port, and Spaceport America—have had commercial activity in the last 5 years, and their insurance coverage varies. Even though FAA does not require spaceport operators to hold insurance, representatives of these three commercially active spaceports told us that they have both property and liability insurance coverage to protect themselves from losses resulting from space launch mishaps. Operators of the three commercially active spaceports said they have property insurance coverage either through their contracts with launch companies or through their state government. Operators of two of the three commercially active spaceports said they receive property insurance coverage through the launch companies that operate from their property. Specifically, according to stakeholders, contracts between spaceport operators and launch companies may include provisions requiring the launch company to include the spaceport operator as an additional insured during launch activities. The operator of the other commercially active spaceport said its state provides property coverage for damage resulting from space launch mishaps.

Operators of the three commercially active spaceports receive liability insurance coverage through their status as launch companies’ contractors and, additionally, through their state if coverage is available. FAA officials said that when a spaceport operator hosts commercial activity at its spaceport, the spaceport operator is considered a contractor of the launch company, which is one kind of “involved party” to a launch. Furthermore, the financial responsibility regulations for commercial space launches state that a launch company must include its contractors and subcontractors as additional insureds on the insurance policy or policies purchased to comply with the insurance requirements FAA sets for launch companies.
This means that when a spaceport operator hosts commercial activity at its spaceport, it receives liability coverage under the insurance policy the launch company must purchase to comply with the insurance requirements FAA sets for launch companies. In addition to the liability insurance coverage spaceport operators receive through their status as launch companies’ contractors, operators of two of the three commercially active spaceports said their states provide some degree of liability coverage for damage resulting from space launch mishaps. This coverage may, however, be limited. For example, one commercially active spaceport operator said that it participates in its state’s minor liability program with a cap of $1 million per event, which is small compared to the $500 million cap FAA sets for the liability insurance launch companies must purchase.

Operators of the seven spaceports that have not had commercial activity in the last 5 years have not recently had to obtain insurance to protect their property from damage resulting from launch mishaps. Several of these spaceport operators said they plan to evaluate options for insuring their property against launch-related damage if they host commercial space launches in the future. Operators of three of these spaceports—Cecil Field Spaceport, Midland International Airport, and Ellington Airport—also said their sites are still under development, another reason they have not had to obtain such insurance. While these seven spaceports have not had to obtain insurance coverage to protect their property from damage resulting from space launch mishaps, several told us that they have independently purchased property and liability insurance to cover damage resulting from day-to-day operations.

While operators of the three commercially active spaceports were able to obtain or receive property and liability insurance coverage, five of the nine spaceport operators we interviewed—including operators of two of the three commercially active spaceports—reported encountering difficulties in obtaining these kinds of insurance for commercial space launches or expressed concerns about their affordability. For example, representatives from one of the three commercially active spaceports explained that when they tried to purchase property insurance to protect the infrastructure at their spaceport from damage resulting from space launch mishaps, insurance providers either declined to provide quotations, provided quotations that exceeded or approached the site’s launch fees, or included substantial deductibles. As a result, this spaceport operator, in negotiations with the launch company that operates from its site, pressed for a contract provision specifying that the launch company would include the spaceport operator as an additional insured in its insurance policy to protect the spaceport infrastructure against any damage resulting from the launch company’s activities under the contract. Representatives from this spaceport also told us that, based on the insurance quotations they received, purchasing their own property insurance would have been significantly more expensive than it was for the launch company that operates from their site to expand its policy to cover the spaceport infrastructure. Other stakeholders we spoke with said insurance for commercial space launches is currently available and affordable.
All five insurance companies and brokers we interviewed said insurance for commercial space launches is currently available, and four also categorized it as affordable. Several insurance companies and brokers said the supply of capital invested in the market for this insurance is currently high, which reduces the cost of insurance. One insurance company we interviewed attributed the high supply of capital to relatively low interest rates across financial markets. Two insurance industry stakeholders also said one reason the commercial space insurance market is an attractive option for investors is that space launch risks are not correlated with other market risks, such as natural disasters or financial market downturns, so investors can diversify their portfolios. Despite the consensus among insurance companies and brokers that insurance for commercial space launches is currently available and affordable, four of the five insurance companies and brokers we interviewed said a catastrophic space launch mishap could reduce its availability. Furthermore, according to two of the insurance companies and brokers we interviewed, the commercial space insurance market is linked to the market for aviation insurance, so a large aviation claim could affect the commercial space insurance market.

FAA has not clearly communicated its interpretation of the financial responsibility regulations to spaceport operators, and spaceport operators may not have adequate protection as a result. As previously discussed, CSLAA and its amendments require launch companies to purchase insurance to cover damage to third parties in case of a launch mishap. FAA officials told us they believe the statute and regulations are clear that during a launch, a spaceport operator that is an active participant in the launch is not considered a third party. Instead, FAA considers spaceport operators that host commercial space launches to be “involved parties” to launches. Because FAA considers spaceport operators hosting commercial space launches involved parties, any damage to spaceport property may not be covered under the liability insurance policy purchased by a launch company. However, spaceport operators may negotiate property insurance coverage with the launch companies that operate from their spaceports and document agreements related to insurance in their contracts, as described in earlier examples. FAA officials said they think the financial responsibility regulations are clear as-is and reported that they have not had any significant internal disputes about how these laws and regulations should be interpreted. However, several spaceport operators we interviewed reported that they find the financial responsibility regulations ambiguous. Specifically, they said they are unsure whether they are considered third parties or involved parties to launches. For example, among the spaceport operators, launch companies, and insurance industry stakeholders we interviewed, six said they believe spaceport operators are involved parties; one said it believes spaceport operators are third parties; and six said they believe spaceport operators may be involved parties, third parties, or both, depending on the circumstances.
Furthermore, one spaceport operator told us that it asked FAA to clarify its status under a variety of hypothetical scenarios, but FAA officials did not address the confusion about whether the spaceport operator would be considered a third party or an involved party under the scenarios presented; rather, FAA officials provided general guidance on the financial responsibility regulations. Similarly, another spaceport operator argued that its property should be covered under the liability insurance policy purchased by the launch company that operates from its site because, in its view, it is a third party. A few stakeholders indicated that guidance or further discussion to clarify the language of the financial responsibility regulations would be useful.

Several factors can generate additional uncertainty for spaceport operators trying to determine whether they are involved parties or third parties to launches. Ownership of the assets involved in commercial space launches may be split among several different parties, including the federal government, a state or municipal government, a launch company, and a launch company’s customer. Figure 4 shows the main assets involved in vertical space launches, as well as the variety of parties that stakeholders said may own each asset. A few spaceport operators told us that mixed ownership of spaceport assets can create confusion when they are trying to draw a line between their property and other parties’ property. Complex launch arrangements or relationships can also make a spaceport operator’s status as an involved party or a third party unclear. For example, FAA officials acknowledged that a spaceport operator could, in theory, be both a third party and an involved party for a given launch but did not provide any real-world examples of this occurring. Moreover, according to FAA officials, a spaceport operator’s status as an involved party or a third party could vary by launch. Spaceport operators may also have a stake in spaceports they themselves do not operate. For example, while Space Florida is licensed to operate Cape Canaveral Spaceport, it also provides financing for space transportation-related infrastructure at other nearby launch facilities. Because relationships like those Space Florida has with operators of nearby launch facilities are not explicitly addressed in FAA’s financial responsibility regulations, Space Florida officials reported being unsure whether Space Florida is considered an involved party or a third party with regard to the financing it provides at spaceports for which it is not the licensed operator.

When Congress passed CSLA in 1984 and significant amendments to CSLA in 1988, U.S. space launches occurred exclusively at federal ranges, which, as previously mentioned, are U.S. government facilities that can host both government and commercial space launches. At the time, Congress may not have anticipated that state and municipal governments would later become involved in the commercial space industry. For example, FAA first licensed California Spaceport for commercial activity in 1996, nearly a decade after Congress passed significant amendments to CSLA. Additionally, the original financial responsibility regulations for commercial space launches were finalized before regulations implementing a spaceport operator licensing regime were finalized.
Congress has stated that state participation in the commercial space industry is in the national interest and of public benefit, and FAA has the dual mission of both regulating and promoting this industry. Furthermore, to carry out its mission, FAA must communicate with spaceport operators. Federal internal control standards state that managers should communicate information externally to achieve the entity’s objectives. However, FAA has not clearly communicated its interpretation of the financial responsibility regulations for commercial space launches to spaceport operators. Specifically, FAA has not issued guidance to spaceport operators to clarify when it considers them third parties and when it considers them involved parties. FAA officials told us that while they consult with prospective spaceport operators during the pre-application phase of licensing, they do not have formal guidance to help spaceport operators understand when they are considered involved parties and when they are considered third parties. According to FAA officials, issuing formal guidance, in general, has not been a high priority; officials believe spaceport operators have several opportunities to ask questions and receive answers about their financial responsibilities, including during pre-application consultation, when renewing a spaceport operator license, during an annual inspection, or informally at any time.

If FAA does not clarify and communicate its interpretation of the financial responsibility regulations, spaceport operators that consider themselves third parties may mistakenly assume that any launch-related damage to their property would be covered under insurance purchased by launch companies operating from their property. Uninsured losses, which could result from such misunderstandings, can be detrimental in several ways. Uninsured losses may require more recovery time than insured losses, delaying federal efforts to encourage state and municipal governments to establish space transportation-related infrastructure. They may also lead to unexpected costs to taxpayers. For example, VCSFA reported that confusion over who should pay for repairs in the aftermath of the mishap at Wallops Island contributed to delays in resuming launches to resupply the International Space Station from this spaceport. Additionally, according to NASA officials, NASA increased the value of an existing contract with VCSFA by $5 million (funds that were intended for other space operations projects) to fund infrastructure repairs at MARS. The complicating factors that generate additional uncertainty for spaceport operators—mixed ownership of spaceport assets and the potential for a spaceport operator to be both an involved party and a third party, or for its status to vary by launch—also underscore the need for FAA to provide clear guidance to spaceport operators.

Stakeholders we interviewed were divided in their opinions on the need to revise the current insurance approach, which constitutes the first tier of the existing indemnification regime. Similarly, they expressed differing views about the options we identified for revising the approach: (1) requiring launch companies to purchase insurance to cover spaceport property, and (2) requiring spaceport operators to purchase insurance for their own property. Stakeholders identified some positive aspects and also raised concerns about each option.
Stakeholders we spoke with offered various views on the current insurance approach and on the need to revise it. Several stakeholders noted that the domestic space industry is still in a nascent stage of development and that damage caused by launch mishaps has been limited and infrequent. As a result, a few stakeholders said, the current insurance approach has not yet been sufficiently tested to suggest a need for change. As previously discussed, spaceport operators currently are not required to hold insurance to cover their own property, and launch companies are not explicitly required to purchase insurance to protect spaceport property. In the absence of such requirements, spaceport operators may purchase their own property insurance to protect against damage resulting from launch activity, negotiate insurance protections through contractual agreements with launch companies, or forgo insurance entirely. Few (3 of 10) FAA-licensed spaceports have had commercial activity in the last 5 years. Moreover, as of October 2016, the only FAA-licensed space launch mishap to occur at a nonfederal spaceport was Orbital Sciences Corporation’s incident in October 2014, which resulted in damage to the spaceport. The mishap was resolved among the spaceport, the launch company, and NASA outside of any requirements of the current insurance approach.

Stakeholders we interviewed—spaceport operators, launch companies, and insurance industry stakeholders—provided mixed views on the current insurance approach. Specifically, stakeholders were almost evenly split among supporting, opposing, and neither supporting nor opposing the current insurance approach (see table 2). Two of the three launch companies supported continuing the current insurance approach, while spaceport operators and insurance industry stakeholders were more divided on the need for change. Furthermore, stakeholders had mixed views on how well the current insurance approach is working. In particular, some felt that the current approach creates a lack of certainty, creates an uneven playing field between federal and nonfederal ranges, and increases inefficiency, although views were not always uniform:

Lack of certainty. Some stakeholders we spoke with said that the current insurance approach—which, for example, relies on contracts between launch companies and spaceport operators to determine insurance coverage—does not promote certainty because contracts can be open to interpretation and unclear. One spaceport operator also said that such confusion is likely to continue as more states build spaceports, because each state and spaceport may have different policies or agreements with each launch company. However, concerns about the current insurance approach lacking certainty were not shared by all stakeholders. We heard from two launch companies and one insurance industry stakeholder that as long as the insurance coverage specified in the contracts is agreed upon in advance of any launches, all involved parties should be certain about the terms and levels of protections.

Uneven playing field. Some spaceport operators we interviewed said that the current insurance approach has not promoted a level playing field between federal ranges and state or municipal spaceports. These operators pointed out that federal ranges enjoy a competitive advantage because launch companies are already required to purchase insurance to cover damage to federal property, while nonfederal spaceports are not similarly protected.
Inefficiency. Some stakeholders said that the current approach is less efficient because insurance coverage currently must be negotiated for every launch or set of launches, whereas if insurance were required by law or regulation, such negotiations would be unnecessary. On the other hand, some stakeholders said that negotiating contracts could be more efficient and that contracts could be adjusted more quickly than new regulations could be created or existing regulations amended.

However, stakeholders also identified reasons why the current insurance approach should be continued, including greater flexibility, enhanced competition, and assured consistency, although here again views were not always uniform:

Greater flexibility. Several stakeholders we interviewed, including at least one from each of our three stakeholder groups, said that using contractual agreements for insurance coverage allows involved parties to individually assess their assets and risks and to make decisions on how best to protect them given the varying characteristics of the launch vehicles and sites. For example, a spaceport may have expensive equipment to protect, or it may be interested in hosting only experimental activity, some of which is designed to fail for testing purposes. In either of these cases, the involved parties can determine the desired level of protection.

Enhanced competition. Some stakeholders we interviewed, including at least one from each of our three stakeholder groups, said that the flexibility to make their own business decisions regarding what type and how much insurance coverage to obtain allows for competitive pricing to attract businesses. Specifically, spaceport operators might keep their launch prices low by purchasing less coverage, which might allow them to attract new launch companies to their spaceports. In contrast, some stakeholders, including at least one from each of our three stakeholder groups, said they would like to remove the temptation for spaceport operators to forgo insurance in order to attract new customers with lower prices, as it puts those operators at risk of not being able to recover from a mishap.

Assured consistency. An insurance industry stakeholder said that because commercial space launch activities require significant advance planning, changing regulatory conditions after such activities have begun can create an additional expense not considered in initial plans. One launch company we spoke with added that continuing the current insurance approach is important, as changes to the insurance rules may complicate the business environment for launch companies in the early stages of operations.

Based on interviews with FAA, spaceport operators, launch companies, and insurance industry stakeholders, we identified two primary options for implementing a revised insurance approach as it relates to state and municipal spaceports: Require launch companies to purchase insurance to cover spaceport property against damage resulting from launch accidents. This option would likely be implemented through FAA’s launch licensing process by including an insurance requirement for potential damage to spaceport property. Require spaceport operators to purchase insurance to cover their own property against damage resulting from launch accidents. This option would likely be implemented through the spaceport operator licensing process.
While stakeholder groups we interviewed expressed differing views about the options, views within each stakeholder group were fairly consistent. Most spaceport operators we interviewed favored the option to require launch companies to purchase insurance, while most launch companies favored continuing the current approach. Others, such as the insurance industry stakeholders we interviewed, tended to favor the option to require launch companies to purchase insurance. In general, stakeholders tended to oppose options where the burden of purchasing the insurance was on them. Stakeholder groups we interviewed expressed differing views about the option of requiring launch companies to purchase insurance to cover spaceport property against damage resulting from a mishap. Nearly all spaceport operators and insurance industry stakeholders we interviewed supported this option, while launch companies either opposed it or were neutral (see table 3). Similarly, stakeholder groups we interviewed expressed differing views about the option of requiring spaceport operators to purchase insurance to cover their own property against damage resulting from mishaps. Most spaceport operators opposed this option, while launch companies either opposed it or were neutral, and insurance industry stakeholders were divided (see table 4). In addition, stakeholders we interviewed said that one or both options would benefit participants by leveling the playing field, increasing certainty, and increasing efficiency:

Leveled playing field. Many stakeholders we interviewed, including at least one from each of the three stakeholder groups, said that the option of requiring launch companies to purchase insurance to cover spaceport property would help promote a level playing field with federal ranges. This is because commercial spaceports would then receive the same level of insurance protection as federal ranges, whose property launch companies are already required to cover in their insurance policies. According to a number of spaceport operators and insurance industry stakeholders we spoke with, this option would be more equitable, as state and municipal spaceports are no different from federal ranges in terms of function or capabilities.

Increased certainty. Several stakeholders from all three stakeholder groups said that each of the potential options would provide certainty to all parties on what would be covered, and by whom, should a mishap occur. Moreover, they said that one or both options would provide certainty to all involved parties that spaceport operators would have the financial means to repair damage quickly after a mishap and resume launch activities without keeping launch customers waiting. In addition, several stakeholders, including at least one from each stakeholder group, said that the option of requiring launch companies to purchase insurance would promote investment in and development of spaceports. Specifically, some stakeholders said that investors and owners would have greater assurance that the assets they have invested in would have adequate protections in the event of a launch mishap.

Increased efficiency. Several stakeholders, including at least one from each stakeholder group, said that either option would make contract negotiations more efficient, as the insurance protections would be clearly stipulated in law.

However, stakeholders raised several concerns about one or both options to revise the current insurance approach.
Specifically, stakeholders said one or both options could provide less flexibility, increase costs for some participants, and limit participants' ability to do business in some ways:

Less flexibility. Some stakeholders we spoke with told us that one or both options could reduce flexibility in various ways. For example, one spaceport operator said that either option would require launch companies or spaceport operators to purchase insurance to cover spaceport property rather than allowing them to make decisions on how best to protect and manage their risk and property assets. This representative said that spaceport operators may not want to insure some of their own property because of its low value, and according to two spaceport operators, requiring full protection of such property could force a spaceport operator to take on a cost it otherwise would not. Requiring spaceport operators to purchase insurance may also be overly burdensome for spaceports that host experimental activity or for those that have more facilities and expensive assets to insure.

Higher costs. Some stakeholders also noted that one or both options could increase costs or shift them to certain participants, depending on which option was implemented. A few spaceport operators expressed concerns that the option of requiring spaceport operators to purchase insurance would be onerous because of the potentially high cost of securing such coverage. According to some insurance industry stakeholders, the cost for a launch company to add the spaceport as an insured party under the launch company's policy would be less than the cost for the spaceport to have its own policy covering the same property. On the other hand, some stakeholders, including at least one from each of our three stakeholder groups, raised the concern that the option of requiring launch companies to purchase insurance would increase their cost of conducting business. According to one insurance industry stakeholder, it could disproportionately affect smaller launch companies that may have fewer resources. Additionally, according to another insurance industry stakeholder and one spaceport operator, the increased cost of conducting business for launch companies could reduce the amount of launch activity that would otherwise take place. However, a few stakeholders, namely spaceport operators and insurance industry stakeholders, said that the government maximum probable loss calculation would not be significantly different if state and municipal spaceport property were included, as the addition of such property would present little increased cost in the calculation.

Limited ability to do business. For one or both options, several stakeholders, including at least one from each of our three stakeholder groups, expressed concern that requiring the purchase of insurance would negatively affect participants' ability to do business. For example, one spaceport operator raised the concern that if launch companies were required to purchase insurance, the amount of liability insurance required to protect each spaceport would become part of the launch company's business decision regarding which spaceport to partner with. As a result, according to a launch company and a spaceport operator, such a requirement could affect competition between spaceports because some spaceports, having less property that needs protection, would require less insurance.
In another example, some stakeholders said that requiring spaceport operators to purchase insurance may be more burdensome for newer spaceports due to their limited track records or for spaceports with lower risks. Stakeholder views were mixed on which party is in the best position to determine risk. Both spaceport operators and insurance industry stakeholders said that the responsibility should be on the entity that has the most control over launch activities and is in the best position to avoid causing damage. Specifically, they said that launch companies perform the risky activities and exercise the most control over those activities, and it would therefore be most fair for the launch companies to be responsible for insuring against damages caused by those activities. One spaceport operator also said that because the launch companies, which conduct the launches, have a clearer idea of the risk of each launch (e.g., the vehicle's track record), they are better positioned to make an informed decision on insurance coverage for those risks. However, two stakeholders said that spaceport operators are aware of the risks of their involvement and are well positioned to make informed business decisions about whether or not to purchase insurance and to what extent. In addition to the issues stakeholders raised, limiting costs to the federal government before and after a disaster is another relevant consideration for revising the current insurance approach, as our prior work suggests. The potential cost to the federal government of revising the current insurance approach related to FAA-licensed spaceports depends on the accuracy of the related maximum probable loss calculation. As discussed previously, this calculation evaluates and estimates the risk and potential losses associated with launch activity and determines the corresponding insurance coverage a launch company must purchase. An inaccurate calculation that understates the amount of insurance a launch company must obtain would increase the exposure of the federal government, as the insurance amount would be less than the potential losses associated with the launch activity and the property would be inadequately protected; the simplified sketch below illustrates this relationship. In a July 2012 report, we found that the potential cost to the federal government of indemnifying third-party losses is currently unclear because it depends in part on a calculation that may not be sound. We recommended that FAA review and periodically reassess its maximum probable loss methodology, including assessing the reasonableness of the assumptions used. FAA is currently evaluating its maximum probable loss methodology, and we have an ongoing review to independently assess the methodology FAA uses. Congress has clearly expressed an interest in the development of the commercial space industry, which has begun to move beyond launching exclusively from federal ranges to launching from state, municipal, and private spaceports. Expansion in the number of spaceport operator licenses, and the potentially complex ownership and contractual arrangements at the spaceports FAA has licensed, developed largely after the legislation authorizing the current indemnification approach was enacted. The spaceport operators we spoke with expressed confusion about the financial responsibility regulations for commercial space launches, which could potentially result in gaps in insurance protection.
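The relationship between the maximum probable loss calculation and federal exposure can be expressed as simple arithmetic. The sketch below is illustrative only: the dollar figures are hypothetical, the $500 million cap reflects our general understanding of the statutory limit on required third-party liability coverage rather than a figure established in this review, and the function simplifies the layered coverage regime (for example, it ignores the ceiling on government indemnification).

```python
# Illustrative sketch only; not FAA's actual maximum probable loss methodology.
# The statutory cap and all dollar amounts below are hypothetical or simplified.

def loss_allocation(actual_loss, mpl_estimate, statutory_cap=500_000_000):
    """Split a loss between private insurance and potential federal exposure.

    A launch company must insure up to the lesser of the maximum probable
    loss (MPL) estimate and a statutory cap; losses above the insured amount
    may fall to the federal government under indemnification.
    """
    required_insurance = min(mpl_estimate, statutory_cap)
    insured = min(actual_loss, required_insurance)
    federal_exposure = actual_loss - insured  # simplified: ignores the indemnification ceiling
    return insured, federal_exposure

# A sound MPL estimate covers an $80 million loss entirely through insurance.
print(loss_allocation(actual_loss=80_000_000, mpl_estimate=100_000_000))  # (80000000, 0)

# An understated MPL estimate shifts part of the same loss to the government.
print(loss_allocation(actual_loss=80_000_000, mpl_estimate=50_000_000))   # (50000000, 30000000)
```

As the second call shows, the same mishap produces federal exposure only when the MPL estimate, and therefore the required insurance, falls short of the actual loss.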
Among other things, FAA is tasked with regulating and promoting commercial space launches by the private sector, as well as facilitating the expansion of U.S. commercial space transportation. Gaps in insurance protection can result in uninsured losses, which can, in turn, hinder the development of space-transportation-related infrastructure that supports the commercial space launch industry. Given the growth in nonfederal spaceports, ensuring that spaceport operators have an accurate understanding of the financial responsibility regulations will only become more important. To better ensure spaceport operators' understanding of FAA's financial responsibility regulations for commercial space launches, we recommend that the Secretary of Transportation ensure that the FAA Administrator provides additional communication to clarify FAA's interpretation of the financial responsibility regulations for commercial space launches. The forms of communication could include, among other things, issuing additional guidance or using other forums to clarify when a spaceport operator is a third party to a launch and when it is not. We provided a draft of this report to the Department of Transportation for its review and comment. The Department of Transportation provided us with technical comments, which we incorporated as appropriate, but did not comment on the recommendation. We will send copies of this report to the appropriate congressional committees, the Secretary of Transportation, and the Administrator of the National Aeronautics and Space Administration. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Alicia Puente Cackley at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report examines (1) the insurance coverage spaceport operators have in place to protect themselves from losses resulting from space launch mishaps and (2) stakeholder views on the need to change the current insurance approach and options for revising it. To address these objectives, we reviewed our prior related reports and other studies and analyzed relevant laws and regulations. We reached out to all 10 spaceport operators licensed by the Federal Aviation Administration (FAA) as of July 2016 and conducted semistructured interviews with 8 of them. Two declined to be interviewed because of transitions they were making in their operations, but one of the two provided written responses to our semistructured interview questions, which we incorporated as appropriate. We therefore analyzed information from 9 of the 10 FAA-licensed spaceports. We also conducted semistructured interviews with launch companies. For interviews, we selected all launch companies that had conducted more than one commercial space launch from a spaceport in the last 5 years. Of the seven launch companies we identified that had conducted a launch in the last 5 years, we interviewed four; of the remaining three, two declined our request and one was no longer in operation. We also included one launch company that had not launched in the last 5 years but has been active in advancing commercial space launch activities. Therefore, we interviewed a total of five launch companies.
Furthermore, we conducted semistructured interviews with all five key insurance industry stakeholders (three insurance brokers and two insurance companies) that had provided coverage to the commercial space industry, as well as with two industry associations. Additionally, we interviewed officials from FAA and the National Aeronautics and Space Administration (NASA). We also visited two spaceports, selected based on factors including number of years in operation, colocation with federal ranges, commercial activity within the last 5 years, and occurrence of a commercial space launch mishap. We visited the Mid-Atlantic Regional Spaceport at NASA's Wallops Flight Facility in Wallops Island, Virginia, because it is the site of the most recent mishap at a nonfederal spaceport, among other factors. In addition, we visited Cape Canaveral Spaceport because it has hosted many commercial space activities within the last 5 years, among other factors. In addition to analyzing information from our semistructured interviews, to examine the insurance coverage spaceport operators have put in place to protect themselves from losses resulting from space launch mishaps, we requested documentation, such as agreements with language related to insurance coverage, from a nonprobability sample of spaceport operators. Specifically, we requested documentation from one spaceport that conducts vertical launches and one that conducts horizontal launches to understand how spaceports are protecting themselves against losses from space launch mishaps. We reviewed agreements from one spaceport; the other spaceport provided standard insurance language in an email. We selected these two spaceports based on factors such as number of years in operation and commercial space launch activities within the last 5 years. In some cases, the information we requested was proprietary, and spaceport operators said that they could not provide it. To examine stakeholder views on the need to change the current insurance approach and options for revising it, we first conducted semistructured interviews with all stakeholders as described earlier and, based on their input, sent a questionnaire to the stakeholders for their opinions on the options identified. We excluded industry associations from our follow-up questionnaire because many of their member organizations received our questionnaire individually, and members' views on the options are reflected in our analysis. We sent questionnaires to nine spaceport operators (those that we interviewed or received written responses from) and received responses from all nine. Of the five launch companies, we sent questionnaires to four; we excluded one launch company because, during our first round of data collection, its representatives said that they did not feel comfortable providing their opinion on the options. We received responses from three of the four launch companies. Lastly, we sent questionnaires to five insurance industry stakeholders and received responses from all five. A copy of our questionnaire is included as appendix II. We conducted this performance audit from January 2016 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the contact named above, Patrick A. Ward (Assistant Director), Chir-Jen Huang (Analyst-in-Charge), Caitlin Cusati, Shilpa Grover, Anne Kruse, Maureen Luna-Long, Jessica Sandler, Jennifer Schwartz, Joseph Silvestri, Jena Sinkfield, Molly Traci, and Shana Wallace made key contributions to this report.
The U.S. commercial space industry has expanded, conducting eight launches in 2015 compared with none in 2011. These launches have traditionally been from federal facilities, but as of July 2016, there were 10 nonfederal FAA-licensed spaceport operators supporting both private and federal space activity. Almost all of these spaceport operators are local government entities. The complexity of the arrangements at these spaceports, and a mishap in October 2014 in which the spaceport was not adequately insured, have raised questions about insurance coverage for spaceport assets, including potential federal involvement. Congress included a provision in statute for GAO to report on the potential inclusion of local government property in the existing indemnification regime for commercial space launches. This report examines (1) the insurance coverage spaceport operators have in place and (2) stakeholder views on the need to change the current insurance approach and on options for revising it. GAO reviewed key documents; interviewed FAA and NASA officials and representatives of FAA-licensed spaceports, launch companies, insurance brokers, and insurance companies; and selected two spaceports to visit based on launch activity. Of the 10 spaceport operators (entities that host launches from their property) that are currently licensed by the Federal Aviation Administration (FAA), 3 have had commercial activity in the last 5 years, and all 3 told GAO that they have both property and liability coverage to protect themselves from losses resulting from space launch mishaps. Federal laws and regulations do not require spaceport operators to have insurance, but operators of nonfederal spaceports that are located on federal property could have federal contracts that require them to have insurance to protect their own property from damage resulting from space launch mishaps. Moreover, for launches licensed by FAA, since the Commercial Space Launch Act Amendments of 1988, FAA has required launch companies (firms that conduct or will conduct the launch of vehicles and payloads) to purchase insurance to cover damage to the uninvolved public, as well as damage to federal government property, in case of a launch mishap. Launch participants may also choose to negotiate additional insurance coverage through launch-specific contracts. However, spaceport operators said that they find the regulations that determine financial responsibility for commercial space launches to be confusing. Specifically, several spaceport operators GAO interviewed said that, based on their interpretation of the financial responsibility regulations, they were unsure whether their property would be covered under a launch company's insurance policy or whether they would need to purchase their own insurance for their property to be covered. FAA's mission includes encouraging, facilitating, and promoting commercial space launches by the private sector, among other things. Furthermore, federal internal control standards state that management should externally communicate the necessary quality information to achieve the entity's objectives. Unless spaceport operators have a clear understanding of FAA's financial responsibility regulations, a risk exists that they may not obtain adequate insurance against losses in the event of a mishap.
Uninsured losses, in turn, could potentially cause delays in resuming commercial launches following a mishap and unnecessary costs to the federal government, both of which could hinder the development of the domestic commercial launch industry. Stakeholders in the space launch industry are divided on the need to change the current insurance approach, in which insurance for spaceports is not required but can be negotiated through contracts between launch companies, which operate launch vehicles, and spaceport operators, which run spaceports. Stakeholders identified some positive aspects of the current insurance approach—for example, some said that negotiating contracts specific to each launch allows for greater flexibility. However, they also raised concerns, including a lack of certainty about coverage for potential damage. GAO identified two potential options for requiring protection for spaceports: (1) requiring launch companies to purchase insurance to cover spaceport property and (2) requiring spaceport operators to purchase insurance to cover their own property. In general, stakeholders tended to oppose the option in which the burden of purchasing the insurance was on them. Specifically, most spaceport operators GAO interviewed favored the first option, while most launch companies favored continuing the current approach. Stakeholders discussed benefits associated with both options—for example, they said that both options could increase certainty by specifying which party was required to insure spaceport property. However, they also noted challenges, such as higher costs for the party required to purchase the insurance and decreased flexibility to customize their use of insurance depending on the details of a particular launch. GAO recommends that FAA provide additional communication to clarify its interpretation of the financial responsibility regulations for commercial space launches. The Department of Transportation provided technical comments.
Genetic engineering refers to a modern set of tools that can be used for precisely modifying the genetic makeup of crops, animals, or microorganisms in order to introduce, remove, or rearrange specific genetic material conferring desired traits. Genetic engineering techniques allow for faster development of new crop varieties, since the gene or genes for a given trait of interest can be readily incorporated into a plant or animal species to produce a new variety. GE varieties have been developed for many crops, plants, trees, and flowers. As of October 2015, USDA had deregulated 118 GE plants, with corn, soybeans, and cotton being the most prevalent. Common classes of traits engineered into crops include insect resistance, herbicide tolerance, resistance to viruses, and other changes to enhance product quality. A number of different techniques can be used to modify organisms. To date, genetic engineering has relied extensively on the use of a particular bacterium to introduce traits into plants. To do this, developers remove the elements of the bacterium that are harmful to the plant and use the disarmed bacterium to insert new genetic material to facilitate the desired genetic change. The bacterium used to introduce genes is not the only plant pest involved in genetic engineering. Small segments of DNA from plant viruses are sometimes inserted into GE crops to control the expression of genes of interest. Some of the bacteria and viruses used in genetic engineering to transfer genetic material into crops are defined as plant pests under USDA's regulations, meaning that they can directly or indirectly injure or cause disease or damage to plants. In addition to bacterial transformation, it is possible to introduce genes with physical technologies. These technologies include particle bombardment (e.g., gene gun, or biolistics, in which particles are coated with DNA containing the desired traits and shot into the target cells) and electroporation (the application of an electric current to a cell membrane in order to open a channel through which DNA may pass). Developers have also found many genetic sequences from plants that perform the same function as the aforementioned plant virus genes, that is, they control the expression of the introduced genes of interest. Thus, it is possible to produce genetically engineered plants that do not contain plant pest genetic sequences, according to USDA officials. Alternative technologies, in particular genome editing technologies, have come into more widespread use. In many cases, crops produced using some of these alternative technologies cannot be distinguished from their non-GE counterparts. These alternative technologies tend to be more precise and efficient and are distinguished by the use of artificial versions of nucleases, or "molecular scissors," that cut DNA at specific locations. Genome editing can be used to create deletions, substitutions, and gene insertions. These technologies also do not necessarily require use of a plant pest to introduce genetic changes. GE crops may become unintentionally mixed with non-GE crops at various points in the supply chain, from production to market. Cross-pollination is a natural process, one that some crops depend on for reproduction, that can result in unintended mixing at the farm level when pollen from one crop fertilizes plants in a nearby field.
For example, GE pollen may drift to a nearby non-GE field and fertilize those crops, and the resulting seeds and associated crops may have unintended GE traits when planted. This is especially true for cross-pollinated crops, such as corn, but much less true for a crop like soybeans that is primarily self-pollinated. Because corn pollen can move relatively long distances and corn plants naturally cross-pollinate, non-GE corn may be pollinated by GE corn if these crops are planted close enough to each other. Commingling is unintended mixing that occurs after crops are harvested, when GE crops or their residue accidentally come into contact with non-GE crops during transport, storage, handling, or processing. For example, if a railcar transports GE grains one day and then non-GE grains the next day, there is a chance that residual traces of the GE crop shipment could end up in the non-GE shipment. Since GE and non-GE crops are generally indistinguishable in appearance, it is difficult to prevent commingling without segregation methods. For purposes of this report, we refer to both cross-pollination and commingling as unintended mixing. The Advisory Committee on Biotechnology and 21st Century Agriculture (AC21) was originally established in 2003 and was charged with providing guidance to USDA on issues identified by the Office of the Secretary, including examining the long-term impacts of biotechnology on the U.S. food and agriculture system and recommending how USDA might address those impacts. In 2011, the Secretary of Agriculture revived AC21 to address, among other things, what types of compensation mechanisms, if any, would be appropriate to address economic losses experienced by farmers when the value of their crops is reduced by unintended GE presence (unintended mixing of GE and non-GE materials), and what would be necessary to implement such mechanisms. Unintended mixing may result in economic losses for farmers, for instance, if pollen from a field of GE corn drifts and pollinates non-GE corn in a neighboring field and the resulting grain is harvested. In this case the non-GE farmer may receive a lower price for the crop, or the shipment may be rejected by a buyer if it exceeds a predetermined level of GE content. AC21 comprises representatives from a cross-section of the agricultural community, including farmers, seed companies, food manufacturers, organic farming organizations, state government, biotechnology companies, and medical professionals. AC21's recent focus has been to strengthen coexistence, meaning the ability of the agriculture sector to maintain different production systems. Coexistence specifically involves the concurrent cultivation of non-GE crops (e.g., conventional, organic, and identity-preserved) and GE crops. AC21 defined identity-preserved crops as those of an assured quality in which the identity of the material is maintained from the germplasm or breeding stock to the processed food product on a retail shelf. Coexistence issues arise when the production-related activities of one farmer affect another farmer, potentially resulting in costs for the other farmer. Farmers could adopt measures to prevent mixing of GE and non-GE crops, such as using buffer zones between different crop types, which may result in smaller yields and additional costs because of the acreage taken out of production to create the buffer zone. Three federal agencies share responsibility for overseeing GE crops: USDA, EPA, and FDA.
Each agency has specific responsibilities for certain activities with GE crops, but not all of the agencies are necessarily involved in overseeing each activity or use of a GE crop. The agencies apply their general authorities under statutes that are relevant to each agency's responsibilities for overseeing GE crops specifically, as shown in table 1. Under the Plant Protection Act (PPA), USDA is responsible for preventing the importation or dissemination of plant pests and noxious weeds into or within the United States. A noxious weed is any plant or plant product that can injure or cause damage to crops, livestock, interests of agriculture, public health, or the environment, among other things. USDA may prohibit or restrict the importation, entry, export, or movement in interstate commerce of, among other things, GE crops that might introduce or disseminate a plant pest or noxious weed. Under its regulations, USDA allows individuals, including GE crop developers, to petition the agency to determine deregulated status for a GE crop if enough evidence has been collected showing that it poses no more of a plant pest risk than the equivalent non-GE crop and is not designated as a noxious weed. If USDA deregulates a GE crop, it is no longer subject to the restrictions of the plant pest provisions of the regulations relating to GE crops. However, USDA could later find the GE crop to be a plant pest or noxious weed on the basis of new data or analysis and place restrictions on the importation, entry, export, or movement of the GE plant. Under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), EPA is responsible for regulating the genetic materials engineered into a crop to produce pesticides that ward off insects, bacteria, and viruses, as well as the pesticidal substance that the crop ultimately produces. These are known as plant-incorporated protectants. As with conventional chemical or biological pesticides, EPA regulates the sale, distribution, and use of GE pesticides, which must be registered before they are distributed or sold. In addition, EPA regulates the sale, distribution, and use of pesticides used in conjunction with GE crops engineered to be tolerant to those pesticides. Under the Federal Food, Drug, and Cosmetic Act (FFDCA), FDA regulates to ensure the safety of most of the food supply, while USDA, under its authority, is responsible for the safety of meat, poultry, processed egg products, and catfish. FDA regulates to ensure the safety of foods and food products from plant sources, including food from GE crops, which must meet the same requirements as foods from non-GE crops. FDA also has in place a voluntary premarket consultation program and encourages developers of GE crops to consult with the agency before marketing their products. EPA, FDA, and USDA generally have taken steps to regulate GE crops, including those derived from alternative technologies, but USDA has not updated its regulations to oversee all GE crops. EPA and FDA officials said that they apply the same legal authorities and oversight processes to regulate GE crops from alternative technologies that they do for other GE and non-GE crops, regardless of how they are derived. Conversely, USDA's regulations pertaining to GE crops address only GE crops for which the donor, vector, or recipient of genetic material is a plant pest.
Although USDA proposed revising its regulations pertaining to the importation, interstate movement, and environmental release of certain genetically engineered organisms in 2008, both to bring the regulations into alignment with the PPA and to update them in response to advances in genetic science and technology, it later withdrew its proposed rule. According to USDA officials, however, the agency needs to update its regulations so that GE crops that either do not use plant pests, or use plant pests but do not result in plant pest DNA in the developed crop, are subject to the same restrictions and requirements as GE crops for which the donor, vector, or recipient of genetic material is a plant pest. EPA officials said that they regulate all pesticides, including those engineered into crops (GE pesticides), using the same legal authorities regardless of how they are derived, and FDA officials said that they apply the same legal authority to regulate GE crops from alternative technologies that they do for non-GE crops. Accordingly, EPA uses the same oversight process to regulate GE pesticides engineered into crops using alternative technologies that it does for GE pesticides engineered into crops using other technologies. FDA's program to voluntarily work with companies to consider food safety issues is followed for any type of GE crop brought to FDA for consideration, regardless of the technology used to develop it, according to FDA officials. EPA regulates a GE pesticide in a crop when it meets the definition of a pesticide under FIFRA and is intended for such use, regardless of how the pesticide was created or the technology used to develop it. EPA officials stated that as new GE technologies are developed, many will eventually make their way to EPA for analysis and consideration as part of the pesticide registration process. For example, in May 2010 EPA registered a GE pesticide based on the plum pox virus. This pesticide was engineered into varieties of the European plum tree to give the tree the ability to resist the virus, which affects the quality of fruits and can leave infected trees unable to produce fruit. This GE pesticide was developed based on ribonucleic acid interference. According to EPA officials, EPA considers this a pesticide because it was designed to defeat a virus and therefore mitigates a pest. EPA officials said they will evaluate GE pesticides using alternative GE technologies on a case-by-case basis as they are brought to EPA for pesticide registration. Before EPA can register a pesticide, a company must provide data demonstrating that the pesticide will not pose unreasonable risks to human health or the environment when used in accordance with widespread and commonly recognized practice. According to EPA documents and officials, when assessing the potential risks of pesticides, including GE pesticides, EPA requires studies from applicants examining factors such as potential risks to human health, environmental fate and effects (e.g., potential for gene flow to non-GE crops), and the need for management plans to mitigate the potential development of pest (e.g., insect or weed) resistance in the field. EPA officials stated that they follow the process outlined in an internationally accepted guideline issued by the Codex Alimentarius Commission for risk assessment when examining new genetic material that has been introduced into a plant.
FDA regulates to ensure the safety of foods, including foods derived from GE crops, under the FFDCA and its implementing regulations. Foods derived from plant varieties developed through genetic engineering are subject to the same safety requirements as foods derived from non-GE crops. In May 1992, FDA established its policy regarding the review of GE foods in its Statement of Policy: Foods Derived from New Plant Varieties. This policy describes the kinds of assessments FDA recommends that companies perform to ensure that foods and feeds from new plant varieties are as safe as comparable foods and feeds already on the market and otherwise do not raise regulatory concerns. In its 1992 policy, FDA stated that it was not aware of any information showing that foods from GE crops, as a class, were different from comparable foods in a meaningful or uniform way or that they presented a different or greater safety concern than foods developed by non-GE plant breeding. FDA officials said that the basic principle expressed in the agency's 1992 policy is that the traits and characteristics of foods should be the focus of safety assessments for new varieties of food crops, not the technologies used to develop them. In addition, FDA has the authority under the FFDCA to seek an order to remove any food, including any food derived from GE crops, from the market if the food is unsafe, or adulterated, under the law. FDA can also seek sanctions against those marketing such a food. According to documentation available on FDA's website, FDA's priority is to ensure that all foods, including those derived from GE crops, are safe and otherwise in compliance with the FFDCA and applicable regulations. In 1995, FDA established a voluntary premarket consultation process, through which companies are encouraged to notify the agency before marketing a food produced from a GE crop and to voluntarily submit a summary of the developer-performed safety assessment that, among other things, (1) identifies distinguishing attributes of new genetic traits, such as the source and function of the genetic material, the purpose of the modification, and the estimated concentration of the new material in food derived from the GE crop; (2) provides information regarding whether any new material in food made from the GE crop is known or suspected to be a toxin or allergen, and the basis for concluding that the GE-derived food can be safely consumed; and (3) compares the composition or characteristics of the GE-derived food with those of its non-GE counterpart, with special emphasis on important nutrients and toxins that occur naturally in the food. FDA scientists then evaluate this safety assessment, which includes tests done by the developer, to determine whether it contains sufficient information to conclude that the developer has addressed all matters relevant to the safety and regulatory status of the GE food. FDA officials said that such testing provides a way to detect undesirable traits at the developmental stage and defer marketing until any concerns are resolved. When FDA's team of scientists is satisfied with the developer's submission and has no further questions regarding safety or other regulatory issues based on the developer's information, the consultation is considered complete, and FDA provides a letter to the developer stating that it has no further questions.
Although the consultation process is voluntary, according to FDA documentation and agency officials, it is the agency's experience that companies developing foods and feeds do not commercially market food or feed from their GE crops until they have received this letter or have satisfied any other agency requirement, if applicable. As of November 2015, 108 voluntary premarket consultations had been completed, representing more than 150 different crop varieties, according to FDA's website and FDA officials, and FDA officials said they are not aware of any GE product intended for marketing that has not first gone through FDA's voluntary consultation process; that is, developers expected to consult with FDA prior to marketing have been doing so. According to USDA officials, USDA needs to update its regulations to assess certain potential plant and environmental health risks associated with GE crops derived from alternative technologies. According to USDA officials, USDA's regulations for GE crops do not capture the full authority to protect plant and environmental health provided by the PPA and are not broad enough to allow USDA to restrict all GE crops that may pose a risk to plant health. Under current regulations, USDA, through its Animal and Plant Health Inspection Service (APHIS), restricts the introduction and dissemination of GE crops for which the donor, vector, or recipient of genetic material is a plant pest, such as a bacterium or virus, until the agency assesses certain potential plant and environmental health risks and determines that the regulated article does not pose a potential plant pest risk. For example, a gene that confers resistance to the herbicide glyphosate, sequenced from a bacterium, has been used extensively to transform varieties of corn, soybean, and alfalfa crops, according to USDA officials. USDA also regulates any plant pests that have been genetically engineered. For example, USDA regulates the diamondback moth that has been engineered with genes that disrupt reproduction in this pest. This moth and its larvae are known pests for crops such as cauliflower, cabbage, and broccoli. USDA also regulates new GE crop varieties in field trials. Developers that are issued authorizations to release GE crop varieties through field trials must follow specific controls outlined in those authorizations to avoid unauthorized release or unintended mixing of GE and non-GE crops, among other things. For example, if a developer inserted a trait that confers glyphosate tolerance into a type of grass using a plant pest, the developer would be required to submit a request for authorization to move or conduct outdoor plantings of this GE plant. Developers may petition USDA to deregulate their GE crops if they can demonstrate that these crops do not represent a plant pest risk. Commercialization of GE crops may follow deregulation. In contrast, if the developer engineered the grass to confer the same glyphosate tolerance but did so using a GE technology that did not involve a plant pest or did not result in plant pest DNA in the grass developed, the developer would not require an authorization, unless USDA later found the GE crop to be a plant pest on the basis of new data or analysis. The reason, USDA officials said, is that the GE technology used to insert the desired trait did not involve use of a plant pest and no plant pests were otherwise used or inserted.
Although developers sometimes request authorization to conduct field trials of GE crops that do not meet the definition of a regulated article (e.g., because a plant pest was not used, or a plant pest was used but no plant pest DNA was in the GE crop developed), USDA's regulations requiring an authorization are limited to situations where a plant pest was involved in the genetic engineering as a donor, vector, or recipient of genetic material, rather than being based on the potential risk to plant and environmental health associated with the plant and the introduced trait, as shown in figure 1. Moreover, some GE crops developed using genetic engineering technologies that do not involve the use of a plant pest, or that use a plant pest but do not result in plant pest DNA in the crop developed, could pose weediness risks, according to USDA officials. Specifically, there could be unintended cross breeding with a related wild plant species that could make the new plant a noxious weed. For example, if a drought tolerance gene unintentionally moved from GE sorghum to Johnsongrass, a wild relative of sorghum, the resultant Johnsongrass could become a more aggressive or noxious weed in dry environments. According to USDA officials, a GE crop could be regulated using USDA's noxious weed authority under the PPA, but to date USDA has not done so, because its existing noxious weed regulations were not designed for crops. As of November 2015, USDA had received 44 letters of inquiry from GE crop developers asking whether their GE crops are subject to USDA regulations. As of that date, according to agency officials and USDA's website, USDA had determined that 30 of these GE crops are not subject to USDA regulations and 1 GE crop is subject to USDA regulations; the agency's responses to the remaining 13 letters were pending. Most of these inquiries were for GE crops developed using technologies that did not involve a plant pest, or did involve the use of a plant pest but did not result in plant pest DNA in the crop developed, putting them beyond the scope of USDA's regulations. USDA officials said they expect the number of GE crops developed with alternative technologies that do not use a plant pest, or that use a plant pest but do not result in plant pest DNA in the crop developed, to increase in the future because these technologies are generally more efficient and precise than technologies using plant pests. For example, USDA officials observed that the plant science community is excited about what can be accomplished with the newest gene editing technologies, noting that such technologies can provide for GE crop development at greater speeds and lower costs. In responding to the letters of inquiry from GE crop developers, USDA officials said that they consider information provided by the developer on the GE technology used, the recipient crop, and the introduced trait. If the inquiry is the first of its kind, USDA will work with various APHIS programs to determine whether there are plant health concerns for which other authorities could be used to protect plant health. While USDA may consider potential risks associated with the GE crop variety, its final response to the developer is solely focused on whether the GE crop is regulated and generally does not include information on the potential risks.
In 2008, in part to respond to advances in genetic science and technology and to address potential risks, if any, posed by GE crops developed through alternative technologies, USDA proposed a rule that included the possibility of using the noxious weed provisions of the PPA. These provisions would have expanded USDA's review to apply to new GE crop varieties that represent a potential noxious weed risk. According to USDA officials, the proposed rule was somewhat ambiguous with regard to what would be regulated, which created confusion for stakeholders. According to the proposed rule and USDA officials, a developer would consult USDA if there was any doubt about whether a variety needed to be regulated. Although USDA took steps to update its regulations to capture any GE crop that may pose a risk to plant health, USDA ultimately withdrew the proposed rule in February 2015 because of issues raised by the public and industry, including a lack of clarity in several key aspects of the rule, according to USDA officials. For example, many of the public comments said that the proposed rule was not clear about what was to be included or excluded in USDA's regulatory scope and that USDA had not been sufficiently clear about how it would implement the proposed changes. In addition, according to USDA officials, commenters said they were unsure whether this was a voluntary process and did not know under what circumstances USDA would require regulation. In withdrawing the proposed rule, USDA decided that an updated proposed rule was needed, noting that it wanted to engage stakeholders anew. In February 2015, USDA officials said they were considering updating the regulations to address shortcomings in USDA's existing regulations and to take advantage of 28 years of experience regulating products of biotechnology, in order to focus the program on those products that present a plant health risk, regardless of which technologies were used in their development. Executive Order 13563 states, among other things, that to facilitate the periodic review of existing regulations, agencies shall consider how best to promote retrospective analysis of rules that may be outmoded, ineffective, or insufficient, and to modify, streamline, expand, or repeal them in accordance with what has been learned. In addition, an Office of Management and Budget (OMB) memorandum on this executive order states that agencies should explore how best to evaluate regulations in order to expand on those that work and to modify, improve, or repeal those that do not. Candidates for reconsideration include rules that new technologies or unanticipated circumstances have overtaken, according to this memorandum. Furthermore, a July 2015 memorandum from the Executive Office of the President stated that advances in science and technology have dramatically altered the biotechnology landscape, referenced new technologies, and called on USDA, EPA, and FDA, in part, to formulate a long-term strategy to ensure that the federal regulatory system is equipped to efficiently assess the risks, if any, associated with future products of biotechnology. USDA is in the early stages of considering updates to its regulations. In May 2015, USDA hosted a series of webinars and began providing opportunities for the public to provide initial feedback on how the regulations might be improved. USDA also created a website devoted to stakeholder engagement regarding USDA's regulation of GE crops.
According to this website, the agency's intention is to use an open and robust policy dialogue to drive the development of a forward-looking rule that will provide a foundation for its future regulatory activities. As of June 2015, USDA had received comments from over 221,000 individuals through its stakeholder engagement efforts, according to USDA officials. Withdrawing the 2008 proposed rule allows USDA to discuss regulatory issues in ways that were not possible previously. USDA officials said that they expect to publish a notice of intent and prepare a programmatic environmental impact statement in early 2016 to consider a number of alternatives for an updated proposed rule. The officials also said that USDA intends to publish a proposed rule no later than September 2016. However, USDA officials said that they do not have a timeline for finalizing a new rule. Our body of work has shown that by setting implementation goals and a timeline, an organization builds momentum and can show progress from day one, thereby helping ensure an initiative's successful completion. Our work has also shown that timelines with milestones and interim steps can be used to show progress toward implementing efforts or to make adjustments to those efforts when necessary, and that without defined tasks and milestones, it is difficult for an agency to set priorities, use resources efficiently, measure progress, and provide management a means to monitor this progress. USDA officials noted that the process for finalizing a rule is challenging and would be difficult to complete in the remaining time under the current administration. Although publishing the notice of intent, the environmental impact statement, and the proposed rule in the coming months would be good first steps, without a timeline with milestones and interim steps for updating its GE crop regulations, it will be difficult for the agency to set priorities, use resources efficiently, measure progress, and provide management a means to monitor the agency's progress in promulgating a new rule. In addition, until a rule is finalized, USDA will not be able to fully assess the potential risks to plant and environmental health posed by GE crops created with alternative technologies. Completing a new rule to update USDA's regulations is particularly important given that the number of GE crops developed with alternative technologies is expected to grow. USDA has limited data on the extent and impact of unintended mixing of GE and non-GE crops from production to market. Nonetheless, USDA has taken some steps to address unintended mixing of GE and non-GE crops. In addition, farmers and the agribusiness industry (i.e., industries associated with agricultural production and services, such as shipping and processing) have taken steps to address unintended mixing. According to USDA officials and several stakeholders, USDA has limited data on the unintended mixing of GE and non-GE crops from production to market, making it difficult to know the extent of such mixing and the associated economic losses experienced by farmers. According to USDA officials, because GE crops on the market have been determined to be as safe as non-GE crops, are legal for farmers to cultivate, and are often destined for commingled commodity supplies, pollen movement between GE and non-GE crops on the market has been neither regulated nor tracked.
In its 2012 report on enhancing coexistence, AC21 recommended that USDA fund or conduct research in a number of areas relevant to the promotion of coexistence in American agriculture, including quantification of actual economic losses incurred by farmers as a result of unintended GE presence (unintended mixing) and the occurrence of these losses over time and in different geographic regions. Such research would enable USDA to gather more information on the extent and economic impact of the unintended mixing of GE and non-GE crops. USDA officials identified two primary ways that the presence of GE crops can impose additional costs on farmers producing non-GE crops: (1) by necessitating measures by farmers to prevent unintended mixing before harvest, and (2) through lost value on shipments rejected after harvest by grain-handling companies for exceeding contract specifications for allowable GE presence in a shipment. Measures farmers can take to prevent unintended mixing include using buffer zones, such as extra rows of alternative crops or empty space, intended to serve as a physical barrier between GE and non-GE crops, or planting crops at different times than neighboring crops to stagger the periods when each crop is pollinating. However, according to USDA officials and some stakeholders, these measures can result in decreased yields because of reduced acreage for production or a shorter growing season. USDA officials said assigning dollar values to preventive measures taken by farmers can be difficult and must consider geography, climate, or weather, which can differ substantially between areas (a simplified numeric illustration of how such a cost might be estimated appears at the end of this section). According to USDA officials, the cost of such preventive measures is generally factored into the contractual price for the non-GE crop, as these measures may be required by the buyer. In addition, USDA has limited data on the number of times crop shipments have been rejected because they have exceeded a specified level of unintended GE presence. Further, according to USDA officials and the AC21 report, data on the extent to which GE and non-GE crops are commingled within the supply chain are not available, in part because these data are considered proprietary by grain-handling companies. Furthermore, there are limited public data on the contracted prices for non-GE crop supplies, further challenging efforts to develop economic loss information. USDA officials said that the National Agricultural Statistics Service (NASS) and the Economic Research Service (ERS) have generally not collected information on unintended mixing between GE and non-GE crops in past farmer surveys because no specific request had been made by other USDA agencies to obtain this information. NASS and ERS are the USDA agencies principally responsible for conducting farmer surveys. The NASS and ERS missions are, in part, to provide timely, accurate, and useful statistics in service to U.S. agriculture and to inform and enhance public and private decision making on economic and policy issues, respectively. Further, according to NASS's strategic plan, NASS provides key statistical information and basic research essential for making informed policy decisions. As part of an effort to obtain some information on unintended GE presence in non-GE crops, ERS included a related question in the 2010 Agricultural Resource Management Survey.
The results of this survey indicated that approximately 2.5 percent of organic corn farmers responding had shipments rejected by a buyer because of the presence of GE material. However, the survey did not ask these respondents to quantify all economic losses or indicate when such losses were incurred, and the survey asked only about corn crops.

In 2014, partly in response to the AC21 recommendation to fund or conduct research on the quantification of economic losses incurred by farmers as a result of unintended GE presence, USDA's Organic Survey, administered by NASS, included a question asking organic farmers whether they had experienced an economic loss because of unintended GE presence in their crops offered for sale and, if so, to quantify their three most recent losses. The NASS survey data were released in September 2015 and showed that economic losses because of unintended GE presence in non-GE crops existed, although at very small levels. According to USDA officials, the survey data estimate $6.1 million in economic losses because of unintended GE presence for organic farmers from 2011 to 2014, in comparison to billions of dollars in sales for organic farmers during this period. In addition, of the estimated 14,093 organic farms, only an estimated 92 farms, or less than 1 percent, reported GE-related losses.

USDA officials said that the results of the 2014 Organic Survey do not provide complete information on the economic impacts of unintended GE presence because, in part, the survey included only organic farmers, their direct marketplace losses, and their three most recent losses. According to USDA documentation, prior to fielding this survey, a number of USDA officials, including APHIS, ARS, ERS, and Office of the Secretary officials, as well as the Chair of USDA's Organic Working Group, noted that it would also be useful to collect information on other ancillary economic costs, such as the costs of reshipping and re-storing rejected shipments, as well as the costs associated with finding new buyers for rejected shipments. However, for the 2014 Organic Survey, NASS officials said that NASS and other stakeholders decided to limit the number of questions on economic losses due to unintended GE presence given time constraints on deploying the survey and because of space restrictions. In addition, NASS officials we interviewed noted that the content of the question was primarily directed by USDA's Risk Management Agency in light of AC21 discussions about the possibility of offering crop insurance coverage for losses associated with unintended GE presence. NASS officials said that adding questions on economic costs to future organic surveys might be possible but would need to be considered in light of how a longer survey might affect farmer participation. They also said any changes to future surveys would have to be approved by OMB.

Without more complete information on economic losses and other costs, USDA is missing an opportunity to better understand the economic impacts of unintended GE presence. As discussed, NASS's mission is, in part, to provide timely, accurate, and useful statistics in service to U.S. agriculture. Further, OMB guidance directs federal agencies to (1) periodically review information systems to determine how mission requirements might have changed and whether the information continues to fulfill ongoing and anticipated mission requirements, and (2) ensure the information delivers the intended benefits to the agency and customers.
Although this guidance does not apply to USDA's survey efforts, it serves as an example of a best practice. In addition to wanting more information on the losses sustained by organic farmers because of unintended GE presence, USDA officials said similar information is needed for non-organic producers who do not use GE seed varieties and who take preventive measures, such as buffer zones, to minimize the potential of GE crops affecting their crops. Further, these officials said that while they lack information on the number of non-organic producers seeking to market their non-GE crop as identity-preserved (i.e., crops of a specific genetic variety, which might bring a higher price), the acreage planted with identity-preserved corn and soybeans is significantly greater than the acreage planted with organic versions of these crops. For example, they noted that the former is in the millions of acres, while the latter is in the hundreds of thousands of acres. Thus, these officials said that the potential economic impacts of the unintended presence of GE material in the crops of identity-preserved producers may be even greater than the impacts on organic producers. However, USDA currently has no efforts under way to survey these identity-preserved producers on this issue. Without including producers growing identity-preserved crops, in addition to producers growing organic crops, in its survey efforts, USDA lacks statistically valid data needed to understand the full scope of the potential economic impacts from unintended GE presence. In turn, without these data on these impacts, including the number of farmers and types of crops affected and the nature and extent of the associated economic losses, USDA is missing key information essential for making informed policy decisions on ways to better promote coexistence as called for by AC21.

USDA is not responsible for preventing the unintended mixing of GE material in non-GE and organic crops during cultivation and after these crops enter the supply chain but has, nonetheless, taken some steps to focus on this issue. For example, the Secretary of Agriculture has made strengthening coexistence among different agricultural production methods a priority. However, USDA officials said that while there are many steps that USDA can take to help farmers produce crops that meet their customers' needs, segregating GE and non-GE crops is generally a private sector function. As discussed, in February 2011, the Secretary of Agriculture reactivated AC21. In reactivating AC21, USDA announced that it would take further steps to address the larger issue of coexistence between different types of production methods in U.S. agriculture. In November 2012, after a number of public meetings and the solicitation of public comments, AC21 issued its report on enhancing coexistence, which made five broad recommendations for strengthening coexistence among different agricultural production methods, in particular between the production of GE and non-GE crops.
The recommendations were that USDA should:

• fund or conduct research, such as the quantification of actual economic losses incurred by farmers as a result of unintended GE presence and occurrences of these losses over time and in different geographies;

• fund education and outreach initiatives to strengthen understanding of coexistence between diverse agricultural systems;

• develop mechanisms that foster crop stewardship and mitigate potential economic risks derived from unintended gene flow between crop varieties, and promote and incentivize farmer adoption of appropriate stewardship practices;

• develop a plan for ongoing evaluation of commercially available non-GE and organic seed varieties and identification of market needs for producers serving GE-sensitive markets; and

• evaluate data gathered under the first recommendation regarding actual economic losses and, in considering loss data, if warranted, implement a compensation mechanism to help address such losses.

Although NASS added the survey question on possible economic losses to the 2014 Organic Survey in part because of an AC21 recommendation, USDA officials stated that USDA may not have the authority to implement some of the other recommendations in the AC21 report. For example, these officials said that USDA currently does not have the authority to compensate farmers who experience losses because of the unintended presence of GE material in their non-GE crops. Some stakeholders we interviewed said that non-GE farmers, including organic farmers, may not be adequately compensated in the marketplace to cover losses resulting from the unintended mixing of GE and non-GE crops. Other stakeholders, however, said that these farmers chose to grow non-GE crops with the knowledge of the potential for unintended mixing with GE crops, balancing that risk against the higher prices they can get in the marketplace for non-GE crops, particularly organic crops.

In March 2015, USDA held an invitation-only workshop for selected farmer, nonprofit organization, academic, and other stakeholders, available through a webcast for the public to view, to obtain additional input on how to further advance understanding of agricultural coexistence. After this workshop, USDA solicited public comments on key ongoing USDA initiatives, as well as proposed initiatives, in response to recommendations from AC21. Some of the ongoing initiatives include improving new crop insurance options for farmers not growing commodity crops, eliminating an insurance premium surcharge for organic farmers, supporting an organic seed finder database to help better understand the seed market and identify needs for increased sources of specific types of organic seed, and conducting outreach to the public on how to foster communication and collaboration to strengthen coexistence.

Some of the proposed initiatives include the following:

• developing a coexistence education and outreach strategy with the goal of getting farmers to understand and accept responsibility for both the biological and social consequences of their farming practices;

• developing updated procedures and a plan for handling and prioritizing the evaluation of relevant germplasm stocks and developing cost-effective approaches for assessing unintended GE presence and mitigating that presence in those stocks; and

• using USDA conservation programs, where applicable, to help finance farmers' measures to promote coexistence, such as creating buffer zones.
USDA does not have a timetable for the implementation of the AC21 report recommendations or any newer coexistence activities but has tracked the implementation of the ongoing initiatives closely, according to USDA's Office of the Secretary. USDA has begun implementing nearly all recommended activities that it currently has the authority to implement. Some of the activities, for example, research recommendations, are long-term projects. USDA has indicated that it will be considering the 2014 Organic Survey data along with other economic information on coexistence it is gathering in deciding on additional future steps. In December 2015, USDA developed a document that describes its main coexistence activities and posted it on the AC21 webpage.

According to USDA officials and some stakeholders, farmers and the agribusiness industry generally take measures to minimize unintended presence of GE material in non-GE and organic crops through pollen flow during cultivation or unintended mixing in storage, shipping, and processing channels. Commodity group stakeholders described the current crop commodity system as one that handles grains, oilseeds, and other crops in bulk to keep the prices of food low. They said the infrastructure was not built to address potential mixing of GE and non-GE crops, so the crops may be unintentionally mixed at multiple points in the supply chain. For example, GE and non-GE grain can be unintentionally mixed in rail cars, in barges, or at grain elevators because there generally is not a separate infrastructure for each type of grain. According to industry stakeholders we spoke with, even with these challenges, farmers and the agribusiness industry often take measures to keep GE and non-GE crops segregated to meet customer demand. These measures include the following:

• Physical separation of crops. Different crop types may be physically separated by buffer zones, and seed producers may use "pinning maps" to see the location of other reproductively similar seed crops being grown in their area.

• Temporal separation of crops. Farmers may plant their crops earlier or later than surrounding farms to minimize pollination of their crops by nearby GE crops. In transit and processing, a grain elevator or other facility that handles both GE and non-GE crops might accept only GE or non-GE crops on certain days of the week to avoid unintended mixing.

• Testing and inspection. Buyers may test or inspect arriving shipments of non-GE crops to determine whether GE material is present.

• Tolerance levels and contract specifications. Buyers, such as grain handlers, may have tolerance levels for GE content in non-GE shipments that they are willing to accept (e.g., less than 0.9 percent GE material); a simple illustration of such a check appears in the sketch after this list. These tolerance levels are sometimes included in contract specifications between buyers and farmers. Contracts may also specify farm-level measures, including buffer zones, which are required by buyers in the contracts with farmers.

• Cleaning of shared equipment and storage. Farm-level, transit, and handling equipment and storage infrastructure may be cleaned on a regular schedule, or after use for GE crops, to decrease the likelihood of unintended mixing with non-GE crops.

• Dedicated infrastructure. In some instances, growers, transporters, and processors may use distinct equipment and facilities to process non-GE crops separately from GE crops. Such infrastructure may include dedicated silos; transportation systems, including rail cars or containers; handling systems; and grain elevators.
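To illustrate how a buyer might apply a tolerance level of the kind described above, the following minimal sketch shows an acceptance check that a grain handler could run against test results for an incoming shipment. It is a hypothetical illustration only: the function and variable names are invented for this example, and the 0.9 percent figure is simply the example tolerance cited above, not an industry-wide standard.

```python
# Hypothetical sketch of a contract tolerance check for GE content in a
# shipment marketed as non-GE. Names, thresholds, and test values are
# illustrative only and do not represent an actual industry system.

DEFAULT_TOLERANCE = 0.009  # e.g., a contract allowing less than 0.9 percent GE material


def evaluate_shipment(shipment_id: str,
                      measured_ge_fraction: float,
                      tolerance: float = DEFAULT_TOLERANCE) -> bool:
    """Return True if the tested GE fraction is within the contract tolerance."""
    accepted = measured_ge_fraction < tolerance
    status = "ACCEPT" if accepted else "REJECT"
    print(f"Shipment {shipment_id}: {status} "
          f"(tested {measured_ge_fraction:.2%} GE vs. {tolerance:.1%} tolerance)")
    return accepted


# A rejected shipment can trigger the ancillary costs discussed earlier:
# reshipping, re-storage, and the search for a new buyer.
evaluate_shipment("LOT-001", 0.004)  # below tolerance: accepted
evaluate_shipment("LOT-002", 0.015)  # above tolerance: rejected
```

In practice, such a check would rest on laboratory test results rather than a single measured value, and the frequency with which shipments fail it is one input to the risk premiums discussed next.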
Figure 2 provides more information on measures to decrease unintended mixing of GE and non-GE crops. Some stakeholders said that many of the costs associated with these measures are passed on to consumers or others in the supply chain. For example, a grain handler may charge its customers a risk premium for a non-GE crop shipment; companies determine risk premiums based on how frequently crops exceed the acceptable level of GE material and on whether a crop is sourced from a location where many GE crops are cultivated. Stakeholders said that companies would otherwise not be able to absorb these costs unless the end-use consumers—for example, those buying organic or non-GE foods—are willing to pay a higher amount and take on the additional costs associated with these measures.

Stakeholders disagree on who should be held responsible for any financial losses caused by unintended mixing of GE and non-GE crops and how to go about maintaining coexistence. Some stakeholders suggest that non-GE crop farmers receive a premium price, which would help to cover the higher production costs (e.g., costs of preventive measures) and risks of unintended mixing. Others suggest that GE crop farmers are responsible for the unintended presence of GE material in neighboring non-GE and organic crops and should be liable for any related financial losses on the neighboring farms. USDA officials and some stakeholders have said that each production type has its associated production costs and risks, and it is an individual farmer's business decision as to which production type to choose, taking into account these factors.

USDA, EPA, and FDA provide varying degrees of information to the public about their oversight of GE crops. USDA and EPA generally provide detailed information and updates on actions relating to their oversight of GE crops through their websites, live forums, and other means of communication, including Federal Register notices. FDA provides information to the public on its voluntary premarket consultation process for GE crops. (See app. II for more detail on the information that the three agencies provide to the public about their oversight of GE crops.) In addition, USDA and FDA have different roles and approaches in labeling food that might contain GE ingredients. USDA certifies organic products, which are intended to be non-GE. Companies also can hire USDA to evaluate whether their products meet company-specified non-GE standards. FDA maintains that there is no food safety reason to label GE foods. (See app. III on USDA and FDA labeling; app. IV provides information on stakeholder perspectives and legislative actions on the labeling of foods derived from GE ingredients.)

According to USDA's APHIS strategic plan for fiscal years 2015 to 2019, USDA will use traditional communication tools, including publications, public service announcements, and newer technologies, to reach its stakeholders, partners, and customers. In addition, USDA regularly provides information and updates on its website about actions and meetings relating to its oversight of GE crops and other GE organisms, and offers opportunities for public input. For example, APHIS's Biotechnology Regulatory Services (BRS), which is responsible for implementing USDA regulations for certain GE crops that may pose a risk to plant health, holds annual public stakeholder meetings that are open to all interested parties to foster engagement in and ensure transparency of BRS's regulatory activities.
USDA also routinely informs the public about its actions related to oversight of GE crops through notices in the Federal Register and maintains a list of all open and previous relevant Federal Register notices on its website. For example, USDA notified the public via the Federal Register a month in advance of its 2-day workshop on coexistence so that the public could listen in by telephone or webcast, and USDA shared information on how listeners could provide comments after the workshop. USDA also uses the Federal Register to alert the public to the availability of preliminary determinations and related assessments for new GE crops for which developers are seeking nonregulated status to commercialize these crops. In addition, APHIS has made educating the public about biotechnology an area of emphasis in its strategic plan for fiscal years 2015 to 2019.

According to EPA officials, the agency has made it a policy priority to increase engagement with the public on GE technologies and their applications using a variety of platforms. For example, EPA's Office of Pesticide Programs has an outreach program that is responsible for communicating to the public—through trade publications, media, and an EPA e-mail distribution list that has about 11,000 subscribers—all of EPA's actions on regulatory decision making regarding pesticides. On its website, EPA provides updates on actions related to oversight of GE crops, including pesticide registrations such as those intended for use with GE crops. As part of these updates, EPA made its predecision rationale for the registration of the pesticide Enlist Duo, an herbicide intended for use on some herbicide-tolerant GE crops, available for public comment. EPA officials also cited other ways the agency provides information to the public with respect to its GE crops oversight. For example, the agency maintains a list of pending pesticide registration decisions that are open for public comment in a docket on its website, with links to the respective comment pages for these pesticides on Regulations.gov.

FDA officials said that the agency has developed more consumer-friendly information on foods derived from GE crops, which is made available on FDA's website, including a question-and-answer web page on foods derived from GE crops and the text of FDA congressional testimonies on its oversight of foods derived from GE crops. In addition, FDA maintains a biotechnology web page that includes FDA's 1992 policy statement and makes recommendations about what kinds of assessments companies can perform to help determine that GE plant varieties are just as safe as their non-GE counterparts. The FDA web page also includes guidance documents for industry, including consultation procedures under FDA's 1992 policy statement, information on recommended premarket notification concerning foods from GE plants, and guidance for industry on voluntary labeling indicating whether foods have or have not been derived from GE plants. FDA officials stated that this information is generally targeted to a more technical audience, as opposed to the general public, but that the agency has also developed more consumer-friendly information on biotechnology and GE plants for its website. According to FDA officials, FDA does not post developer submissions, including the safety and nutritional assessments of the GE crop submitted and the supporting data, on its website.
In addition, these officials said the agency does not post information on consultations that were withdrawn before finalization or the reasons they were withdrawn, including any FDA concerns. However, FDA officials said that the safety and nutritional assessments are available in accordance with FDA's public information regulations and administration policies and that FDA proactively publishes a summary of the consultation at the conclusion of each consultation. FDA officials stated that interested parties are able to obtain the developer submissions and related data that are not trade secrets or confidential commercial information from the agency by submitting Freedom of Information Act (FOIA) requests.

Stakeholders expressed varying perspectives on FDA's voluntary premarket consultation process. For example, some stakeholders noted the difficulty of going through the FOIA process to access the underlying data and the lack of a public comment period or public notice prior to a consultation's completion. FDA provides information on the voluntary premarket consultation process on its website, including submission date and developer name; the type of GE crop submitted; the trait being genetically engineered into the crop (e.g., insect resistance); the intended use (e.g., human food or animal feed); and whether the product required EPA review (when a plant-incorporated protectant, i.e., pesticide, is produced). Some stakeholders also said that FDA's final response letters and related notes to the file, the date and text of which the agency posts on its website to mark the completion of the consultations, do not demonstrate what FDA has done to analyze the companies' claims. FDA officials said that information on the voluntary premarket consultation is often of a highly technical nature, and if FDA were to post this information, the agency would have to evaluate what information in the submission may lawfully be disclosed, revise the electronic files to make them more accessible under section 508 of the Rehabilitation Act of 1973, and then review the material to ensure its accuracy before posting, a process that would take considerable staff time. For this reason, FDA officials said that providing the underlying data and further detail on a voluntary consultation in response to an occasional FOIA request is more efficient.

USDA and FDA have different roles in labeling food that might contain GE ingredients. As a result, the agencies differ in their approach to providing information to the public on GE food ingredients and the labeling of GE food ingredients. USDA currently provides information to the public about the GE content of a food product through two programs: USDA's National Organic Program and its Process Verified Program. The Organic Foods Production Act of 1990 directs the Secretary of Agriculture to establish a national organic certification program. Under the National Organic Program, a program managed by USDA's Agricultural Marketing Service, products can receive a USDA organic seal if they meet specific national standards. USDA develops the standards for organically produced agricultural products to assure consumers that products with the USDA organic seal meet consistent, uniform standards. According to USDA officials, the Process Verified Program was started in 1999 and is conducted on a fee-for-service basis by USDA's Agricultural Marketing Service.
Exercising its authority under the Agricultural Marketing Act of 1946, the Agricultural Marketing Service serves as a third-party auditor, physically visits a site, and verifies that a company's processes meet standards that the company sets for itself. Companies, such as grain handlers or poultry, pork, and cattle producers and processors, submit their processes to USDA for verification. According to USDA officials, the Process Verified Program allows consumers to be assured that what they are buying adheres to the company's standards when they see USDA's process verified seal on packaging.

FDA regulates food labeling and enforces prohibitions against misbranded foods. According to FDA documentation and agency officials, FDA applies the same labeling principles to foods regardless of whether they are derived from GE or non-GE sources. The agency maintains it has no basis for concluding that foods derived from GE sources differ from their non-GE counterparts in any meaningful or uniform way solely based on their method of production, and therefore there is no basis for requiring labeling that indicates a food was developed through GE techniques. FDA provides information on its website about why foods from GE plants are not currently required to be labeled to inform consumers about how the food was produced. The agency acknowledges on its website that there is strong consumer interest in knowing whether foods were produced using GE methods and that FDA supports voluntary labeling, maintaining that such statements must be truthful and not misleading.

FDA finalized guidance to industry in November 2015 on voluntary labeling indicating whether foods have or have not been derived from GE plants. This guidance contains nonbinding recommendations and states that labeling by manufacturers on a wholly voluntary basis regarding whether a food was or was not bioengineered is acceptable to FDA, provided that such labeling is truthful and not misleading. In addition, the guidance states that FDA encourages food manufacturers to ensure that labeling terminology concerning the use of modern biotechnology in the production of food or its ingredients be accurate and consistent and that the integrity and meaning of scientific terminology be preserved to help ensure clear communication in food labeling. The guidance also states that a manufacturer that claims that a food product or its ingredients, including foods such as raw agricultural commodities, are GE or not GE should substantiate that the claim is truthful and not misleading. The guidance provides methods a manufacturer may use to substantiate the claim, including documentation of handling practices and procedures (those with control over growing, harvesting, storing, and distribution should consider appropriate recordkeeping to document whether foods are or are not produced using genetic engineering, including segregation procedures).

GE crops make up more than 90 percent of major U.S. crops such as corn, soybeans, and cotton. USDA, FDA, and EPA are responsible for regulating GE crops in the United States, with USDA generally ensuring that GE crops do not pose risks to plant and environmental health. Historically, USDA oversight has focused on GE crops that were created using plant pests, such as a bacterium or virus.
In recent years, USDA has received an increasing number of inquiries from GE crop developers regarding whether their GE varieties—created using alternative technologies that either did not involve the use of a plant pest or did involve the use of a plant pest but did not result in plant pest DNA in the crop developed—are subject to USDA regulations. USDA acknowledges that its regulations overseeing GE crops have not kept pace with these technological developments and do not cover all GE crops. In February 2015, USDA withdrew its 2008 proposed rule, which sought to revise its regulations regarding the importation, interstate movement, and environmental release of certain genetically engineered organisms to bring the regulations into alignment with the PPA and to update the regulations in response to advances in genetic science and technology. USDA officials said that USDA intends to publish a proposed rule no later than September 2016 but that they do not have a timeline for finalizing an updated rule. Publishing a notice of intent, a programmatic environmental impact statement, and a proposed rule in the coming months would be good first steps, but without a timeline, with milestones and interim steps, for updating its GE crop regulations, it will be difficult for the agency to set priorities, use resources efficiently, measure progress, and provide management a means to monitor the agency's progress in promulgating a new rule. In addition, until a rule is finalized, USDA will continue to lack regulatory authority to fully assess the potential risks, if any, to plant and environmental health posed by GE crops created with alternative technologies, in particular those that either do not use plant pests or use plant pests but do not result in plant pest DNA in the crop developed.

Furthermore, USDA has limited data on unintended mixing of GE and non-GE crops, making it difficult for USDA to identify the extent and impact of the unintended mixing. The Secretary of Agriculture reactivated AC21, which has prioritized the promotion of agricultural coexistence. In its 2012 report on enhancing coexistence, AC21 recommended that USDA fund or conduct research that would enable USDA to gather more information on the extent and economic impact of unintended mixing. USDA's 2014 Organic Survey was an important first step toward gathering data on the economic losses experienced by non-GE farmers, but the data collected do not provide complete information on the economic impacts caused by unintended mixing of GE and non-GE crops. Without collecting additional information in future organic surveys, such as the costs of reshipping and re-storing shipments rejected because of unintended GE presence, as well as the costs associated with finding new buyers for such shipments, USDA is missing an opportunity to better understand the economic impacts of unintended GE presence. In addition, USDA does not have data on economic losses because of unintended GE presence for non-GE producers other than organic producers, such as those who seek to market their crops as identity-preserved. Without collecting similar data from producers of identity-preserved crops, USDA lacks statistically valid data needed to understand the full scope of potential economic impacts from unintended GE presence.
In turn, without these data, including the number of farmers and types of crops affected and the nature and extent of economic losses, USDA is missing key information essential for making informed policy decisions on ways to better promote coexistence as called for by AC21.

We are making three recommendations to the Secretary of Agriculture. To improve USDA's ability to oversee GE crops, we recommend that the Secretary of Agriculture direct the Administrator of APHIS to develop a timeline, with milestones and interim steps, for updating its existing regulations to cover GE crops developed with alternative technologies that either do not use plant pests or use plant pests but do not result in plant pest DNA in the crop developed.

To improve USDA's ability to better understand the economic impacts of unintended mixing of GE and other crops, we recommend that the Secretary of Agriculture take the following two actions:

• Direct the Administrator of NASS to work with all relevant USDA stakeholders, including APHIS and the Organic Working Group, to determine what additional information should be sought in future organic surveys, such as the costs of reshipping and re-storing shipments rejected because of unintended GE presence, as well as the costs associated with finding new buyers for such shipments.

• Direct the Administrator of NASS to include producers growing identity-preserved crops, in addition to organic producers, in USDA's survey efforts.

We provided a draft of this report to USDA, EPA, and the Department of Health and Human Services for review and comment. USDA provided written comments, which are reproduced in appendix V. USDA said that it generally agreed with the report's recommendations. EPA and the Department of Health and Human Services did not provide written comments. USDA and the Department of Health and Human Services' FDA also provided technical comments that we incorporated as appropriate.

Concerning our first recommendation in the draft report, to develop a timeline, with milestones and interim steps, for updating its existing regulations to cover GE crops developed with newer technologies that do not depend on the use of plant pests, USDA said that it agreed, in part, with the recommendation. Specifically, USDA said it had developed an internal timeline outlining key milestones and interim steps, all with associated target dates, for updating the regulations that cover GE organisms. While USDA may have such a timeline now, it did not provide us with this timeline during the course of our work. In addition, USDA stated that its proposed regulations are being developed to address products of biotechnology regardless of the laboratory technique used to create or modify the genome. Thus, the intention of the proposed rule currently in development (as well as the 2008 proposed rule) is the overall protection of plant health through regulation of GE organisms that may pose a plant pest or noxious weed risk, with no relation to the technology used to develop the GE organism. In response to USDA's comments, we modified our recommendation so that instead of discussing GE crops developed with newer technologies that do not depend on the use of plant pests, the recommendation discusses GE crops developed with alternative technologies that either do not use plant pests or use plant pests but do not result in plant pest DNA in the crop developed.
Concerning our second recommendation, that NASS work with all relevant USDA stakeholders, including APHIS and the Organic Working Group, to determine what additional information should be sought in future organic surveys, such as the costs of reshipping and re-storing shipments rejected because of unintended GE presence, as well as the costs associated with finding new buyers for such shipments, USDA stated that it agreed. Specifically, USDA stated that NASS works with all relevant stakeholders to determine what information is needed for future organic surveys. For example, USDA said that since the 2014 Organic Survey, NASS has held a series of meetings with APHIS officials and the Chair of USDA's Organic Working Group to discuss the 2014 Organic Survey results and how to move forward with future survey questions. USDA stated that the most recent of these meetings was held in January 2016 and that, at the conclusion of this meeting, NASS, APHIS, and the chair of the working group agreed to continue meeting and to bring more stakeholders into the discussions, which will help ensure that future organic surveys and related surveys involving GE-related questions receive the attention needed to obtain data, for example, to better understand the economic impacts of unintended mixing of GE crops. Such actions, if taken, would address our recommendation.

Concerning our third recommendation, that NASS include producers growing identity-preserved crops, in addition to organic producers, in USDA's survey efforts, USDA did not indicate whether it agreed or disagreed. USDA stated that NASS's overall survey programs currently include identity-preserved crops and conventional and organic producers. USDA described the sample design for its survey programs, specifically that the design includes area and list frames, and their definitions. However, the point of this recommendation is that NASS should survey producers growing identity-preserved crops regarding their potential economic losses from unintended GE presence, as is being done for organic producers. As we noted in the report, U.S. acreage planted to identity-preserved crops is significantly greater than that planted to organic crops; yet little is known about the economic costs to identity-preserved farmers of unintended mixing. Until NASS surveys producers growing identity-preserved crops on these potential economic costs, USDA will continue to lack statistically valid data needed to understand the full scope of potential economic impacts from unintended GE presence.

While USDA stated that it generally agrees with our recommendations, it also stated that it takes issue with five themes that it sees repeated throughout the report. Specifically, USDA states that it is concerned that the intent of its current efforts to update the regulations is misstated; the intent of the 2008 proposed rule is misstated; plant pests and newer technologies are inappropriately conflated; newer technologies are presented as inherently more risky; and the 'Am I Regulated?' inquiries are presented as escaping regulation because they were developed using newer technologies. USDA further clarified its position on these five themes, stating first that its current intention for updating its biotechnology regulations remains the same as it was in 2008: to protect plant health from plant pests and noxious weeds regardless of the method used to transform the organism.
In this regard, USDA said that in several places in our report we incorrectly state that the 2008 proposed rule was intended to capture newer technologies. We do not say that the intent of the 2008 proposed rule or USDA's current efforts to update its biotechnology regulations is to capture new technologies. However, we do say that underlying USDA's efforts to update its regulations is the goal of subjecting new GE crops developed with newer (now alternative) technologies to a more comprehensive assessment of potential risks before commercialization of these new crops. As discussed, GE crops developed with the use of a plant pest are subject to a comprehensive assessment of their potential risks before the crops can be commercialized. Such assessments are not required, under USDA's biotechnology regulations, for GE crops developed with alternative technologies. As a result, there is a gap in USDA's current regulatory coverage that the agency has been seeking to close for more than 10 years, starting with the development of the 2008 proposed rule and continuing with its current efforts to update its biotechnology regulations.

In addition, USDA stated that we used the phrase "newer technologies that do not involve a plant pest" several times and that this phrase incorrectly conflates the use of newer technologies with the use of a plant pest component, when there are older technologies (e.g., biolistics) that do not involve a plant pest and newer technologies that do (e.g., TALENS). USDA stated that its statutory authority is to prevent the introduction and dissemination of plant pests and noxious weeds in the United States, and as such, it may regulate any GE organism which "the Administrator determines is a plant pest or has reason to believe is a plant pest" and may regulate, as necessary, any plant that poses a risk as a noxious weed. We acknowledge that some newer technologies, such as TALENS, involve the use of a plant pest, and some older technologies, such as biolistics, do not. However, we note that while TALENS involves the use of a plant pest in developing a new GE crop, no plant pest DNA remains in the crop developed. Thus, that crop is not subject to USDA's biotechnology regulations unless USDA later learns, after the crop has been commercialized, of a plant pest or noxious weed concern. It is that distinction we were trying to draw with our use of "newer technologies," that is, those technologies that result in a new GE crop that is not subject to USDA regulation under its biotechnology regulations. In this regard, during our work, USDA, EPA, and FDA officials told us that the field of biotechnology is rapidly evolving with the introduction of new GE organisms and new and emerging technologies that do not depend on the use of a plant pest. In light of USDA's comments, we have revised the report to substitute "alternative technologies" for "newer technologies." Further, we have revised the report to define alternative technologies as those in which the GE crop developed contains no plant pest DNA. This would include technologies such as TALENS that use a plant pest, as well as technologies that do not use a plant pest at all.

Moreover, USDA also stated that the draft report implies that some of the newer technologies are inherently more risky and that USDA has no reason to believe that newer technologies, such as gene editing, are riskier or present any new risks, as compared to older technologies.
USDA stated that it concludes, as did the Office of Science and Technology Policy in 1986 when it issued the Federal Coordinated Framework for the Regulation of Biotechnology, that the potential risk of a GE organism is derived from the characteristics of the GE organism itself and the environment into which it is introduced, not from the technology that was used to create the GE organism. We disagree that we have characterized newer (now alternative) technologies as inherently more risky. Instead, the report discusses the potential risks of new GE crops developed with these technologies that are not subject to USDA's biotechnology regulations. For example, these crops are not subject to USDA's permit and notification requirements, the conduct of confined field trials, or the submission of detailed information for review by USDA scientists. Further, under USDA's regulations, developers of new GE crops developed using alternative technologies do not need to petition USDA to "deregulate" their product before commercializing it. However, in light of USDA's concern, we have revised the report to further qualify "potential risks" by adding "if any" after this phrase. We also have revised the report to make clear that we are focusing on those new GE crops in which the crop contains no plant pest DNA, regardless of whether the technology employed used a plant pest or not.

Finally, USDA said the report gives the incorrect impression that many of the 'Am I Regulated?' inquiries from GE crop developers escape USDA regulations because they were developed using newer (now alternative) technologies. USDA stated that most inquiries to date concern GE plants developed with biolistics, which is an older technology, and that for those inquiries where it determined that the organism in question was not a regulated article, it first concluded that there was no reason to believe the organism presented a plant pest risk. As discussed, we have made changes to the report defining alternative technologies as those in which the GE crop developed contains no plant pest DNA. This includes technologies where a plant pest may have been used initially as part of the GE crop development process. It also includes technologies that do not use plant pests at all in the development of a GE crop. Further, while USDA may conclude that there is no reason to believe the organism covered by an inquiry presents a plant pest risk, that conclusion is not based on a comprehensive assessment (e.g., the conduct of confined field trials and the submission of detailed information for review by USDA scientists) of the potential risks of that organism (i.e., a new GE crop). As discussed, under USDA's current biotechnology regulations, only those new GE crop varieties developed with and containing plant pest DNA are subject to a comprehensive USDA assessment of potential risks before commercialization, and it is this regulatory gap, at least in part, that USDA seeks to close in updating its biotechnology regulations. At present, only after commercialization of a GE crop created with an alternative technology can USDA take regulatory action, and then only if it becomes aware of a possible plant pest or noxious weed risk.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, the Administrator of the Environmental Protection Agency, the Director of the Office of Management and Budget, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

We reviewed federal oversight and information on genetically engineered (GE) crops. Our objectives were to examine (1) the steps the U.S. Department of Agriculture (USDA), Environmental Protection Agency (EPA), and Food and Drug Administration (FDA) have taken to regulate GE crops, including those derived from alternative technologies; (2) what data USDA has on the extent of unintended mixing of GE and non-GE crops, and what steps, if any, have been taken to prevent such mixing; and (3) the extent to which USDA, EPA, and FDA provide information to the public on the GE crops they oversee. In this report, we define alternative technologies as those in which the GE crop developed contains no DNA of a plant pest, such as a bacterium or virus. This includes technologies in which a plant pest may have been used initially as part of the GE crop development process. It also includes technologies that do not use plant pests at all.

In general, to achieve our objectives, we interviewed officials or obtained documentation from USDA, EPA, and FDA. We also interviewed nonfederal stakeholders, including biotechnology, food industry, consumer, environmental, farm, and commodity group representatives, and those from academia. Industry and commodity groups included the Agricultural Retailers Association, American Seed Trade Association, American Soybean Association, American Sugarbeet Growers Association, Association of Official Seed Certifying Agencies, Biotechnology Industry Organization, Cargill, Clarkson Grain, Grocery Manufacturers Association, National Corn Growers Association, National Grain and Feed Association, North American Export Grain Association, and Organic Seed Growers and Trade Association. Consumer groups included the Center for Food Safety, Center for Science in the Public Interest, Institute for Responsible Technology, Consumers Union, and Organic Consumers Association. In addition, we interviewed officials from the American Association for the Advancement of Science, American Farm Bureau Federation, Biology Fortified, Environmental Working Group, Food & Water Watch, Genetic Literacy Project, National Association of State Departments of Agriculture, National Conference of State Legislatures, National Family Farm Coalition, National Organic Coalition, and the Non-GMO Project, as well as six academics who are agricultural economists studying the potential economic impacts of GE crops. We identified the academics through a literature review, as well as through the "snowball sampling" technique.
More specifically, to determine how federal agencies regulate GE crops derived from alternative technologies, we interviewed agency officials from USDA, EPA, and FDA, as well as representatives of 35 external, nonfederal stakeholders, generally using a standard set of questions. We took several steps to identify external, nonfederal stakeholders to interview for our work. First, we considered the stakeholders interviewed by GAO for our 2008 report related to GE crops and whether they conduct work in areas related to this engagement. Second, we identified individuals through literature reviews. Third, we used the snowball method, in which each stakeholder was asked to propose or recommend additional stakeholder groups for GAO to interview. We selected the 35 stakeholders to ensure that we captured a broad spectrum of views on GE crop issues. Findings from the interviews of this sample of stakeholders cannot be generalized to stakeholders we did not interview. We additionally gathered information from the National Academy of Sciences, including publicly available information from three public meetings and 10 webinars associated with the academy's ongoing study on GE crops. We did not evaluate the underlying science behind alternative GE technologies or the scientific basis of regulatory decisions related to GE crops made by USDA, EPA, and FDA.

To examine what data, if any, exist on unintended mixing of GE crops and non-GE crops, we obtained USDA's strategic plans and reports, including USDA's Advisory Committee on Biotechnology and 21st Century Agriculture reports with recommendations addressing options to minimize the mixing of GE and non-GE crops. We also interviewed USDA officials who regulate, oversee, or set standards for cultivation, shipping, handling, and packing of major commodity crops to determine the extent of USDA's role, if any, with respect to addressing the unintended mixing of GE and non-GE crops. Further, we interviewed stakeholders identified in our first objective to determine nongovernmental roles in preventing the unintended mixing of GE and non-GE crops in the supply chain. We also conducted a literature search to identify and review studies on the actual or potential impact of GE crops on other crops. We did not review GE crops regulated under USDA's permit and notification field trial processes, or the extent to which these crops are affecting the supply chain, as USDA's Inspector General was reviewing these subjects. The USDA Inspector General issued its report in September 2015. Instead, the focus of our report is those GE crops that have been deregulated and are available for commercialization.

To determine the extent to which USDA, EPA, and FDA are providing the public with information on GE crops, we interviewed agency officials and reviewed agency documentation regarding how these agencies reached regulatory or policy decisions related to GE crops, and we examined the extent to which that information is disseminated publicly, for example, on the agencies' websites. We also interviewed stakeholders identified in our first objective to determine their perspectives on the adequacy of the agencies' provision of information on GE crops to the public. To examine USDA and FDA approaches to labeling GE food ingredients, we reviewed relevant laws and agency guidance and interviewed agency officials on applicable programs. Specifically, at USDA, we reviewed documents and interviewed officials with respect to USDA's National Organic Program and Process Verified Program.
At FDA, we reviewed FDA documentation and written statements from FDA officials on the agency's labeling principles. The documentation included FDA's 1992 Statement of Policy: Foods Derived from New Plant Varieties, and its 2015 guidance to industry on voluntary labeling indicating whether foods have or have not been derived from GE plants.

We conducted this performance audit from August 2014 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix discusses the efforts of three agencies—the U.S. Department of Agriculture (USDA), the Environmental Protection Agency (EPA), and the Food and Drug Administration (FDA)—to provide information to the public on the oversight of genetically engineered (GE) crops.

USDA's Animal and Plant Health Inspection Service's (APHIS) strategic plan for fiscal years 2015 to 2019 includes education and outreach efforts to ensure that GE organisms do not pose plant pest risks when released into the environment and as an alternative to rulemaking. According to the plan, USDA will use traditional communication tools, including publications and public service announcements, in addition to newer technologies, to reach its stakeholders, partners, and customers, and plans to expand its use of technology. Further, APHIS's Biotechnology Regulatory Services (BRS) 2015 to 2018 strategic plan states that USDA will use high-quality analysis in decision making and will strengthen risk assessment models to focus on GE organisms that pose a risk to plant health. USDA will also seek to keep stakeholders aware of its regulatory actions. USDA regularly makes efforts to provide information relating to its oversight of GE crops and to offer opportunities for public input. For example:

• APHIS's BRS, which is responsible for implementing USDA regulations for certain GE crops that may pose a risk to plant health, holds annual public stakeholder meetings that are open to all interested parties to foster engagement and transparency in BRS's regulatory activities.

• BRS makes available on its website all letters of inquiry from GE crop developers asking whether their GE crops are subject to USDA regulations, as well as BRS's response to each inquiry.

• BRS makes available on its website a list of all pending and completed petitions submitted by developers to receive a determination on whether their new GE crops are likely to pose a plant pest risk and therefore would be regulated, and provides links to supporting documentation, including guidance for submitting a petition, the developer's initial petition, BRS's preliminary assessment, and BRS's final assessment and decision.

• BRS has posted to its website an online video explaining how USDA regulates biotechnology.

• USDA sought public input through the Federal Register on how to foster communication and collaboration between farmers to strengthen coexistence following the release of the Advisory Committee on Biotechnology and 21st Century Agriculture report in 2012.

EPA officials stated that the agency has a significant interest in being transparent and assuring the public that the federal government is taking all measures necessary to ensure human and environmental safety.
These officials said that EPA provides updates on its actions related to GE crops, including making its predecision rationale for pesticide registrations available for public comment. For example:

• During the registration process for the pesticide Enlist Duo, an herbicide intended for use on some herbicide-tolerant GE crops, EPA made its assessments of the pesticide and its rationale for regulating the pesticide available for public comment for 30 days. According to agency officials, EPA extended the comment period on this pesticide for an additional 30 days and evaluated public comments before making a final registration decision. EPA then notified the public about which changes the agency had made in response to public comments.

• EPA occasionally convenes a Scientific Advisory Panel to provide independent scientific advice on a wide range of health and safety issues related to pesticides, including those related to GE crops, and information such as archived transcripts from the panel's meetings is made publicly available on EPA's website.

• EPA's Office of Pesticide Programs has an outreach program that is responsible for communicating to the public all of EPA's actions on regulatory decision making regarding pesticides. Through this program, EPA disseminates information through trade publications, media, and an EPA e-mail distribution list that has about 11,000 subscribers. According to EPA officials, the outreach program also employs press releases and is increasing its use of social media tools to engage the public on newer platforms.

FDA officials stated that the agency has developed more consumer-friendly information on foods derived from GE crops. This information, made available on FDA's website, includes a question-and-answer web page on foods derived from GE crops, a 2013 statement on labeling of foods from GE crops, a consumer update on FDA's role in regulating the safety of foods from GE crops, and the text of FDA congressional testimonies on its oversight of foods derived from GE crops. Information on FDA's voluntary premarket consultation process available on the agency's website includes:

• submission date and developer name;

• the type of GE crop submitted;

• the trait being genetically engineered into the crop (e.g., insect resistance);

• intended use (e.g., human food or animal feed);

• whether the product required EPA review (when a plant-incorporated protectant, i.e., pesticide, is produced);

• FDA's "note to the file" summarizing FDA's evaluation of the information submitted by the developer and the consultation's outcome; and

• the date and text of FDA's final response letter, also called a "no further questions" letter, to the developer, which marks the completion of the consultation.

Additional information about voluntary premarket consultations that is not otherwise trade secret or confidential commercial information may be obtained through a Freedom of Information Act (FOIA) request, according to FDA documentation and officials. From September 2002, the earliest date for which FDA maintains digital internal records on FOIA requests, through July 2015, FDA received at least one FOIA request for 22 (39 percent) of the 56 voluntary premarket consultations completed during that time. FDA officials stated that although developers generally note that their submissions typically do not contain very much trade secret or confidential commercial information, some companies request that their information not be published on the Internet.
Further, FDA officials noted that this information is often highly technical and that, before posting it, FDA would have to evaluate what information in each submission may lawfully be disclosed, revise the electronic files to make them accessible under section 508 of the Rehabilitation Act of 1973, and then review the material to ensure its accuracy, a process that would take considerable staff time. Noting that their staff already face a number of competing priorities, these officials questioned whether routinely preparing companies’ detailed data for posting on FDA’s website was a good use of staff time, especially since these data can be obtained by FOIA request. The U.S. Department of Agriculture (USDA) and Food and Drug Administration (FDA) have different roles in the regulation of food labeling, including the labeling of foods that might contain genetically engineered (GE) ingredients, based on their respective statutory authorities. As a result, the agencies differ in their approaches to providing information to the public on GE food ingredients and on the labeling of foods containing GE ingredients. USDA currently provides information to the public about the GE content of a food product through two programs: USDA’s National Organic Program and its Process Verified Program. The Organic Foods Production Act of 1990 directs the Secretary of Agriculture to establish a national organic certification program. Under the National Organic Program, a program managed by USDA’s Agricultural Marketing Service, products can receive a USDA organic seal if they meet specific national standards. USDA develops the standards for organically produced agricultural products to assure consumers that products with the USDA organic seal meet consistent, uniform standards. Specifically, according to USDA’s policy and regulations, the National Organic Program establishes standards for organic certification that, among other things, forbid the use of GE methods in the production of organic crops. Products bearing the USDA organic seal have received a process-based certification and have met the other national standards set by USDA’s National Organic Program to attain organic status. USDA’s National Organic Program standards for certified organic crops prohibit the use of sewage sludge, synthetic fertilizers, synthetic pesticides, and genetic engineering. Meat and poultry products that qualify as USDA organic may make a “Non-Genetically Engineered” claim based on their organic certification. However, the USDA organic seal itself does not bear the term non-GE. In addition, the National Organic Program requires that certifying agents conduct residue testing from a minimum of 5 percent of the operations that they certify. According to USDA officials, the Process Verified Program was started in 1999 and is conducted on a fee-for-service basis by USDA’s Agricultural Marketing Service. Exercising its authority under the Agricultural Marketing Act of 1946, the Agricultural Marketing Service serves as a third-party auditor, physically visits a site, and verifies that a company’s processes meet the standards the company sets for itself. According to USDA documentation and officials, these processes include, for example, how crops are grown or how livestock are raised, and whether the products are handled and processed according to specific guidelines.
Companies, such as grain handlers or poultry, pork, and cattle producers and processors, submit their processes to USDA for verification. According to USDA officials, the Process Verified Program allows consumers to be assured that what they are buying adheres to the company’s standards when they see USDA’s process verified seal on packaging. The USDA website includes the process points for all companies for which the Agricultural Marketing Service has completed process verifications and displays the standards USDA used to audit each company’s process or processes. For example, the Agricultural Marketing Service has done process verifications to evaluate whether a company is feeding poultry a vegetarian diet, is not treating its livestock with antibiotics, or is handling grains in accordance with specific contract specifications (e.g., that it is segregating the grains in a way that satisfies a contract). USDA’s Agricultural Marketing Service completed its first process verification for a non-GE process in May 2015. It allowed a company to market its raw organic corn and soybeans by saying they were produced using a process intended to result in a product with GE content below a specific threshold. The company’s claim was that its non-GE process results in corn and soybeans that do not exceed 0.9 percent GE content. According to agency officials, USDA’s program verified the company’s process for the crops, although it did not address the content of any final products. USDA officials stated that there has to be transparency if a food processor is going to use the USDA Process Verified Program seal. For example, the packaging of soy milk produced from non-GE process-verified soybeans would have to specify that it was made from non-GE soybeans but could not imply that the final product, which includes other ingredients, had been produced in accordance with the same non-GE standards, unless those were also process verified by USDA. USDA officials stated that other companies have approached USDA about potentially pursuing their own non-GE process verification. These officials said that USDA will continue to operate the program by evaluating companies’ processes against the companies’ own standards because USDA does not have the statutory authority to define a universal non-GE process standard. These officials stated that setting a government standard for what constitutes a non-GE food or process would probably require legislative action by Congress. FDA regulates food labeling and enforces prohibitions against misbranded foods. According to FDA documentation and agency officials, FDA applies the same labeling principles to foods regardless of whether they are derived from GE or non-GE sources. The agency maintains it has no basis for concluding that foods derived from GE sources differ from their non-GE counterparts in any meaningful or uniform way solely based on their method of production, and therefore there is no basis for requiring labeling that indicates a food was developed through GE techniques. Further, according to its 1992 policy on GE foods, FDA maintains that it has no basis to conclude that, as a class, foods developed with GE techniques present any different or greater safety concern than foods developed by non-GE plant breeding.
According to FDA officials, scientific studies, information, and data FDA has reviewed since it issued its 1992 policy, including data and information evaluated through its voluntary premarket consultation process, reflect this same conclusion. FDA provides information on its website about why foods from GE plants are not currently required to be labeled to inform consumers about how the food was produced. The agency acknowledges on its website that there is strong consumer interest in knowing whether foods were produced using GE methods and that FDA supports voluntary labeling, maintaining that such statements must be truthful and not misleading. FDA finalized guidance to industry in November 2015 on voluntary labeling indicating whether foods have or have not been derived from GE plants. This guidance contains nonbinding recommendations and states that labeling by manufacturers on a wholly voluntary basis regarding whether a food was or was not bioengineered is acceptable to FDA, provided that such labeling is truthful and not misleading. In addition, the guidance states that FDA encourages food manufacturers to ensure that labeling terminology concerning the use of modern biotechnology in the production of food or its ingredients be accurate and consistent and that the integrity and meaning of scientific terminology be preserved to help ensure clear communication in food labeling. In addition, according to the guidance:
- if a food derived from GE plants is significantly different from its traditional counterpart such that the common or usual name or existing statement of identity no longer adequately identifies or describes the new food, the name of the new food must be changed to a term that accurately identifies or describes it;
- if a GE food or one of its constituents differs from its traditional counterpart regarding how the food is used or the consequences of its use (for example, if the GE food behaves differently than its traditional counterpart when used in a comparable way, such as in frying or canning), a statement must be made on the label to describe the difference(s) in use or the consequence(s) of its use; and
- if a food derived from GE plants contains an allergen that consumers would not expect to be present in the food based on the name of the food, the presence of that allergen must be disclosed on the label.
The guidance also states that a manufacturer that claims that a food product or its ingredients, including foods such as raw agricultural commodities, are GE or not GE should substantiate that the claim is truthful and not misleading. The guidance provides methods a manufacturer may use to do so, including:
- documentation of handling practices and procedures (those with control over growing, harvesting, storing, and distribution should consider appropriate recordkeeping to document whether foods are or are not produced using genetic engineering, including segregation procedures);
- use of certified organic food (compliance with USDA’s requirements can be used to support food labeling claims about the production of food without the use of genetic engineering); and
- use of validated test methods (to confirm the presence of bioengineered material in food derived from GE plants).
Stakeholders we interviewed have differing views with respect to labeling of foods derived from genetically engineered (GE) ingredients. Proponents of mandatory GE labeling, including some consumer rights groups, argued that consumers have a right to know what is in their food.
They also said that mandatory GE labeling would allow members of the public to make more informed decisions about what they purchase and consume. In addition, some proponents said mandatory GE labeling would be a low-cost way for companies to better inform consumers. For example, an analysis requested by the Consumers Union in 2014 estimated that the cost of introducing such a national standard, if passed on to consumers through higher prices, would be less than $10 per family each year. Some opponents of GE labeling suggested that labeling foods containing GE ingredients—particularly without a demonstrated food safety risk—would confuse or unnecessarily alarm consumers. They estimated that the costs of mandatory GE labeling could range from $400 to over $800 per family each year as companies pass on to consumers the costs of changing packaging or of switching to non-GE suppliers to avoid a label. However, some stakeholders said that a federal standard for GE labeling would promote clarity for consumers and prevent inconsistent policies. For example, one stakeholder said that if no national standard is imposed, states may act on their own, resulting in a system with policies that differ from state to state, creating confusion, negative impacts on interstate commerce, and additional costs for consumers because product packaging and labeling would have to be tailored to each individual state. A number of bills were introduced in the 114th Congress related to labeling foods containing GE ingredients. A bill titled “Genetically Engineered Food Right-to-Know Act” was introduced in the Senate and House in February 2015 to, among other things, establish a consistent and enforceable standard for labeling of foods produced using genetic engineering. A bill titled “Safe and Accurate Food Labeling Act of 2015” was introduced in the House in March 2015 that would effectively prohibit mandatory labeling of GE foods, including any state-level labeling requirements. That legislation would make FDA’s voluntary premarket consultation process mandatory, establish a USDA certification for non-GE foods similar to the current National Organic Program, and preempt any state-level legislation requiring GE labeling. As of July 2015, according to the National Conference of State Legislatures, bills had been introduced in more than 30 states since 2011 to address GE labeling at the state level. Some bills proposed a mandatory labeling system, under which a product containing any GE ingredients must be labeled as such. Other proposals involved a voluntary labeling system that would set labeling standards for products that do not contain GE ingredients and, in some cases, implement a system for verifying and labeling products as non-GE. As of July 2015, three states had passed mandatory labeling laws for food products made from GE ingredients. Vermont enacted legislation in May 2014 that requires the labeling of all GE foods beginning July 1, 2016. In addition, Connecticut and Maine enacted legislation in June 2013 and January 2014, respectively, on mandatory labeling of GE food products. Connecticut’s law will go into effect only when four other states, including at least one state bordering Connecticut, adopt similar legislation and the combined population of the northeastern states adopting such legislation exceeds 20 million. Maine’s law requires five contiguous states, including Maine, to pass laws requiring GE labeling before its law will go into effect. In addition to the contact named above, James R. Jones, Jr.
(Assistant Director), Cheryl Arvidson, Kevin S. Bray, Barbara El Osta, Cindy Gilbert, Adrian Pavia, Caitlin Rice, Aaron Shiffrin, and Kiki Theodoropoulos made key contributions to this report.
Three agencies have primary responsibility for regulating GE crops and food in the United States: USDA, EPA, and FDA. USDA and industry groups estimate that at least 90 percent of many major commercial crops, such as corn and soybeans, are GE varieties. Proponents say GE crops offer greater pest resistance, use less labor-intensive processes to control weeds, and result in increased productivity to feed growing populations. Opponents cite a lack of consensus on impacts to agriculture, the environment, and human health. GAO was asked to review oversight and information on GE crops. This report examines (1) steps EPA, FDA, and USDA have taken to regulate GE crops; (2) the data USDA has on the extent and impact of unintended mixing of GE and non-GE crops, and what steps have been taken to prevent such mixing; and (3) the extent to which USDA, EPA, and FDA provide information to the public on GE crops. GAO analyzed legislation, regulations, and agency policies and reports and interviewed agency officials and stakeholders, including representatives from the biotechnology and food industries and consumer, farm, environmental, and commodity groups. The Environmental Protection Agency (EPA), Food and Drug Administration (FDA), and U.S. Department of Agriculture (USDA) have taken steps to regulate genetically engineered (GE) crops (i.e., crops whose genetic makeup has been modified), but USDA has not updated its regulations to oversee GE crops derived from alternative technologies in which the resulting GE crop contains no plant pest DNA. EPA regulates certain GE crops as part of its pesticide registration process. FDA, through its voluntary consultation process, works with companies that develop GE crops to consider food safety issues. EPA and FDA apply the same legal authorities and oversight processes to regulate GE and non-GE crops, regardless of how a GE crop was developed. Conversely, USDA's GE crop regulations pertain only to crops for which the donor, vector, or recipient of genetic material is a plant pest. In 2008, USDA took steps to update its regulations to capture GE crops developed with alternative technologies. However, in February 2015, USDA withdrew its proposed rule because, in part, the scope of the rule was not clear. USDA still intends to update its regulations but has not established a timeline for doing so. GAO's body of work has shown that without milestones and interim steps, it can be difficult for an agency to set priorities, measure progress, and provide management with a means to monitor progress in promulgating a new rule. In addition, until a rule is finalized, USDA will continue to lack regulatory authority to assess the potential risks, if any, posed by GE crops created with alternative technologies. USDA has limited data on the extent and impact of unintended mixing of GE and non-GE crops, according to USDA officials and stakeholders. USDA officials said that the agency has generally not collected information on unintended mixing in past farmer surveys because no specific request had been made to obtain this information. In a 2012 report, the USDA Advisory Committee on Biotechnology and 21st Century Agriculture (AC21) recommended that the agency fund or conduct research, including quantifying actual economic losses (e.g., loss of a premium price for an organic crop) incurred by farmers as a result of unintended mixing. In its 2014 Organic Survey, USDA surveyed organic farmers on economic losses from unintended GE presence in their crops offered for sale.
The survey results indicated that economic losses caused by unintended GE material in organic crops offered for sale exist, although at very small levels. However, USDA does not have similar data for farmers using non-GE seed and marketing their crops as identity-preserved (i.e., a specific genetic variety of a crop). USDA officials said identity-preserved crop acreage is significantly greater than organic crop acreage. Without including farmers growing identity-preserved crops in addition to those growing organic crops in its survey efforts, USDA is missing key information on the potential economic impacts of unintended mixing. Nonetheless, USDA has taken some steps to address unintended mixing, such as reviving AC21, as have farmers and the agribusiness industry. USDA, EPA, and FDA provide varying degrees of information about their oversight of GE crops to the public. USDA and EPA regularly provide information and updates on actions relating to their oversight of GE crops on their websites and use a number of mechanisms to obtain public input on their actions. FDA provides information on GE crops relating to its consultation process. GAO recommends, among other things, that USDA set a timeline for updating its regulations and include farmers growing identity-preserved crops in its survey efforts to better understand the impacts of unintended mixing. USDA generally agreed with these recommendations.
The B-1, a long-range heavy bomber that began operations in 1986, was designed primarily to carry nuclear munitions. Effective October 1997, B-1 units were no longer assigned the nuclear mission. The B-1 continues, however, to support Air Force conventional wartime missions, and planned modifications will provide the B-1 the future capability to deliver precision-guided munitions. The Air Force is currently authorized 70 “mission-coded” B-1s, that is, aircraft that are fully funded in terms of operations and maintenance, load crews, and spare parts. Currently, 52 B-1s are operated by active duty units. The remaining 18 are assigned to the reserve component—10 to the Kansas Air National Guard and 8 to the Georgia Air National Guard. The Air Force has announced plans to increase the number of fully funded B-1s to 84 over the next several years by funding aircraft currently held in reserve. It is expected that this fleet of 84 aircraft will be assigned to both active and reserve component units, as shown in table 1. In general, reserve component B-1 units are considered just as capable of carrying out operational missions as their active duty counterparts. Both the Kansas and Georgia B-1 reserve units train to mobilize and deploy fully mission-ready B-1s on short notice to support the conventional war plans of theater commanders in chief. Like their active duty counterparts, reserve component units are routinely subjected to standardized Air Force operational evaluations. In a recent Air Force operational readiness inspection, unit personnel and aircraft from the Kansas Air National Guard demonstrated their ability to satisfactorily perform their assigned wartime mission. The Georgia unit attained initial operational capability status in December 1997 and expects to conduct its first operational inspection in November 1998. Our analysis of five operational factors the Air Force considers in assessing whether a mission is suitable for reserve component participation indicates that assigning more B-1s to the reserve component than the Air Force has announced would not adversely affect peacetime and wartime missions. The following summarizes the results of this analysis. When aircraft are permanently based overseas, enough aircraft must be in the active component to ensure that an adequate number of stateside positions are available for personnel returning from overseas. However, since B-1s are based only in the United States, the assignment of more B-1s to the reserve component would not affect overseas presence and stateside rotations. For a mission to be suitable for the reserve component, peacetime training requirements must allow sufficient lead times to enable part-time reservists to arrange absences from their full-time civilian employment. According to Air National Guard B-1 unit officials, aircrews must fly about four times per month, which can easily be scheduled around part-time reservists’ civilian employment. Moreover, the B-1 has not been involved in any peacetime operations that have required frequent or unscheduled participation by reserve component personnel. Except for the additional 24 hours reserve component units are allowed to recall unit personnel and mobilize their forces prior to deployment, there is little distinction between the kinds of wartime missions assigned to reserve component units and their active duty counterparts. 
Notwithstanding the additional time that reserve component units may require to mobilize, regional combatant commands stated that current conventional threat warning times provide ample time for reserve component B-1 units to mobilize and meet the earliest planned mission response times. Should an unforeseen contingency arise with little or no warning, other active duty bomber units would continue to retain the capability to provide the first response. B-1 personnel have not experienced excessive peacetime personnel tempo rates—frequent and lengthy temporary duty assignments away from their home operating locations. This is due in part to the political sensitivities of other countries to the temporary overseas basing of B-1s during peacetime. Air Force data showed that B-1 personnel were on temporary duty for an average of 48 days during fiscal year 1997, much less than the Air Force’s maximum desired standard of 120 days. Thus, personnel tempo rates would not preclude placing more B-1s in the reserve component. The ability to recruit personnel into the reserve component is highly dependent on the location of the unit. Recruiting officials said it is not possible to recruit sufficient reserve component personnel at two of the current five B-1 locations. None of our options include placing more B-1s in the reserve component at these locations. For the three other locations, recruiting officials said that recruiting sufficient reservists was possible given adequate time and resources but that recruiting would be difficult for some of our options. Force mix studies on active and reserve forces have traditionally asserted that it is less costly to operate a reserve component unit than an active duty unit of comparable size and mission. Indeed, the potential for savings was the primary reason cited by the Air Force for establishing reserve component B-1 units with the Kansas and Georgia Air National Guard. In its September 1994 response to the Senate Appropriations Committee’s request for details on transferring bombers to the reserve component, the Air Force stated that placing bombers in the reserve component was fiscally prudent, with no anticipated loss in war-fighting capability. The force mix studies we reviewed noted that the cost to operate a reserve component unit is generally lower than for an active duty unit for several reasons. First, reserve component aircrews are more experienced than their active duty counterparts and require fewer flying hours to meet mission training requirements. Second, reserve component units employ fewer full-time military personnel than active units. Additionally, because of the part-time manning of traditional reserve component units, there are fewer requirements for permanent and costly base infrastructure—such as family housing and base medical care facilities—necessary to support full-time active duty personnel and their families. Table 2 describes six options for assigning more B-1s to the reserves and shows the estimated savings the Air Force could achieve by implementing these options. Savings range from $87.1 million to $235.3 million during fiscal years 1999-2003. By way of illustration, option 1—converting in place an existing active squadron of 12 aircraft—could produce $87.1 million in operational savings over the 5-year period. Option 5—consolidating B-1s at one active and two reserve locations—may be more challenging to implement but could result in greater savings. 
For example, under option 5, the Air Force would need to convert one active duty base to a reserve component base and consolidate B-1 operations at two other existing locations, one active and one reserve. As shown in table 2, this option could save an estimated $230 million over the 5-year period. The savings comprise $217.7 million in operational savings and $43.3 million from the elimination of B-1 military construction projects programmed at two of the bases where the B-1 would no longer be assigned. In calculating the net savings, we took into account one-time costs of about $26 million to move an active duty C-130 unit at the converted base to another location and $5 million to construct a squadron operations facility to accommodate an additional B-1 unit at another location. It should be noted that the estimated $230-million savings under option 5 and the $208.6-million savings under option 4 do not include additional savings the Air Force expects would result from reducing the number of B-1 operating locations to fewer than five. According to Air Force active and reserve component logisticians, reducing the requirement to support five B-1 bases would help ease current shortages in B-1 support equipment and war reserve mobilization kit spare parts reported by B-1 operating units and reduce future expenditures for B-1 support equipment and spare parts. Moreover, converting an active base to a reserve component base could result in lower costs to operate hospital, family housing, and other facilities associated with active duty units. Appendix I presents the potential costs and savings related to each option and the actions the Air Force would need to take to implement each option. Whether the Air Force chooses among our options or develops options of its own, we believe millions of dollars could be saved without reducing mission capability by placing more B-1s in the reserve component. Therefore, we recommend that the Secretary of Defense direct the Secretary of the Air Force to prepare a plan to place more B-1s in the reserve component and seek congressional support for the plan. As you know, 31 U.S.C. 720 requires you to submit a written statement on actions taken on this recommendation to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of the report and to the Senate and House Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of the report. In written comments on a draft of this report, DOD partially concurred with our findings. While DOD agreed that the mix of B-1s in the active and reserve components needs further study, it believed that our recommendation that the Secretary of the Air Force develop a plan to place more B-1s in the reserve component is too strong without looking at war mobilization requirements and severe limitations on basing options. DOD believes it has the right mix of B-1s in the active and reserve components and stated that it has no plans at this time to move more B-1s to the reserves or to implement any of our force mix options. DOD agreed, however, to (1) use our report, along with other analyses, to develop a mission-capable, cost-effective force mix; (2) study in detail our force mix options where savings may exist; and (3) ask the Secretary of the Air Force to thoroughly review our report to determine whether it is operationally feasible and cost-effective to move more B-1s to the reserves.
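As a consolidated restatement of the net-savings arithmetic cited above and detailed in appendix I, the figures for the three consolidation options reconcile as follows. This introduces no new estimates; all dollar amounts (in millions, over fiscal years 1999-2003) are those reported in this letter and in appendix I:

```latex
\begin{align*}
\text{Option 4:}\quad 261.3 + 43.3 - 26 - 70 &= 208.6 \\
\text{Option 5:}\quad 217.7 + 43.3 - 26 - 5 &= 230.0 \\
\text{Option 6:}\quad 261.3 - 26 &= 235.3
\end{align*}
```

In each case, the first term is the Congressional Budget Office's estimate of operational savings; the second term (options 4 and 5 only) is the avoided military construction at Mt. Home and Robins; and the subtracted terms are the one-time costs of relocating the C-130 unit at Dyess and, where applicable, of new construction to accommodate additional B-1s.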
DOD also said that after the Air Force conducts a thorough review of the bomber force mix, the results will be incorporated into the upcoming budget cycles. We agree that war mobilization requirements and basing options are important factors and, in fact, considered them in our analysis. Specifically, we assessed five operational factors, including mission response times, that the Air Force considers in determining whether a mission is suitable for reserve component participation. Except for the additional 24 hours reserve component units are allowed to recall unit personnel and mobilize their forces prior to deployment, there is little distinction between the kinds of wartime missions assigned to reserve component B-1 units and their active duty counterparts. Furthermore, we note that in its September 1994 response to the Senate Appropriations Committee’s request for details on transferring bombers to the reserve component, the Air Force stated that placing bombers in the reserve component could be done with no anticipated loss in war-fighting capability. Because our audit revealed no operational reason to limit the number of B-1s in the reserve component to the current level, and a range of basing options is available, we continue to believe that our recommendation is sound. DOD further expressed concern that some of our options would significantly change bases’ loading patterns and that it lacks continuing base closure authority. We agree with DOD that several of our options could result in changes to the base aircraft loading patterns. However, DOD has a range of options for moving more B-1s into the reserve component that could be accomplished within existing authority. We met with Air Combat Command civil engineering officials and were assured that the B-1 bases included in our force mix options have the capacity to accommodate additional B-1s being moved to the reserves. Lastly, DOD stated that the Congressional Budget Office’s model appears to overstate the savings for our options by excluding modernization and initial training costs. Since the entire B-1 fleet is already being modernized, the same modernization costs will be incurred whether the B-1s are in the active or reserve component. We acknowledge that the model did not capture some of the one-time costs, including initial training costs, that would be incurred. However, additional costs would be relatively small and would be recouped from the annual operational savings realized by adding B-1s to the reserve component. DOD’s comments are reprinted in their entirety in appendix II. We held extensive discussions with Air Force officials in Headquarters, U.S. Air Force; the Air Force Reserve Command; the Air National Guard; and the Air Combat Command and researched reports, documents, and prior studies to determine the operational factors the Air Force uses to assess the suitability of missions for the reserve component. We used these factors to develop criteria to assess the feasibility of increasing the reserve component’s participation in the B-1 mission. We visited all five B-1 bases and the Air Combat Command to assess the active and reserve component units’ mission requirements and operational capabilities. We discussed force mix issues with operations, plans, and training officials. From these visits, we obtained information such as planned force structure, base capacity, recruiting potential, and military construction costs and used it to develop force mix options. 
We analyzed the recruiting, response times, and cost implications for each option. Estimates of recruiting potential were developed by the Air Force Reserve and the Air National Guard. To assess how more B-1s in the reserve component would impact wartime mission response requirements, we obtained information from operational plans, unit capability requirements, and the combatant commands for the theaters in which the B-1 would be employed. To assess the potential savings from placing more B-1s in the reserve component, we used operational cost estimates developed by the Congressional Budget Office and other costs Air Force officials provided such as for the military construction and movement of an operational unit that would be required to implement some of our options. We did not determine whether any of the options we presented would require congressional notification under 10 U.S.C. 2687, base closures and realignments. Neither did we obtain estimates of one-time personnel costs, such as severance pay for civilian employees or change of station costs for active duty personnel. We performed our review from September 1996 to December 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and members, the House National Guard and Reserve Caucus and the Senate National Guard Caucus, the Secretary of the Air Force, the Commander of the Air Combat Command, the Commander of the Air Force Reserve Command, the Director of the Air National Guard, the Director of the Congressional Budget Office, and the Director of the Office of Management and Budget. Please contact me at (202) 512-3504 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. Convert an existing active 12-aircraft squadron at Dyess Air Force Base, Texas, to a reserve squadron at Dyess. No change at the other four B-1 locations. Table I.1 shows the number of B-1s at each base under the Air Force’s announced plan and under our option 1. The Congressional Budget Office estimated that this option would save $87.1 million in operational expenses. These expenses include direct and indirect costs such as fuel, maintenance, military pay, training, and medical care. According to Air Force Reserve and Air National Guard recruiters, recruiting to implement this option is possible. These officials estimate that an additional three to eight recruiters would be needed for about 2 years to recruit the required personnel. The cost for these additional recruiters is relatively minor and was not deducted from the savings shown above. Convert an existing active 18-aircraft aircrew training squadron at Dyess Air Force Base, Texas, to a reserve squadron at Dyess. No change at the other four B-1 locations. Table I.2 shows the number of B-1s at each base under the Air Force’s announced plan and under our option 2. The Congressional Budget Office estimated that this option would save $130.6 million in operational expenses. Air Force Reserve and Air National Guard recruiters concluded that recruiting for this option would be difficult. They estimated that an additional three to eight recruiters would be needed for about 2 years to recruit the required personnel. The cost for these additional recruiters is relatively minor and was not deducted from the savings shown above. 
Convert an existing active 18-aircraft aircrew training squadron and a 6-aircraft squadron at Dyess Air Force Base, Texas, to reserve squadrons at Dyess. No change at the other four B-1 locations. Table I.3 shows the number of B-1s at each base under the Air Force’s announced plan and under our option 3. The Congressional Budget Office estimated this option would save $174.2 million in operational expenses. Air Force Reserve and Air National Guard recruiters concluded that recruiting for this option would be difficult but not impossible. They estimated an additional four to eight recruiters would be needed for 2 years to recruit the required personnel. The cost for these additional recruiters is relatively minor and was not deducted from the savings shown above. Establish a reserve component unit of 54 B-1s at Dyess Air Force Base by reducing to zero both the active duty unit of 36 B-1s at Dyess and the reserve component units of 10 and 8 B-1s at McConnell and Robins Air Force bases, respectively. Convert Dyess from an active to a reserve component base. Increase the active duty unit at Ellsworth from 24 to 30 B-1s by reducing the active duty B-1 unit at Mt. Home from 6 to zero. Move an active duty C-130 unit at Dyess to another (unspecified) location. Table I.4 shows the number of B-1s at each base under the Air Force’s announced plan and under our option 4. The Congressional Budget Office estimated that this option would save $261.3 million in operational expenses. Additionally, $43.3 million in military construction funds planned for fiscal years 1999-2003 would be saved by removing the B-1 units from Mt. Home and Robins. However, according to estimates from Air Force officials, these savings would have to be reduced by $26 million to cover the cost of relocating the C-130 unit at Dyess and by $70 million for military construction costs at Dyess to accommodate the additional 18 B-1s. Thus, the net potential savings are estimated at $208.6 million. This option could produce other savings that are not shown in table I.4. For example, reducing the B-1 operating bases to two could help ease the shortages in B-1 support equipment and mobilization kit spare parts reported by B-1 operating units and reduce future expenditures for B-1 support equipment and spare parts. Converting Dyess from an active to a reserve component base could also produce an undetermined amount of savings from reduced permanent and costly base infrastructure—such as family housing and base medical care facilities—necessary to support full-time active duty personnel and their families. Moreover, by placing additional B-1s at Dyess and Ellsworth, the Air Force could take advantage of unused capacity at those locations. Air Force Reserve and Air National Guard recruiters concluded that recruiting for this option at Dyess would be difficult but not impossible. They estimated an additional six to eight recruiters would be needed for about 2 years to recruit the required personnel. The cost for these additional recruiters is relatively minor and was not deducted from the savings shown above. Establish a reserve component unit of 38 B-1s at Dyess Air Force Base by reducing the active duty unit at Dyess from 36 to zero B-1s and adding 2 more B-1s to Dyess from Robins. Convert Dyess from an active to a reserve component base. Increase the active duty unit at Ellsworth from 24 to 36 B-1s by reducing the active duty unit B-1s at Mt. Home from 6 to zero and the reserve unit B-1s at Robins from the remaining 6 to zero. 
Move an active duty C-130 unit at Dyess to another (unspecified) location. No change to the reserve component unit at McConnell. Table I.5 shows the number of B-1s at each base under the Air Force’s announced plan and under our option 5. The Congressional Budget Office estimated that this option could save $217.7 million in operational expenses. Additionally, $43.3 million in military construction funds planned for fiscal years 1999-2003 could be saved by removing the B-1 units from Mt. Home and Robins. However, according to estimates from Air Force officials, these savings would have to be reduced by $26 million to relocate the C-130 unit at Dyess and $5 million to construct a squadron operations facility at Ellsworth to accommodate an additional operational unit. Therefore, net potential savings under this option are estimated at $230 million. This option could produce savings that are not shown in table I.5. For example, reducing the requirement to support fewer than five operating bases could help ease the shortages in B-1 support equipment and mobilization kit spare parts reported by B-1 operating bases and reduce future expenditures for B-1 support equipment and spare parts. Converting Dyess from an active to a reserve component base could also produce an undetermined amount of savings from reduced permanent and costly base infrastructure—such as family housing and base medical care facilities—necessary to support full-time active duty personnel and their families. Moreover, by moving 12 additional B-1s to Ellsworth, the Air Force could take advantage of the unused capacity at Ellsworth. Air Force Reserve and Air National Guard recruiters determined that recruiting at Dyess would be very difficult but not impossible. They estimated that an additional four to eight recruiters would be needed for at least 2 years to recruit the required personnel. The cost for these additional recruiters is relatively minor and was not deducted from the savings shown above. Establish a reserve component unit of 36 B-1s at Dyess Air Force Base by reducing the active duty unit at Dyess from 36 to zero. Convert Dyess from an active to a reserve component base. Move an active duty C-130 unit at Dyess to another (unspecified) location. No change at the other four B-1 locations. Table I.6 shows the number of B-1s at each base under the Air Force’s announced plan and under our option 6. The Congressional Budget Office estimated that this option would save $261.3 million in operational expenses. However, to convert Dyess to a reserve component base, the active C-130 unit at Dyess would have to be moved at an estimated cost of $26 million. Therefore, net potential savings under this option are estimated at $235.3 million. Converting Dyess from an active to a reserve component base could produce an undetermined amount of savings from reduced permanent and costly base infrastructure—such as family housing and base medical care facilities—necessary to support active duty personnel and their families. Air Force Reserve and Air National Guard recruiters assessed the recruiting for this option to be difficult but not impossible. They estimated that an additional six or more recruiters would be needed for about 2 years to recruit the required personnel. The cost for these additional recruiters is relatively minor and was not deducted from the savings shown above. George O. Morse, Evaluator-in-Charge Leslie M. Gregor, Senior Evaluator Suzanne K. 
Wren, Senior Evaluator
GAO reviewed the cost and operational implications of assigning more B-1 bombers to the reserve component, focusing on: (1) whether operational factors preclude greater reserve component participation in the B-1 mission; and (2) options for increasing the number of B-1s assigned to reserve component units and their effect on operations and costs. GAO noted that: (1) Air Force active and reserve components consider essentially the same operational factors in determining whether a mission is suitable for the reserve component; (2) factors Air Force officials consider include: (a) overseas presence; (b) peacetime training; (c) mission response times; (d) personnel tempo; and (e) personnel recruiting; (3) GAO's assessment of these factors showed that they do not preclude assigning more B-1s to the reserve component; (4) B-1s are not based overseas, peacetime training can be scheduled around part-time reservists' civilian employment, reserve units could mobilize to meet mission response times, and personnel tempo rates for B-1 unit personnel do not exceed the Air Force's maximum desired standard; (5) however, the lack of availability of recruitable personnel in some locations limits where reserve units can operate; (6) if the Air Force were to assign more B-1s to the reserve component than are currently planned, the cost to operate the B-1 fleet could be reduced--without adversely affecting day-to-day peacetime training or critical wartime missions or closing any bases; (7) GAO developed six options for assigning more B-1s to the reserves; and (8) based on Congressional Budget Office cost savings projections and GAO's analysis of other one-time costs, GAO estimates that implementing these options could produce savings ranging from $87.1 million to $235.3 million during the last 5 years (1999-2003) of the current Future Years Defense Program.
GPRA was enacted in August 1993 to improve the effectiveness and efficiency of federal programs by establishing a system to set goals for program performance and to measure results. Congress passed GPRA because a lack of precise goals and performance information on the results of federal programs had hindered federal managers from improving the effectiveness and efficiency of federal programs. The same lack of clear goals and information on results had hindered congressional policymaking, spending decisions, and oversight. For a more detailed description of GPRA's requirements, see appendix I. In recent years, the states we selected confronted similar challenges and, in response, implemented reforms similar to those required by GPRA. However, the implementation of their reforms varied. For example, in some states, management reforms took a broad, statewide focus, while in others, reforms were primarily implemented at the agency level. The length of time that the states were involved in management reforms also varied. For example, experience among the states with strategic planning had ranged from 2 to 9 years at the time of our review. Table 1 provides an overview of the results-oriented management reforms implemented or being considered by the six states we visited. Our objective was to identify some of the experiences state governments had in implementing management reforms that were reported as successful and thus may assist federal agencies in implementing GPRA. In particular, we examined management reforms that are similar to those required by GPRA, such as strategic planning, performance measurement, and the alignment of management systems, focusing on those reforms that had been reported as successful. We reviewed current literature on public management and interviewed state management authorities at the Governmental Accounting Standards Board, the National Governors' Association, the National Academy of Public Administration, and the National Conference of State Legislatures. To obtain further information on states for inclusion in this report, we reviewed the annual "The State of the States" report issued by Financial World magazine, a number of state strategic plans, state planning and budgeting documents, and our prior report summarizing selected states' experiences with performance budgeting. We looked for states that had sought to increase their focus on program results through initiatives similar to the key components of GPRA, such as strategic planning, performance measurement and reporting, and performance budgeting. We selected Florida, Minnesota, North Carolina, Oregon, Texas, and Virginia for our review because they were implementing some or all of these reforms. To identify state governments' experiences, we asked state officials to guide us to agencies and programs that had begun to implement management reforms. To identify successes that may be applicable to federal agencies' GPRA implementation, we looked for reforms similar to the key components of GPRA that were reported to result in improvements for the agencies or programs so that we could focus on what aspects of management reforms were successful and why. We interviewed state executive and legislative branch officials, agency administrators and managers, and service delivery and audit staff. We also interviewed public interest group officials and private industry leaders to obtain an external perspective on selected state reforms.
We reviewed state and agency documents, such as state and agency strategic plans, human resource management plans, and information management plans. We also reviewed pertinent state legislation, budget documents, performance reports, audit reports, customer surveys, and training materials. Because our objective was to identify the relevant experiences of the state governments we selected, we relied on the states’ own evaluations and assessment of their reforms. We did not independently verify the accuracy of the information provided by the states. As shown in table 1, each of the six states generally implemented some form of strategic planning, performance measurement, or management systems alignment. However, we discussed a limited number of examples in this report that represented the most complete or illustrative experiences the states had at the time of our visit in implementing those management reforms. We did our work from April 1993 to April 1994 in Washington, D.C., and the six states in accordance with generally accepted government auditing standards. We focused on identifying successful management reforms implemented in the six states we selected that may help federal agencies in their efforts to implement GPRA and become more results oriented. We did not obtain comments from the states on a draft of this report. However, in October and November 1994, we asked officials in each of the states to verify the accuracy of the information presented on their respective states. These officials said that the report accurately characterized the experiences of their respective states at the time of our review. Oregon officials said that the strategic planning process was used as a means for state executive and legislative branch officials, agencies, and other stakeholders, such as citizens’ groups, to reach consensus on the priority issues for the state. They also said that obtaining statewide consensus on goals helped state agencies work together across agency boundaries to address common goals. Officials from the Virginia Department of Mines, Minerals, and Energy said that their department used strategic planning to focus their staffs’ efforts on achieving common organization goals. They said that involving staff at all levels in the strategic planning process helped to communicate to staff the organization’s mission, values, and goals. According to those officials, participating in the strategic planning process helped staff to learn how their work contributed to achieving the department’s goals. According to state officials, Oregon used strategic planning as a means to get diverse stakeholders, including legislators, agencies, county and local governments, and other community groups, to reach consensus on statewide goals. Oregon’s statewide strategic plan, known as Oregon Benchmarks, was crafted with widespread public input and adopted by the state legislature in 1991, according to Oregon officials. Oregon business, city, county, community, state, and legislative leaders met in 12 regional meetings over 6 months to develop the plan. Oregon officials said that this statewide participation in the strategic planning process contributed to benchmarks—or goals—that accurately reflected statewide values and priorities. State and legislative officials told us that to solidify support, the Oregon Progress Board worked extensively to introduce the benchmarks to legislators in both parties and both houses of the state legislature. 
In 1991, the Oregon legislature unanimously passed legislation adopting the benchmarks and directed the Oregon Progress Board to update the benchmarks every 2 years. As a result of the strategic planning process, Oregon agencies, legislators, city and county governments, and nonstate organizations could share a common focus on specific statewide goals that they did not have before, according to state officials and a 1992 Progress Board report on the benchmarks. According to Oregon officials, consensus on priority goals was achieved in diverse areas, such as those concerning children and families, education and workforce preparation, workforce training, health and health care, and economic improvement. For example, one statewide priority goal for children and families was to increase the percentage of infants whose mothers did not use alcohol during pregnancy. The baseline of this measure was 93 percent in 1990; this measure increased to 95 percent in 1992. The goal was to achieve 97 percent for 1995 and to achieve even higher goals for the years 2000 and 2010. Oregon state officials told us that strong statewide consensus on goals established during the strategic planning process encouraged state employees to work across agency and program boundaries to accomplish common objectives. Department of Human Resources officials told us agencies were required to identify the statewide goals to which they could make a contribution and develop measures that demonstrated their progress in achieving the goals. In doing so, human resources officials said that agencies found they sometimes needed to change their approaches. For example, before beginning the statewide strategic planning process, the department’s Adult and Family Services Division worked to ensure compliance with the state’s regulations on welfare eligibility. Under Oregon’s strategic plan, the division’s mission shifted from processing welfare clients to helping clients achieve self-sufficiency through a variety of means, such as obtaining child support and education. To demonstrate the progress it made in helping clients meet its self-sufficiency goals, the division measured the average number of months for which clients received welfare and sought to reduce this number from the average of 20 to 17. Human resources officials also said that the statewide program goals sometimes required agencies to work across agency and program boundaries to achieve program outcomes that, individually, they could only partially influence. For example, child support recovery staff in the Adult and Family Services Division of the Department of Human Resources said they had now recognized that their services significantly contributed to the division’s overall ability to help achieve Oregon’s self-sufficiency goals. However, child support collections from noncustodial parents were based on the identification of paternity, which fell under the responsibility of the Support Enforcement Division of the attorney general’s office. To increase collections, support recovery staff worked with support enforcement staff. They said that they found that single mothers were missing appointments with support enforcement staff that were intended to identify paternity. To simplify paternity declaration, child support recovery staff and support enforcement staff established a procedure that involved mothers signing affidavits declaring paternity with Adult and Family Services Division social workers. 
Oregon officials said that the state benchmarks also helped agencies and nonstate organizations, such as private businesses, work as partners to achieve common statewide goals. According to the Oregon Progress Board director, nonstate organizations had a great capacity to help the state achieve the benchmarks. For example, according to a 1992 Progress Board report on the benchmarks, to achieve Oregon’s goal of diversifying its economy, the state established benchmarks to increase the share of employment in businesses that added value to the state’s natural resources, such as wood products and agriculture, before those resources were exported. However, the Progress Board director said that state government had only a marginal ability to achieve those benchmarks on its own. Therefore, the Progress Board encouraged the industries Oregon had targeted for growth to develop and track their own performance measures, such as increased sales and employment, that would demonstrate growth in those industries. After we completed our review, independent reviewers chartered by the Oregon Progress Board issued a report assessing the Oregon Benchmarks as a mechanism for guiding the state’s strategic plan and as a framework for developing performance measures. Among other things, the reviewers reported that “remarkable” progress had been made in the development and use of benchmarks by the executive and legislative branches, local governments, and the private sector. However, the report also concluded that more concerted effort was needed to (1) increase the value and use of the benchmarks by the legislature and citizens; (2) increase the integration of the benchmarks into current state policy initiatives, agency programs, and performance measurement processes; and (3) pay more deliberate attention to developing and evaluating effective strategies for achieving the benchmarks. For example, the evaluation recommended that the Progress Board (1) provide information, including workshops, to new and returning legislators on the benchmark process and the advantages of focusing on results-oriented issues; (2) provide each manager with regular performance reports that identify outcomes of the effort under that particular manager’s responsibility; and (3) encourage and sponsor in-depth examination of benchmark trends and analyses of reasons for progress made, or not made, toward benchmark targets, and use those findings as a major part of the Progress Board’s biennial report. Since its creation in 1985, staff at all levels of Virginia’s Department of Mines, Minerals, and Energy have been involved in the department’s strategic planning process to define the organization’s mission, values, goals, objectives, and strategies. The department was created to consolidate various divisions from three agencies that were responsible for Virginia’s mineral resource programs and functions. This consolidation brought together organizations with different cultures and highlighted the state’s reported need for the new department to focus on geologic mapping and research, energy conservation, and consistent regulation of the mineral and fossil fuel extraction industries. In addition to existing programs and personnel, the new department inherited visible and significant performance failures, inadequate resources, and numerous disgruntled customer groups, according to department officials.
Department officials said that when the department was formed, they recognized that it needed to create an environment that fostered staff cohesion and focused on the new department’s mission. Consequently, the department instituted a strategic planning process that identified customer needs and involved all department staff at various stages in the process. The departmentwide strategic planning process, which was driven by an internal and external customer focus, strengthened the department’s ability to manage the complex programs and issues generated by competing customer interests, according to department officials. Department officials described their annual, participatory strategic planning process as follows. Each year since 1985, the process has begun with an off-site meeting of department management staff to discuss such things as the department’s future, challenges, and goals. They said that from this meeting, the department developed a fairly broad strategic plan that documented the department’s mission, values, goals, objectives, and strategies. After the strategic plan was developed, top division management and employees developed operational plans for each division, which described in detail how the division would implement the department’s strategic plan. From these division operational plans, work unit staff wrote more detailed plans on what was to be accomplished and by whom. Finally, managers developed individual performance plans for their employees that explained management’s expectations of the employee and showed how the employee’s work would contribute to the goals contained in the department’s strategic plan. The states we selected used a variety of performance measures to assess the progress agencies made in meeting statewide or agency strategic goals. For example, agencies typically tracked program costs and the number of services provided. The states supplemented this information by also gathering data on program outcomes that could be used to assess the degree to which state efforts met strategic goals. However, measuring program outcomes entailed a number of challenges. For example, one state agency found that the results of its efforts could take years to occur, and its specific contribution to achieving the results could be difficult to determine. To address this challenge, the agency defined and tracked a number of intermediate outcome measures in addition to final outcome measures. State officials said that training and involvement in the development of performance measures helped ensure that agency managers and staff used the measures to gauge their progress toward goals and adjusted their operations to meet those goals. However, according to state executive and legislative branch officials, performance information had limited influence on legislators’ decisions about program funding, in part because the branches had not reached consensus on the measures that agencies would use to gauge performance. In implementing performance measurement, the six states attempted to report on the outcomes or results of state programs and also included other types of performance measures to provide a perspective on the effectiveness, efficiency, and cost of state programs. Table 2 provides examples of the mix of measures that states used to provide a range of information on program performance.
The states traditionally used input, output, and efficiency measures to provide information on resources used, the quantity of services provided, and service costs, respectively. However, the states or agencies we visited placed a new emphasis on developing outcome measures to gauge the extent to which strategic goals and objectives were being met. Outcome measures were intended to gauge the impact of a program’s products or services on the recipients of those products or services. Some state agencies also used information from customer satisfaction surveys as measures of program outcomes. According to Texas state budget documents, the Texas Commission for the Blind used a combination of output and outcome measures to assess the extent to which it met its strategic goals. One of the commission’s strategic goals was “to assist Texans who are blind or visually impaired to live as independently as possible consistent with their capabilities.” As an outcome measure for this goal, the commission established a target percentage of blind or visually impaired people avoiding a dependent living environment. The commission’s strategy for achieving this target was “to provide a statewide program of developing independent living skills.” The commission also established a target “number of adults receiving skills training” as an output measure for this strategy. According to the National Governors’ Association Task Force on State Management, this performance information gave commission staff a clearer understanding of the ultimate goals they were working toward and gave state policymakers a better understanding of the agency’s operations. According to state officials, the Minnesota Trade Office assessed the progress of its programs by using intermediate outcome measures to supplement final outcome measures. Although the desired outcome was to increase exports and create jobs, Trade Office officials said that measuring the success of the office’s efforts was problematic because 2 to 3 years might elapse between the time the office assisted a business and the time the desired outcome occurred. The Trade Office measured the impact of its services by collecting information on both intermediate and final outcomes. It used the information on intermediate outcomes to measure the progress a business made toward exporting, such as “decided to export” or “made foreign market contact.” It used information on final outcomes to measure the end results, such as “delivered a product/service to a foreign market” or “added new export-related jobs.” According to a Trade Office official, since the Trade Office began monitoring performance for results, it could more clearly show the extent to which services had reached intended customers, the perceptions customers had about the quality of services provided, and the impact its services had on Minnesota businesses. Trade Office officials told us that customer surveys were another means of gathering outcome information to measure the office’s performance. For example, the Trade Office asked clients whether their businesses achieved desired results, such as increased sales in international markets, as a result of the services it provided. By surveying businesses on the degree to which it contributed to their export efforts, the Trade Office developed informative and meaningful information both for its managers and for those businesses.
Before the Trade Office began surveying customers, substantive measures of customer satisfaction and program results did not exist, and the Trade Office relied primarily on activity statistics and anecdotes. Trade Office officials said that customer survey data also helped the Trade Office improve its effectiveness by identifying geographic areas that were underserved. The Trade Office, which began surveying its customers in 1989, found that although about 33 percent of the state’s manufacturing businesses with export potential were located outside the Minneapolis/St. Paul metropolitan area, only 28 percent of Trade Office clients came from outside that area. Consequently, the Trade Office increased the number of businesses served in that part of the state. The reported experiences of Oregon and Minnesota officials indicated that for stakeholders, including agency managers and staff, to use performance measures to gauge progress toward goals, they needed to be involved in developing the measures and to understand how the resulting performance information would be used. However, officials and staff told us that agencies faced challenges in developing and using performance measures. For example, they said some state agency staff lacked the skills needed to develop performance measures and had no experience in using performance information. In addition, they said these staff were concerned that they would be held accountable for outcomes that they could only partially influence through their efforts. Oregon and Minnesota agencies provided examples of how they attempted to deal with these challenges through training, employee involvement in developing performance measures, and the commitment of upper management. Oregon officials said that training in the mechanics of measuring performance and the positive uses of performance information helped develop staff-level support for performance measurement. Oregon provided training to all agency heads as their agencies implemented performance measurement. The state also provided ongoing training and guidance to volunteer performance measurement coordinators in each agency as their agencies developed performance measures. These coordinators frequently served as agency mentors and helped train agency staff in the development and use of performance measures. The state also produced two training videotapes advocating the use of performance measures, which agencies used to train staff. The tapes included testimonials by the governor, the Oregon Progress Board, agency heads, union leadership, and agency staff supporting the use of performance measures. An official at Oregon’s Department of Transportation said that agency staff received 9 days of training, including a 2-day orientation, a 3-day team-building exercise, and a 4-day session on the development of performance measures. Oregon’s Department of Human Resources staff said that although they had not received training in the development of performance measures, they did receive training in working effectively as a team and that this team training had helped them when they developed their own performance measures. According to Minnesota economic development officials, when they launched their performance measurement program, managers were afraid that negative performance information would be used against them.
To allay this fear, upper management involved program managers in developing the customer survey instrument, acknowledging that the program managers knew their programs best and therefore could develop appropriate measures of customer satisfaction. Also, upper management emphasized that performance information would be used to improve operations and agreed to provide survey results to program managers first before making them public. According to the economic development officials, obtaining program manager support was essential because program managers provided valuable insights on the interpretation of survey results and possible program improvements. Some Oregon state agencies also obtained staff support for performance measurement by allowing work groups to develop their own performance measures. Officials said the value of this approach was that staff were less likely to criticize, and more committed to achieving, performance measures and targets they had developed themselves. Program staff from the Adult and Family Services Division of the Department of Human Resources said that they were initially concerned that, among other things, performance information would be used against them, either to justify firing underperforming staff or to justify layoffs due to budget cuts. Also, they said that they did not understand how they could benefit from performance measurement. The staff said that by being allowed to develop and use performance measures to improve their operations, they came to accept this new way of managing their work. However, they said that obtaining staff buy-in was a continuing challenge and that some staff still harbored concerns that performance information would be used against them. Oregon agencies gave work groups the opportunity to develop their own measures to gauge their progress toward overall agency goals. For example, the goal of the Adult and Family Services Third-Party Recovery Program was to recover funds owed to the state by third-party insurers. Program staff therefore chose the number of liens filed against third-party insurers as a performance measure because no collection could occur until a lien had been filed. The program staff said that in the past, managers never communicated agency goals to them, and staff considered only the tasks to be performed. Staff said that they now focused their efforts on actions more directly linked to the outcomes they were trying to achieve and set their own performance goals. Furthermore, they said that they worked as a team to track their own performance. Finally, staff from one Oregon state agency told us that top and mid-level agency management needed to communicate the importance of performance measurement to staff and listen to staff concerns about setting and achieving performance goals. A state Department of Transportation official said that he and the department director demonstrated sustained commitment to the management reforms by publicly advocating outcome-based performance measurement and other results-oriented reforms. He said the director held regular brown bag lunches to talk about the management reforms. Of the states we selected, Minnesota, North Carolina, Oregon, and Texas sought to develop performance budgets that used results-oriented performance information during the budget development process.
We reported in February 1993 that the difficulty selected states had in achieving stakeholder consensus on meaningful performance measures was a key reason performance measures had not attained sufficient credibility to influence resource allocation decisions. The state reform experiences we examined as part of this review continued to underscore the importance of executive and legislative branch officials working together, and the damage done to performance budgeting reforms when strong working relationships were not established. For example, in 1992, the Minnesota Department of Finance instructed state agencies to develop performance-based budgets. According to the department’s instructions, these budgets were to show how agency activities related to the overall goals of the state and were to include specific performance measures that could be used to measure progress toward the goals. However, a senior Department of Finance official said that a major weakness of the state’s performance budgeting reform was that the legislature and its staff had limited involvement as the reform was being implemented. Similarly, legislators and legislative staff we spoke to said they were dissatisfied with the budget process and the performance information provided to make budgetary decisions. One legislator said that the information included in the budget books was “almost worthless.” A legislative staff member said that a performance-based budget was a “great idea” but agreed that the Department of Finance should have obtained more input from the legislature when developing the budget. After we had completed our field work, the Minnesota Office of the Legislative Auditor issued an evaluation in February 1994 assessing the state’s performance budgeting efforts and recommending a number of improvements. The legislative auditor reported that performance information presented in the 1994-95 budget generally had little impact on discussions or decisions by the executive and legislative branches. The legislative auditor also suggested a number of ways to increase the legislature’s use of performance information. For example, the legislative auditor endorsed efforts to use agencies’ performance reports as a focal point for legislative oversight. The legislative auditor noted that such “performance reviews” might provide a useful forum for discussing agencies’ missions, objectives, and performance. The legislative auditor also noted that although Minnesota’s performance budgeting efforts had problems, state agencies still were using performance information internally to help them manage their programs. For example, as we noted earlier, the Minnesota Trade Office used intermediate outcome indicators and customer surveys to help guide its efforts. Similarly, the legislative auditor reported that the Minnesota Department of Revenue expected to use measures such as customer satisfaction and the tax compliance gap (the percentage of taxpayers who should file returns but do not) to help with spending decisions, daily operations, and accountability. North Carolina also sought to increase the use of results-oriented performance information during budget deliberations. A governmentwide audit by a committee of 27 public officials and private citizens recommended, among other reforms, the development of a results-oriented budget process to enable the legislature to focus on the intended outcomes of expenditures rather than budget line items.
The committee felt that this budget reform was so important that it proposed implementing results-oriented budgeting before the audit was completed. In response, the executive branch implemented a results-oriented budgeting process on a pilot basis in two areas, health and environmental programs, and then on a broader basis in six program areas for the 1995-96 biennium. The pilot performance budgets were not used by the legislative committees in their 1993-94 biennium budget deliberations. According to an official of the Fiscal Research Division of the state legislature, budget officials did not consult state legislators and their staffs on their needs and requirements for performance information to be included in the pilot performance budgets. This official said that budget officials first should have introduced legislators and their staffs to the concept of outcome performance measures and demonstrated the value of changing from the then-current line-item budgets to performance budgets. Next, the official said, budget officials should have determined the types of outcome measures legislators needed, established the reliability of the performance measures, and reported the performance information in a user-friendly format. Both planning and budget officials said that rushed implementation limited the agencies’ ability to develop outcome measures for their budgets. In developing a results-oriented budget for 1995-96, as required by 1994 amendments to the North Carolina Executive Budget Act, planning and budget officials implemented a process that produced outcome measures in the selected program areas. Planning and budget officials planned to meet with legislators during the 1995 session to help clarify the expected outcomes and refine the outcome measures. The states and state agencies we selected that had begun to use strategic planning and performance measurement generally determined that they also needed to align their information, human resource, budgeting, and financial systems to support program goals. As part of this, they began to search for ways to provide managers with more flexibility in the use of resources. However, because many of these systemic changes had yet to be fully implemented, the long-term challenges, costs, and benefits had not been determined at the time of our review. The experiences of Texas and Oregon showed that management information systems (MIS) needed to provide a full range of information to support managers’ efforts to achieve agency goals. In general, the states had used their MIS to collect and report program input data, such as staff years and activities completed, and input costs, such as those for salaries and equipment. State officials told us that although those data were important for managing their programs, agencies also needed to use their MIS to collect and report output and outcome data to demonstrate the progress programs made in achieving performance goals and the funding required to achieve specific performance targets. According to state officials, Texas changed its statewide MIS from one that reported data on input costs by program to one that supported statewide strategic planning. Texas did this by restructuring its MIS to include the missions, goals, and objectives of its agencies, along with specific strategies for achieving the objectives and measures of progress in terms of outcomes, outputs, and efficiency. The MIS also linked budgeted expenditures, accounting, and performance information.
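As a rough illustration of the linkage the restructured Texas MIS is described as providing, the sketch below models a single record that ties an agency goal, objective, and strategy to budget figures and to outcome, output, and efficiency measures. The field names and dollar figures are assumptions for illustration only; the report does not describe the actual Texas data schema.

```python
# Hypothetical record linking strategic-plan elements to budget and
# performance data, as the restructured Texas MIS is described as doing.
# Field names and dollar figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StrategyRecord:
    agency: str
    goal: str                  # agency or statewide goal
    objective: str
    strategy: str              # how the objective will be achieved
    budgeted_expenditures: float
    actual_expenditures: float
    outcome_measure: str       # impact on service recipients
    output_measure: str        # quantity of services provided
    efficiency_measure: str    # cost of services provided

record = StrategyRecord(
    agency="Texas Commission for the Blind",
    goal="Assist blind or visually impaired Texans to live independently",
    objective="Increase independent living among clients",
    strategy="Provide a statewide program of developing independent living skills",
    budgeted_expenditures=1_000_000.0,  # placeholder figure
    actual_expenditures=950_000.0,      # placeholder figure
    outcome_measure="percentage of clients avoiding a dependent living environment",
    output_measure="number of adults receiving skills training",
    efficiency_measure="cost per adult trained",
)
```

The example reuses the Texas Commission for the Blind goal, strategy, and measures described earlier; only the dollar figures are invented placeholders.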
According to a state official, under the new MIS, every state agency was linked by computer to the State Comptroller’s Office, the Office of the Governor, and the Legislative Budget Office. Oregon state officials and line staff discussed with us the need to augment their MIS to include a fuller range of performance measurement data. However, they also pointed out that additional data collection efforts had to be reconciled with existing collection efforts to limit burdensome and duplicative data gathering. According to one Oregon official, the state’s existing budgeting system included workload measures. He said that when Oregon adopted a performance measurement system that required output and outcome measures, some agencies collected at least two different forms of data for the same programs to meet the requirements of both management information systems. For example, starting in 1989, the Oregon Department of Transportation required work units to collect performance measurement data. This meant that units collected data on resource use as well as output and outcome data for the same activities. For budgeting purposes, the highway maintenance management information system was used to record input measures, such as worker hours, for activities like snow and ice removal. To measure performance, a separate system—the performance measurement management information system—was used to collect performance information for the snow and ice removal activities in terms of the number of road-miles cleared. Highway maintenance staff said that when the requirement was first announced, they resisted it because of the burden of collecting both types of data. Recognizing the need for both types of data, the department was trying to combine the two data collection systems into a single system at the time of our visit. Some states realized they needed to align their human resource management systems to support a focus on outcomes. The focus of such alignment ranged from changes to staff appraisal systems to the reduction and streamlining of staffing rules and procedures. At the time of our visit, however, the states had not completed efforts to align their human resource management with agency missions. They were finding that it was easier to identify rigidities and problems in civil service systems than to find workable solutions that also upheld public sector merit principles. Minnesota initiated a restructuring of its human resource management system in part to support its results-oriented strategic planning and performance measurement reforms, according to a report by the state’s Commission on Reform and Efficiency (CORE). CORE and the Minnesota Department of Employee Relations, which administered the state’s human resource management system, met with hundreds of stakeholders, including agency managers, personnel directors, line employees, union representatives, legislators, and others, to determine what was wrong with the human resource management system and how to restructure it to be more effective. As described in the CORE report, Minnesota’s human resource management system had been designed in 1939 to ensure stability and standardization at a time when government was characterized by political patronage and inequitably applied personnel policies. CORE determined that this system had grown too complex and unresponsive to meet the needs of government and the people it served.
Among the many problems CORE addressed were performance appraisal and training systems that did not attempt to link employee performance and development to customer needs and the achievement of agency goals. As described in its report, CORE found that the existing performance management system focused on evaluating how an employee performed a defined set of activities rather than how an employee accomplished objectives that contributed to the agency’s mission and goals. To address this problem, CORE sought to design a performance management system that would link an agency’s performance goals to its work assignments and employee performance evaluations. CORE also reported that employee training and development needed to support the achievement of organization mission and goals. Focus group discussions led by CORE revealed, among other things, that agencies needed to improve how they developed training by examining trends and planning for workforce skill needs. CORE proposed linking training and development decisions to organizational goals, objectives, and performance by using goals and outcome measures as the criteria for planning, prioritizing, and arranging training and development activities. In its report, CORE offered this hypothetical example of such a linkage: “Train X people to operate Y equipment to perform Z, which supports a certain program or goal of the organization.” Florida’s systems alignment efforts, in contrast, grew out of a gubernatorial commission’s recommendations on management flexibility: “[I]n 1991, the Governor’s Commission for Government by the People . . . recommended that pilot agencies act as laboratories for other agencies, experimenting with flexibility concepts, beginning with the test elimination of constrictive state personnel and budget requirements. The . . . [C]ommission noted that the personnel and budget systems often concentrate on inputs and ignore outcomes, limit a manager’s flexibility to move resources as needs change, hide the true costs of programs, and encourage managers to waste money.” On the basis of the commission’s findings, the Florida legislature established budget and personnel flexibility pilot projects in several departments. Through the pilot projects, selected departments were given the authority to act outside the normal personnel and budget requirements of Florida statutes. They were granted greater flexibility to (1) establish their own personnel classification systems and pay plans and (2) transfer funds and budget authority internally without prior approval from the Executive Office of the Governor. According to an audit by the Florida legislature’s Office of the Auditor General, through its pilot, the Department of Revenue sought to recruit and retain a superior workforce, improve workforce productivity and morale, and ensure that its personnel system and procedures supported the workforce. The department implemented the pilot program by finding savings within its existing budget. The department used its personnel flexibility to streamline its grievance and disciplinary procedures; adopt flexible work days, hours, and work sites; provide pay raises not tied to promotion or to working later shifts; experiment with various uses of administrative leave; and establish new job classifications. The department used its budgetary flexibility to transfer positions and funds within planned expenditures to fund new priorities in programming and office automation, purchase personal computers, and provide raises to employees.
According to a senior Florida official, the productivity pilots influenced executive and legislative willingness to reform Florida’s planning and budget system to encompass greater managerial flexibility and accountability for outcomes. For example, this official said Florida’s Department of Transportation was able to use the personnel program designed by the Department of Labor and Employment Security, a pilot agency, as the basis for its broad-banded approach, reducing the number of job classes from 1,718 to 96. Management reforms that are under way in Florida, Minnesota, North Carolina, Oregon, Texas, and Virginia reflect a shared objective: making state government more results oriented. Although each of these states implemented different reforms to respond to its individual needs and political environment, the reforms included requirements similar to those of GPRA, such as strategic planning and performance measurement, and the alignment of certain management systems. Officials we interviewed in the selected states reached a common conclusion: management reforms similar to those contained in GPRA required a long-term effort but could help improve agencies’ effectiveness and efficiency. For example, the states’ experiences suggest that strategic planning and performance measurement could be an important means for stakeholders to agree on common goals and measure progress toward achieving them. The states reported that they used strategic planning to improve working relationships within and across agencies and across levels of government in pursuit of desired outcomes. Performance measures were designed to provide the critical information needed to assess the degree to which the desired outcomes were being achieved. The states’ experiences suggest that, if successful, GPRA could serve as a powerful tool for developing and communicating agreement across the federal system on programs’ goals and for measuring progress in achieving those goals. We are sending copies of this report to the Vice President; the Director, Office of Management and Budget; other interested congressional committees; the governors of the states visited; and other interested parties. We also will make copies available to others on request. The major contributors to this report are listed in appendix II. Please contact Charles I. Patton, Associate Director, or me at (202) 512-8676 if you have any questions. GPRA requires federal agencies to develop, no later than the end of fiscal year 1997, 5-year strategic plans that include the agency’s mission statement, identify the agency’s goals, and describe how the agency intends to achieve those goals through its activities and through its human, capital, information, and other resources. Under GPRA, agency strategic plans are the starting point for agencies to set goals for programs and measure the performance of the programs in achieving those goals. In addition, GPRA requires agencies to submit, beginning in fiscal year 1999, annual program performance plans to the Office of Management and Budget (OMB) and program performance reports to the President and Congress. Program performance plans are to describe how agencies are to meet their program goals through daily operations and establish target levels of performance for program activities. In these plans, agencies are to define target levels in objective, measurable terms so that actual achievement can be compared against the targets.
Agencies’ individual performance plans are to provide information to OMB for an overall federal government performance plan that OMB is to develop and submit annually to Congress with the President’s budget. In their program performance reports, agencies are to show (1) program achievements compared with the targets specified in the performance plans and (2) when a target has not been met, an explanation of why the target was not met and what actions would be needed to achieve the unmet goals. GPRA also allows agencies to propose in their annual performance plans that OMB waive certain administrative requirements. These administrative waivers would provide federal managers with more flexibility to structure agency systems to better support program goals. Under GPRA, the administrative requirements eligible for waiver would be nonstatutory and involve only budgeting and spending within agencies. In return, agencies would be held accountable for “achieving higher performance.” Finally, GPRA requires a 2-year test of performance budgeting in not less than five agencies, at least three of which have had experience developing performance plans. Under the test, performance budgets are to provide Congress with information on the direct relationship between proposed program spending and expected program results and the anticipated effects of varying spending levels on results. GPRA calls for phased implementation so that selected agencies can develop experience from implementing its requirements before implementation is required for all agencies. In fiscal year 1994, OMB selected 53 agencies or programs to pilot strategic planning, performance planning, performance measurement, and performance reporting and will select additional pilot agencies in fiscal years 1995 and 1996. OMB also will be selecting agencies from among the initial pilots to pilot managerial flexibility and test performance budgeting in fiscal years 1995 and 1998, respectively. Although GPRA does not call for governmentwide implementation of strategic planning and performance planning until fiscal years 1998 and 1999, respectively, OMB and the administration’s National Performance Review have strongly endorsed these reforms and have encouraged all agencies to develop their strategic and performance plans as soon as possible.
Management Reforms: Examples of Public and Private Innovations to Improve Service Delivery (GAO/AIMD/GGD-94-90BR, Feb. 11, 1994).
Improving Government: Actions Needed to Sustain and Enhance Management Reforms (GAO/T-OCG-94-1, Jan. 27, 1994).
Management Reform: GAO’s Comments on the National Performance Review’s Recommendations (GAO/OCG-94-1, Dec. 3, 1993).
Improving Government: Measuring Performance and Acting on Proposals for Change (GAO/T-GGD-93-14, Mar. 23, 1993).
Federal Performance Management: Agencies Need Greater Flexibility in Designing Their Systems (GAO/GGD-93-57, Feb. 24, 1993).
Performance Budgeting: State Experiences and Implications for the Federal Government (GAO/AFMD-93-41, Feb. 17, 1993).
Quality Management: Survey of Federal Organizations (GAO/GGD-93-9BR, Oct. 1, 1992).
Program Performance Measures: Federal Agency Collection and Use of Performance Data (GAO/GGD-92-65, May 4, 1992).
Organizational Culture: Techniques Companies Use to Perpetuate or Change Beliefs and Values (GAO/NSIAD-92-105, Feb. 27, 1992).
Government Management Issues (GAO/OCG-93-3TR, Dec. 1992).
Financial Management Issues (GAO/OCG-93-4TR, Dec. 1992).
Information Management and Technology Issues (GAO/OCG-93-5TR, Dec. 1992).
Program Evaluation Issues (GAO/OCG-93-6TR, Dec. 1992).
The Public Service (GAO/OCG-93-7TR, Dec. 1992).
The Chief Financial Officers Act: A Mandate for Federal Financial Management Reform (GAO/AFMD-12.19.4, Sept. 1991).
Service to the Public: How Effective and Responsive Is the Government? (GAO/T-HRD-91-26, May 8, 1991).
Management Practices: U.S. Companies Improve Performance Through Quality Efforts (GAO/NSIAD-91-190, May 2, 1991).
Pursuant to a congressional request, GAO reviewed six states' experiences in implementing management reforms and how their experiences could assist federal agencies in implementing the Government Performance and Results Act (GPRA). GAO found that: (1) federal agencies may be able to better implement GPRA and increase program effectiveness if they adopt results-oriented management reforms similar to those implemented by the six states; (2) to effectively implement GPRA, federal agencies will need long-term commitment, cooperation between the executive and legislative branches, and mission-related program goals and performance measures; (3) the states' use of strategic planning has helped build consensus on statewide goals and has fostered interagency cooperation; (4) the states use a variety of performance measures to gauge agencies' progress in meeting strategic goals; (5) the states train their staff on how to use performance measures and involve them in the development process to effectively implement results-oriented reform; (6) the states have aligned their information, human resources, budgeting, and financial management systems to better support managers in their efforts to achieve statewide goals and to obtain stakeholders' agreement on strategic goals; (7) the states have provided program managers with progress reports, ways to assess staff achievement, and greater flexibility to meet program goals; and (8) although the effects of the management reforms could not be fully determined, the states remain committed to change.
The Park Service is composed of headquarters, seven regional offices, and 417 park units that cover 84 million acres across all 50 states. Park units include national parks, national battlefields, national recreation areas, and national lakeshores. The Park Service has long worked with concessioners who provide visitor services, such as lodging and recreational opportunities, in national park units, and some concessions operations are nearly as old as the parks in which they operate. For example, the Many Glacier Hotel in Glacier National Park opened for business in 1915, 5 years after the park was established in Montana, and is currently operated by a concessioner. As of April 2016, the Park Service had 488 concessions contracts in over 100 park units, and in 2015, such operations collectively generated about $1.4 billion in gross revenues and paid about $104 million in franchise fees to the Park Service. Concessioners provide a vast array of services throughout the national park system. Table 1 shows the most commonly offered services under concessions contracts. Concessions contracts also vary in size and scope. The largest concessions contracts in terms of revenues generated generally offer lodging, food, and retail services. According to a Park Service presentation, these three services accounted for over half of the revenue generated under concessions contracts in 2014. In 2015, the five largest concessions contracts generated more than $50 million each in gross revenues for services in Yosemite, Yellowstone, and Grand Canyon national parks, along with Statue of Liberty National Monument and Glen Canyon National Recreation Area. These contracts generally offer lodging, food, and retail services. In addition, in 2015, the 20 concessions contracts with the largest gross revenues accounted for about two-thirds of the total revenues generated under all concessions contracts. In contrast, 177 concessions contracts each generated less than $100,000 in annual revenues in 2015. Many of these contracts were for guide services and outfitters. Some concessioners, such as those that operate lodges, are assigned buildings or land owned by the federal government and are responsible for maintaining them during the life of the contract. As shown in figure 1, the concessions contracting process consists of three main steps: prospectus development, contract award, and contract management. Park Service staff at the headquarters, regional, and park unit levels serve different roles in the concessions program. Park Service headquarters provides guidance on the concessions program and oversees the process for developing prospectuses for contracts with anticipated gross revenues of more than $5 million a year or a contract term of more than 10 years, which are known as “headquarters-level” contracts. As of April 2016, there were 52 headquarters-level contracts, and these contracts accounted for almost 80 percent of the total gross revenues generated by concessions contracts in 2015. Within headquarters, four branches have different roles in the concessions program, as shown in table 2. According to Park Service officials, regional staff are also involved in developing prospectuses for headquarters-level contracts. In addition, regional staff take the lead in developing prospectuses for concessions operations that are under $5 million a year in revenue or 10 years in length and answer questions that park unit staff may have on the concessions program.
The primary role of park unit staff is to oversee concessions contracts once they have been awarded. Park unit staff also provide input into prospectus development, such as what services should be allowed. According to Park Service officials, park unit staff include full-time, dedicated staff, who are generally known as concessions management specialists, and collateral duty staff. Collateral duty staff have other responsibilities at the park in addition to overseeing concessions, such as serving as a law enforcement ranger. The 1998 Concessions Act authorizes the concessions program at the Park Service to provide necessary and appropriate services to visitors that are consistent with the preservation and conservation of a park unit’s resources. The 1998 Concessions Act repealed the National Park Service Concessions Policy Act, which was enacted in 1965 (1965 Concessions Act), and made changes to the concessions program. For example, the 1998 Concessions Act reduced the ability of the Park Service to offer a preferential right of renewal to incumbent concessioners, which allows an incumbent concessioner to match a better bid offered by a competitor. The 1998 Concessions Act allows a preferential right of renewal only for concessioners offering guide or outfitting services or those with anticipated gross revenues under $500,000. In contrast, the 1965 Concessions Act allowed a preferential right of renewal for all concessioners. The 1998 Concessions Act also changed how concessioners are to be compensated for capital improvements made to buildings or land assigned to them. Under the 1998 Concessions Act, concessioners that make approved capital improvements receive a leasehold surrender interest (LSI) for these improvements. LSI generally represents the initial value, adjusted for inflation and depreciation, of capital improvements a concessioner makes to an assigned property, such as building a new structure, completing a major rehabilitation, or installing a non-removable piece of equipment, known as a fixture. LSI took the place of a system of compensation for capital improvements called “possessory interest” that existed under the 1965 Concessions Act. According to Park Service officials, one of the key differences between these systems is the method used to calculate their values; LSI is easier to calculate than possessory interest because the act defines a formula for calculating LSI. The Park Service tracks LSI balances during the contract term, and if a contract is awarded to a different concessioner when the contract ends, the 1998 Concessions Act requires the previous concessioner to be paid for any LSI. Table 3 compares selected provisions in the 1965 Concessions Act and the 1998 Concessions Act. Our 2000 report on the concessions program, which we issued prior to the implementation of the 1998 Concessions Act, highlighted three management challenges: Inadequate staff qualifications and training: We generally found that Park Service staff at the headquarters, regional, and park unit levels did not have the necessary skills or training to implement the program.
We found that concessions staff were often transferred from other career fields at the Park Service, such as interpretive or law enforcement rangers, and that the agency’s view was that “anyone could do concessions.” Inability to manage contract workload and expired contracts: We found that the Park Service was unable to manage its concessions contract workload and that this had resulted in approximately 45 percent of concessions contracts and permits (283 of 630) being expired as of December 31, 1999, meaning that these contracts had exceeded their original term and were under an extension. This backlog of expired contracts left concessioners with little incentive to invest in facilities under short-term extensions because the extensions gave them little time to earn a return on their investment. Lack of accountability: We found that the organizational structure of the Park Service impeded accountability because Park Service headquarters did not have direct authority over how park units were implementing the program. We also found that confusion existed about the roles and responsibilities in the program among headquarters, regional, and park unit staff, and that the review process for concessioners was not adequate. To address some of these challenges, we recommended that the Park Service improve the qualifications of concessions staff, use contractors in the program, or do some combination of both. We also recommended that the Park Service establish a formal process to conduct periodic independent inspections of concessioners’ lodging facilities. The Park Service agreed with these recommendations and has taken steps to implement them, as discussed below. The Park Service has made several changes to the concessions program since our 2000 report. Specifically, the Park Service has hired concessions staff with relevant skills or educational backgrounds, is using consultants, and has increased training opportunities. In addition, the Park Service has reduced the number of concessions contracts under extension because it is issuing contracts on a regular basis. The Park Service’s headquarters office has also increased its involvement in the program and is collecting more data from concessioners. However, we found that some of these data are incomplete because concessioners did not submit required financial reports or because errors in the data were not identified in the agency’s review of the reports. In our 2000 report, we found that Park Service concessions staff generally did not have the business, financial, or contracting backgrounds needed to successfully carry out the concessions program. Since then, the agency has taken steps to hire concessions staff with relevant qualifications, particularly at the headquarters or regional level. Two park superintendents who have been in the Park Service for more than 30 years said the agency was hiring staff with business backgrounds and specialized skills for the program instead of moving people from other park unit positions, such as rangers, into the concessions program. This is a change from the time of our 2000 report, when we found that the Park Service typically filled concessions positions by transferring staff from other career fields. According to our interviews with Park Service concessions staff at headquarters, regional offices, and park units, many current program staff have relevant experience or educational backgrounds, as follows.
Headquarters: The chief of commercial services and the four branch chiefs have educational degrees in relevant fields, such as hospitality or business, have prior work experience in relevant fields, or have worked in the concessions program for some time. For example, the commercial services chief, who was hired in November 2014, has more than 25 years of experience working in the hospitality industry along with degrees in hospitality and business. Regional offices: Most of the seven commercial services chiefs who oversee the concessions program in their regions have relevant educational degrees or work experience. Specifically, three of the chiefs had prior relevant experience, such as working on contracts in the private sector, while three others had worked in the concessions program for more than 8 years. In addition, several regional office concessions staff have similar backgrounds and experiences. For example, concessions staff at three of the regional offices have business degrees. Park units: Concessions staff at the 20 park units we interviewed have a variety of backgrounds. Some of these park staff have degrees in business or hospitality or several years of experience in the concessions program, while others do not. Those staff with fewer relevant qualifications were generally at parks that did not have large concessions programs, in terms of the number of contracts at the park or the revenues generated under those contracts. For example, at one park unit that had one small concessions operation for a campground, the one collateral duty staff member overseeing this operation did not have a relevant background or much experience in the program. In our 2000 report, we recommended that the Park Service consider using contractors in the concessions program for activities such as writing prospectuses and performing financial analysis. The 1998 Concessions Act directs the Park Service to use contractors to conduct or assist in various aspects of the concessions program, including health and safety inspections, analysis of rates charged to the public, and the preparation of the financial aspects of prospectuses. According to Park Service officials and consultants we interviewed, the agency uses consultants in certain aspects of the concessions program. In addition, the draft guidance directs concessions staff to use consultants to help develop prospectuses for headquarters-level contracts. Some of the ways in which consultants are used, according to our interviews, are as follows: Condition assessment: Consultants develop inventories of park assets, such as buildings and land that are part of concessions operations, and then assess their condition. The results of these assessments are used to develop maintenance plans for concessioners to follow under their contracts. Financial analysis: During prospectus development, consultants develop models that estimate future costs and revenues for concessions operations. These models are used to develop the minimum franchise fee that is published in a prospectus (a simplified sketch of such a model follows this list). Rate administration: Consultants develop tools to support and conduct rate comparability studies to respond to requests from concessioners to change the rates they charge for visitor services. Environmental audits: Consultants examine environmental management programs that are part of concessions operations, such as the use of hazardous chemicals in certain operations.
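As a rough illustration of the financial analysis work described above, here is a minimal sketch of a pro-forma calculation a consultant might use when recommending a minimum franchise fee. The Park Service’s actual methodology is not described in this report, so the formula, the assumed profit margin, and all figures are hypothetical.

```python
# Hypothetical sketch: derive a minimum franchise fee rate from projected
# revenues and costs while leaving a bidder an assumed profit margin.
# The Park Service's actual fee methodology is not described in this report.
def minimum_franchise_fee_rate(projected_revenue: float,
                               projected_costs: float,
                               required_margin: float) -> float:
    """Largest fee rate (share of gross revenue) that still leaves the
    assumed profit margin for the concessioner."""
    profit_before_fee = projected_revenue - projected_costs
    max_fee = profit_before_fee - required_margin * projected_revenue
    return max(0.0, max_fee / projected_revenue)

# Placeholder multiyear averages for a hypothetical lodging contract.
rate = minimum_franchise_fee_rate(projected_revenue=10_000_000.0,
                                  projected_costs=8_500_000.0,
                                  required_margin=0.10)
print(f"Illustrative minimum franchise fee: {rate:.1%} of gross revenue")
```

With these placeholder figures, the sketch yields a 5.0 percent fee; in practice any such model would rest on the detailed cost and revenue projections the consultants prepare.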
In our 2000 report, we also found that once staff were transferred from other fields into the concessions program, there was limited training available to them, and this further limited their ability to carry out their duties. The Park Service now offers additional training courses on the concessions program to help staff improve their skills, as shown in table 4. Specifically, the agency offers four classroom-based courses that are several days in length. According to a senior Park Service official, the Evaluation and Pricing course predated the implementation of the 1998 Concessions Act, but the other three classes were developed since the law’s implementation to provide additional training to concessions staff. As of November 2016, concessions staff were directed to complete only the Evaluation and Pricing course; the other three courses were optional. Many of the park unit staff we interviewed said they had taken one or more of the four classroom training courses. At one park unit, a collateral duty staff member said that he did not have a background in the concessions area and that the courses had helped him understand how to administer the program. The Park Service supplements these classroom courses with online training and monthly conference calls during which different concessions topics are covered. Based on our analysis of Park Service data, the Park Service has extended fewer concessions contracts past their contract terms. Specifically, as we reported in 2000, approximately 45 percent of concessions contracts and permits (283 of 630) had expired as of December 31, 1999, and many of these had been under extension for 5 to 10 years. In contrast, as of April 2016, 28 percent (136 of 488) of concessions contracts were under extension, and 85 percent of those (116 of 136) had been under extension for 3 years or less. Table 5 shows the number of contracts that were under extension by region, as of April 2016. The Alaska region had the highest percentage of contracts under extension. According to Park Service officials from this region, limited staff were available to help prepare the prospectuses needed to award new contracts, so the region extended some existing contracts. According to Park Service officials, the decrease in the proportion of contracts under extension is due, in part, to a provision in the 1998 Concessions Act that limits contract extensions to a maximum of 3 years. Officials said that this led the agency to award contracts on a regular basis instead of extending them indefinitely, as had been allowed under the 1965 Concessions Act. In our 2000 report, we found that under the agency’s organizational structure, Park Service headquarters did not have direct authority over how park units implemented the program and that headquarters did not have information on certain aspects of the program, such as centralized information on the condition of lodging facilities. Since then, headquarters has increased its involvement in the concessions program. For example, headquarters is actively involved in preparing prospectuses for the 52 headquarters-level contracts, which are the largest concessions contracts. Specifically, a headquarters Planning and Development branch staff member serves as one of the project managers during prospectus development for these contracts.
In carrying out this role, headquarters helps to determine how to structure the concessions contract, including what services to allow under the contract and whether to permit large-scale capital improvements during the term of the contract, such as constructing new buildings. According to the Park Service's draft guidance on the concessions program, the Director of the Park Service must also approve any capital projects estimated to cost over $1 million. According to Park Service officials, the increased oversight of LSI-eligible projects is intended to prevent LSI balances from growing too high under certain contracts. High LSI balances can discourage competition because few companies have the resources to purchase the LSI from the previous concessioner, according to Park Service officials. The Park Service also has some initiatives under way to help oversee the performance of concessioners. For example, Park Service headquarters developed a data system to track the ratings that park unit staff give to concessioners as part of the annual overall review process, according to a senior Park Service official. In addition, this official said that 10 percent of annual overall reviews of concessioners will be subject to further review by Park Service headquarters staff, beginning in 2017. Park Service headquarters currently reviews these annual ratings for completeness, but it aims to determine whether the rating given to a concessioner is justified by the supporting narrative. In our 2000 report, we also found that the Park Service lacked centralized information on the condition of concessioner lodging facilities, which limited the agency's ability to oversee the program. Since then, the Park Service has directed that condition assessments of structures maintained by concessioners be conducted during the prospectus development process, according to Park Service officials. Information from these assessments is entered into the Park Service's Facility Management Software System, which contains data on Department of the Interior facilities. Park Service staff use this information to develop a maintenance plan for the concessions contract, which they review and update as needed, according to the agency's draft guidance and agency officials. While the Park Service has taken steps to obtain more centralized information on the concessions program, we found that some concessioners' financial reports were missing and that data in the reports we reviewed were sometimes reported incorrectly. Standard concession contracts require concessioners to submit annual financial reports to the Park Service for each concessions contract they hold. These reports provide data such as gross revenues, operational costs, and franchise fees paid. The Park Service uses these reports and the data they contain for several purposes, including to reconcile franchise fee payments received from concessioners annually and to generate financial projections that are used to develop prospectuses for contracts, according to agency officials. However, we found instances where these reports had not been submitted or where submitted reports contained data that concessioners had reported incorrectly. Some financial reports were missing: Financial reports for 2015 had not yet been submitted for 23 of 485 concessions contracts as of November 2016.
Under these contracts, a total of about $20 million in gross revenues and about $98,000 in franchise fees were reported on the most recently available financial reports, which in most instances were from 2014. According to the standard contract language the Park Service uses for concessions contracts, these reports are due within 120 days of the end of a concessioner's fiscal year, and Park Service officials said that most concessioners use a calendar year for financial reporting. This means that these concessioners should have submitted their 2015 financial reports by April 30, 2016. When we asked Park Service officials about this issue, they said that this was a recurring issue for some concessioners and that it can be hard to obtain financial reports from smaller concessioners, which sometimes have only one or two employees and whose limited resources can make it difficult to submit reports on time. Financial reports sometimes contained incorrect data: For 39 of 485 contracts, concessioners reported gross revenues but did not report paying a franchise fee in their 2015 financial report. Under these 39 contracts, a total of about $21 million in gross revenues was reported in 2015. According to Park Service data, a franchise fee should have been paid under these contracts. Park Service officials said that they believed these were instances where the concessioner had paid franchise fees but had not filled out the annual financial reports properly. They added that these inconsistencies should have been identified by the relevant park unit or the regional office during their review of these financial reports. Some park unit officials said that they are overwhelmed by the number of reports, including financial reports, they receive from concessioners and do not have time to review all of them. In addition, we found that the data from these financial reports are entered into a spreadsheet that does not contain edit checks that could identify possible errors (a simple illustration of such a check follows this discussion). We used supplemental data from the Park Service to determine that franchise fees had been paid under some of these contracts, but we were unable to confirm that franchise fees had been paid for all of them because these data were reported at a park unit level and not at a contract level. According to Standards for Internal Control in the Federal Government, agencies should obtain information from external parties in a timely manner and should have control activities to ensure that data are reliable. However, not all financial reports were submitted in the time frames required by standard contract language, and some that were submitted contained errors that were not identified by Park Service staff during the review process. Without timely or accurate financial data from concessioners, the agency could be limited in its ability to oversee certain aspects of the concessions program, such as determining whether concessioners have paid their franchise fees. In interviewing Park Service officials and concessioners, we identified some challenges in the three steps of the concessions process: prospectus development can be a lengthy and expensive process, and it can be hard to generate competition for some contracts; the agency's evaluation panels can sometimes have difficulty assessing proposals, and the award process can be lengthy; and contract management can be affected by limited staffing and confusion about how to fund capital improvements and maintenance.
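As noted above, the spreadsheet into which concessioners' financial report data are entered contains no edit checks. The sketch below shows, in general terms, the kind of automated check that could flag the inconsistency we identified (gross revenues reported with no franchise fee); it does not depict any actual Park Service system, and all column names and values are hypothetical.

```python
# Illustrative edit checks on concessioners' annual financial report
# data. Column names and sample values are hypothetical; this is not a
# depiction of any Park Service system.
import pandas as pd


def flag_for_review(reports: pd.DataFrame) -> pd.DataFrame:
    """Return records that merit manual review by park or regional staff."""
    suspect = (
        # Revenues reported but no franchise fee paid -- the kind of
        # inconsistency found in 39 of 485 contracts' 2015 reports.
        ((reports["gross_revenues"] > 0) & (reports["franchise_fee"] <= 0))
        # Negative amounts are likely data-entry errors.
        | (reports["gross_revenues"] < 0)
        | (reports["franchise_fee"] < 0)
    )
    return reports[suspect]


sample = pd.DataFrame(
    {
        "contract_id": ["CC-0001", "CC-0002"],
        "gross_revenues": [500_000.0, 750_000.0],
        "franchise_fee": [25_000.0, 0.0],
    }
)
print(flag_for_review(sample))  # flags CC-0002: revenues but no fee
```

A similar check could compare each report's submission date against the 120-day deadline in the standard contract language.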
The Park Service's commercial services strategic plan highlights many of the challenges we identified and identifies initiatives to potentially address them, but it is missing certain information, such as quantifiable performance measures that would allow the agency to track progress and determine where additional effort may be needed. Several Park Service officials said that developing prospectuses can be a lengthy or expensive process, taking as long as 4 years to complete. Some of these officials noted that the process is lengthy, in part, due to the multiple levels of Park Service review or the time it takes for consultants to conduct condition assessments of concessioner-assigned buildings or land. In addition, according to one consultant we interviewed, it can cost between $70,000 and $400,000 for a consultant to do a condition assessment of the buildings that are part of a concessions operation, depending on the size and scale of the operation. A Park Service headquarters official acknowledged that it can be costly to develop a prospectus but noted that a future contract can generate revenues that greatly exceed these costs. For example, this official told us that the agency spent about $2 million for a condition assessment of 800 buildings for a concessions contract that was anticipated to generate over $100 million in annual gross revenues and had a contract term of 20 years. In addition, several concessioners noted that responding to a prospectus can be time consuming or expensive. For example, some of these concessioners said that developing a proposal cost several tens of thousands to hundreds of thousands of dollars. A few Park Service officials and a trade association representing guides and outfitters suggested that a simplified prospectus development process for small contracts may help to reduce the time and costs of developing the agency's prospectuses and concessioners' proposals. The Park Service was directed by the 1998 Concessions Act to use a simplified process for small, individually owned entities seeking concessions contracts. The Park Service has developed guidance for one part of the prospectus development process—determining franchise fees for small concessions contracts with projected annual gross revenues of less than $250,000. In 2014, the agency began a pilot project designed to help it develop a simplified prospectus process, but it has not yet issued a prospectus using this process. According to Park Service officials, the agency plans to issue a prospectus under the pilot project by the end of 2016 and to use the results of this effort to simplify the prospectus development process for small contracts. According to Park Service officials, one of the goals of the 1998 Concessions Act was to increase competition in the concessions program, but several Park Service officials and concessioners we interviewed said that increasing competition on concessions contracts continues to be a challenge. Competition for larger contracts is limited to a few companies, in part, because some contracts have high LSI balances, according to Park Service officials. High LSI balances can discourage competition because few companies have the resources to purchase these balances from the previous concessioner, as we found in 2015. The Park Service has taken steps to manage LSI balances by either reducing high balances on existing contracts or limiting the LSI that a concessioner can incur on a new contract, as we found in 2015.
Specifically, the Park Service has reduced high LSI balances by using franchise fees to buy down these balances on contracts that would otherwise have attracted few bidders. Most notably, the agency spent almost $100 million to reduce the LSI balance for capital improvements at Grand Canyon National Park to encourage competition. On another contract, concessioners informed the Park Service that they would not submit proposals because the LSI balance was too high, according to a Park Service official. As a result of this input, the park unit plans to buy down the LSI to zero, which staff said may help generate competition. In addition, the Park Service has limited the amount of LSI that concessioners can incur on new contracts, as we found in 2015. However, according to some concessioners, not allowing concessioners to incur LSI could limit their interest in investing in concessioner-assigned buildings or land because they would not be paid for eligible capital improvements they make. Competition is also limited because over half of the Park Service's concessions contracts continue to have a preferential right of renewal, according to Park Service officials. While the 1998 Concessions Act generally prohibits the Park Service from granting a preferential right of renewal for larger contracts, the Park Service estimates that about 70 percent of its concessions contracts continue to have a preferential right of renewal. These contracts generally have less than $500,000 in gross annual revenues or are for guide services or outfitters. Competition for contracts may be limited in such situations because the incumbent can match a better bid offered by a competitor. Some guiding concessioners we spoke with said that it was important to maintain a preferential right for guides and outfitters because these types of concessioners often have specialized equipment and skills unique to the park and the service provided, such as mountain climbing. Several Park Service officials and concessioners said that it can sometimes be challenging for evaluation panel members to determine whether a bidder can perform all of the services listed in its proposal. According to the agency's draft guidance on the concessions program, panel members typically are to review information submitted as part of a bidder's proposal, and according to Park Service officials, panel members generally do not ask for additional information during the evaluation process to help assess a bidder's proposal. Some of these Park Service officials and concessioners said that this can sometimes result in the agency awarding the contract to a bidder who has a well-written proposal but might have "overpromised" in its ability to implement it. For example, a park unit official said that a concessioner submitted a proposal in response to a prospectus for a tour boat operation and included additional services not specifically required by the prospectus, such as offering a healthy food menu on board the boat. This concessioner was awarded the contract, in part, because of these additional services; however, it took years for the concessioner to follow through on providing this service, according to this park unit official. A concessioner we spoke with said that it would be helpful for park unit staff knowledgeable about the concessions operations at the park to serve as advisors to the evaluation panel to help determine whether proposed services are feasible.
According to the Park Service's draft guidance, park unit staff typically serve as technical advisors to the panel. In practice, however, park unit staff are not always included because of limited travel funds, according to a Park Service official. The evaluation panel may instead informally contact park unit staff when it has technical or park-specific questions, according to agency officials. Another challenge with the contract award phase of the process is the length of time it takes to award a concessions contract, according to some Park Service officials and concessioners. For example, one park unit official we interviewed in July 2016 said that a contract had not yet been awarded even though the evaluation panel had met the previous winter and only one proposal had been submitted. According to Park Service officials, this contract is to be awarded in December 2016. The length of time to award a contract can be affected by the need for agency review. According to the agency's draft guidance, agency officials are to review the evaluation panel's recommendation on which concessioner submitted the better proposal before announcing the winner of a contract. The Office of the Solicitor must also review documents during several stages of the concessions process, according to agency officials. In addition, the 1998 Concessions Act requires that the Park Service submit headquarters-level contracts to specified congressional committees for a period of 60 days before they can be awarded. Several Park Service officials as well as concessioners we interviewed said that the agency does not always have enough staff to adequately manage concessions contracts. These management activities include reviewing a concessioner's performance and compliance with its contract and approving concessioner rates. For example, concessions staff at two park units said they found it difficult to review required reports submitted by concessioners because of the large number of concessions contracts they managed. According to the Park Service's draft guidance on the concessions program, staff are to review and update required reports to ensure that concessioners are meeting their established operational and maintenance responsibilities and providing visitors with satisfactory services, among other things. Furthermore, two concessioners said that not having enough concessions staff at parks slows the review and approval of their requests, such as changes to their rates. For example, one concessioner said that it took several months for concessions staff at a park unit with limited staff to respond to a request to adjust the prices for food the concessioner sold. Staff at some park units we contacted manage the concessions program as a collateral duty, meaning they have other primary responsibilities, such as law enforcement. A few of these park unit concessions staff said that managing the concessions program as a collateral duty works well because the concessions program is not large at their park. However, staff at other park units said that managing the concessions program as a collateral duty is challenging because their primary duties take up the majority of their time. As a result, they do not always have time to proactively manage concessions contracts, such as by approving new prices for services. Park Service officials said headquarters offers a technical assistance program to parks that need support.
In a typical year, the program funds a total of 10 onsite trips, during which headquarters staff travel to park units to assist park unit staff in areas such as conducting inspections of concessioner operations. For the 20 park units we contacted, we generally found that parks that had more contracts or headquarters-level contracts also had more staff managing the program (see table 6). One exception was Glacier Bay National Park and Preserve, where two full-time concessions staff are responsible for overseeing 39 concessions contracts. In addition, parks in our review that had concessions contracts generating higher revenues generally had more staff. For more information on gross revenues generated at these different park units, see appendix II. Limited staffing levels can also affect a regional office's ability to support parks in the concessions program, according to Park Service officials. According to regional officials, parks that do not have full-time concessions staff can rely on the regional offices for assistance in managing the concessions program. In 2015, the agency tried to address staff shortages in regional offices by allowing each of them to use funding from franchise fees to hire an additional full-time concessions management specialist to help with the workload, according to Park Service officials. As of November 2016, six of the seven regional offices had requested funding to hire additional staff, and three of these offices had hired staff using this funding. Some concessioners we interviewed said that it was challenging to determine how to fund maintenance or capital improvements. Concessioners that have been assigned buildings or land, such as hotels or restaurants, are required to maintain these assets, and they may also be required to undertake capital improvement projects. In the area of maintenance and capital improvements, projects may be funded in several ways, depending on the type of work required: Routine maintenance: Activities, such as painting or replacing carpet, are paid for by the concessioner. Personal property: Certain items, such as removable equipment and furniture, are paid for by concessioners. Replacement of building components: Concessions contracts may require concessioners to set aside a percentage of their revenues in a repair and maintenance reserve fund that they establish and manage. These funds can be used to replace a building component, such as a roof or windows, at the end of its useful life. These funds cannot be used for large-scale capital improvements that would qualify for LSI, and they cannot be used for routine maintenance needs. Large capital projects or fixtures: Concessioners can also be required under their contracts to undertake capital improvements, which are eligible for LSI, such as constructing a new building, completing a major rehabilitation, or replacing fixtures. The Park Service tracks LSI balances during the contract term, and if a contract is awarded to a different concessioner when the contract ends, the 1998 Concessions Act requires that the previous concessioner be paid for any LSI. Figure 2 shows different types of maintenance or improvements that could be made by a concessioner in a lodging room and the applicable funding category. As this figure shows, a single project to update a lodge room could involve a concessioner determining whether a specific activity qualifies for one of the four categories mentioned above (see the illustrative sketch following this paragraph).
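As a rough illustration of the four funding categories described above, the sketch below maps common lodge-room work items to funding sources, following the examples given in this report (painting and carpet replacement as routine maintenance, furniture as personal property, roof or window replacement as reserve-fund work, and new construction or major rehabilitation as LSI-eligible). It is illustrative only, not a Park Service tool; actual determinations depend on the terms of the specific contract.

```python
# Illustrative mapping of lodge-room work items to the four funding
# categories described above. The assignments follow the examples in
# this report; actual determinations depend on the specific contract.
FUNDING_CATEGORIES = {
    "painting": "routine maintenance (concessioner-paid)",
    "carpet replacement": "routine maintenance (concessioner-paid)",
    "furniture": "personal property (concessioner-paid)",
    "roof replacement": "repair and maintenance reserve fund",
    "window replacement": "repair and maintenance reserve fund",
    "new building construction": "large capital project (LSI-eligible)",
    "major rehabilitation": "large capital project (LSI-eligible)",
}


def categorize(work_items):
    """Group a project's work items by the funding category each falls under."""
    grouped = {}
    for item in work_items:
        category = FUNDING_CATEGORIES.get(item, "undetermined: consult the contract")
        grouped.setdefault(category, []).append(item)
    return grouped


# A single lodge-room update can draw on several funding categories.
project = ["carpet replacement", "furniture", "window replacement"]
for category, items in categorize(project).items():
    print(f"{category}: {', '.join(items)}")
```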
A few concessioners said it was challenging to determine which category of funding a repair or improvement qualifies for within a single project. For example, one concessioner had to replace some carpet, furniture, and drywall at a historic lodge due to mold, a project that cost about $50,000. The concessioner said it was surprised to learn that its repair and maintenance reserve fund would cover only the drywall replacement, which cost about $4,000, and that the rest of the expense would be the concessioner's responsibility. Another concessioner said it was confusing to determine which "bucket" of money to use for improvements made as part of one project. As a result, the concessioner was still trying to determine how to account for different parts of a project completed years ago. Confusion exists, in part, because the Park Service has not finalized certain guidance or made it publicly available. Park Service officials said that the agency provides information on the use of LSI and repair and maintenance reserve funds to concessioners upon request and that some information on these topics is in the contracts that concessioners sign. However, we found that detailed guidance on how to fund particular repairs and capital improvements was not always readily available and that some of the guidance for these funding sources is still in draft. Specifically, the Park Service issued draft guidance on LSI in 2012 and updated it in another draft dated October 2015, but as of December 2016 this guidance had not been finalized and was not available on the Park Service's website. Two concessioners said that the agency's list of fixtures that qualify for LSI, which appears in the guidance, periodically changes, adding to the challenge of determining what qualifies for LSI. Similarly, the Park Service has developed internal guidance on the use of repair and maintenance reserve funds, but as of December 2016 this guidance was also not available on its website for concessioners to consult. Under Standards for Internal Control in the Federal Government, agencies should communicate with external parties to achieve their objectives. According to Park Service officials, one of their goals in implementing the 1998 Concessions Act was to improve the maintenance of concessioner-assigned buildings. However, confusion about how to fund maintenance and capital improvements will likely continue without finalized guidance that is publicly available to concessioners. This could lead to delays in undertaking needed maintenance projects and capital improvements, which could further contribute to the agency's deferred maintenance in concessioner-assigned buildings, which exceeded $400 million in fiscal year 2015, according to Park Service officials. The Park Service developed a 5-year commercial services program strategic plan in 2015 to help improve the commercial services program, including concessions management. This plan was an update to the commercial services improvement plan that had been in place for the prior 10 years. According to agency officials, the 2015 strategic plan provided the agency with an opportunity to review progress made on past goals and establish new plans going forward. The Park Service developed this plan based on interviews with concessioners, consultants, and park unit staff, according to a senior Park Service official.
As a result, the strategic plan recognizes many of the challenges that we also identified in our interviews with Park Service officials and concessioners. For example, the plan has a goal to improve the prospectus and contract award process, which aims to reduce costs and improve efficiency for the government and bidders. Similarly, the plan aims to attract more bids for concessions contracts, increase the accuracy of financial reporting, and increase the percentage of concessions staff who receive training. While the Park Service's strategic plan recognizes many challenges that the concessions program faces, we found that the plan is missing quantifiable and measurable performance goals, which would help ensure that these challenges are addressed. Specifically, the plan identifies various performance measures, but it has no related targets or timeframes that would clearly identify the level of performance the agency is trying to achieve and by when. For example, within the goal to improve the prospectus and contract award process, the agency has identified four performance measures, one of which is "change in percent of responses to prospectuses (increase)." While the plan notes that the Park Service is aiming to increase the percentage of responses, which provides a sense of what the agency is trying to achieve, it does not state by how much (a target) or by when (a timeframe), two key aspects of a performance goal. According to agency officials, the strategic plan is still a work in progress, and the agency plans to develop a process to track performance in 2017. Until this effort is complete, it is unclear whether this process will include targets and timeframes. As we have previously found, a critical element in an organization's efforts to manage for results is its ability to set meaningful goals for performance and to measure progress toward these goals. The performance planning and reporting framework put into place by the Government Performance and Results Act of 1993 (GPRA), as updated by the GPRA Modernization Act of 2010, provides important tools to decisionmakers. For example, agencies are to develop performance goals that define the level of performance to be achieved in each fiscal year and to express those goals in an objective, quantifiable, and measurable form. Without clearly defined performance goals that would provide a basis against which results can be compared, it will be difficult for the Park Service to track its progress in these areas and determine where additional effort may be needed to address identified challenges to the concessions program. Concessioners help to provide a range of services to visitors to national park units. The Park Service has made positive changes in many of the areas that we identified as challenges in our 2000 report, such as obtaining more centralized information to oversee the concessions program. However, in some instances, required reports from concessioners were not provided on time, or those submitted contained incorrect financial data that were not identified in the review process. Without more timely and accurate financial data from concessioners, the agency could be limited in its ability to oversee certain aspects of the concessions program, such as whether concessioners have paid franchise fees. In addition, Park Service officials and concessioners identified ongoing challenges with the concessions program.
Specifically, some concessioners said they find it challenging to determine how to fund maintenance or capital improvements on buildings or land they can be assigned under their contracts. This is, in part, because some guidance in this area is not finalized or publicly available to concessioners. The Park Service’s strategic plan for the commercial services program recognizes many challenges facing the concessions program and has identified goals that may address some of them. However, this plan lacks performance goals with targets that specify desired outcomes and timeframes. Without clearly defined performance goals, it will be difficult for the Park Service to track its progress in addressing these challenges and determine where additional effort may be needed. To help improve oversight of the concessions program, we recommend that the Secretary of the Interior direct the Director of the National Park Service to take the following three actions: review the financial reporting process and make any necessary adjustments to help ensure timely and accurate reporting of data on annual financial reports; finalize guidance on maintenance and capital improvements and make it publicly available to concessioners; and develop performance goals with targets and timeframes in its commercial services strategic plan. We provided a draft of this report to the Department of the Interior for review and comment. The GAO Audit Liaison from the Department of the Interior responded via e-mail, stating that the department agreed with our recommendations and providing technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to examine (1) how the concessions program has changed since our 2000 report and (2) any ongoing challenges in the concessions program. To address these objectives, we examined relevant laws, regulations, and Park Service documents. Specifically, we examined the National Park Service Concessions Management Improvement Act of 1998 (1998 Concessions Act), its associated regulations, and the National Park Service Concessions Policy Act. We examined the Park Service’s draft guidance on commercial services, known as Reference Manual 48A, along with guidance on repair and maintenance reserve accounts, and draft guidance from 2012 and 2015 on leasehold surrender interest. We reviewed some of our prior reports on the concessions program. In particular, we reviewed our 2000 report on the concessions program, which identified management challenges facing the concessions program prior to the implementation of the 1998 Concessions Act. 
We also examined the Park Service's commercial services program strategic plan, which includes initiatives to help address challenges in the concessions program, and compared this plan with our past work on leading practices in strategic planning, as applicable. In addition, we obtained and analyzed data on concessions contracts from the Park Service. Specifically, we analyzed administrative data, provided to us in April 2016, on the concessions contracts in place, the services they provided, and their contract terms. We used these data to conduct various analyses, including identifying the number of concessions contracts, the services offered under these contracts, and whether these contracts were under extension. We analyzed financial data on concessions contracts, including their gross revenues and franchise fees paid, for 2015, the most recent year for which data were available. We used these data to determine the gross revenues that were generated under concessions contracts and the franchise fees that were paid. To determine the reliability of these data, we interviewed agency officials familiar with the data and conducted electronic testing. We found the data to be sufficiently reliable for our purposes, which included providing information on the number of contracts, the services provided under contracts, the number of contracts under extension, and the total gross revenues and franchise fees in the concessions program. In our report, we noted some limitations in the financial data. Specifically, we found that some annual financial reports for 2015 were missing and that some of the financial reports contained incorrect data. However, we concluded that we could still use the financial data in our report because of the small number of contracts with data issues and because the totals we report for gross revenues and franchise fees would not be substantially affected by the issues we identified. We interviewed Park Service officials at the headquarters, regional, and park unit levels to better understand staff qualifications and training and the concessions program, as well as their perspectives on ongoing challenges in the program. We used a standard set of questions to obtain information on these topics. At the headquarters level, we interviewed the chief of the commercial services office, who oversees the concessions program, along with the branch chiefs of all four branches of the commercial services office—Asset Management, Planning and Development, Financial Analysis, and Contract Management. At the regional level, we interviewed regional commercial services chiefs in all seven regions—Alaska, Intermountain, Midwest, National Capital, Northeast, Pacific West, and Southeast—as well as concessions staff in these offices who help to manage the concessions program. At the park unit level, we interviewed concessions staff involved in managing concessions at 20 park units that had one or more concessions contracts. Specifically, we interviewed staff from 2 park units in person (Mount Rainier and Olympic national parks) and staff from the remaining 18 park units via phone to ask about their experiences in managing concessions contracts. We selected a range of parks that varied by region; the number of visitors to the park; type of park (i.e., scenic versus historical); and the size of the concessions program at these parks. We interviewed officials from at least two park units in all seven of the Park Service's regions.
Appendix II lists the park units that we contacted and information on the concessions contracts in these parks. We also interviewed 21 concessioners, including at least one concessioner that operated in each of the 20 parks we contacted, to understand their perspectives on the concessions program. We selected a range of concessioners that varied by the gross revenues their operations generated and the types of services they provided under their contracts. We used a standard set of questions to obtain their views on the concessions process and any challenges they face. To identify the most common challenges mentioned in our interviews, we performed a content analysis of the answers to our interview questions for the 48 interviews we conducted with Park Service officials and concessioners. For reporting purposes, we categorized their responses as follows: "several" represents an answer mentioned in more than 10 of these interviews; "some" represents an answer mentioned in 7 to 10 of these interviews. We also interviewed two trade groups, the National Park Hospitality Association and America Outdoors Association, because they represent a variety of concessioners in the program. In addition, we conducted interviews with stakeholders who were familiar with the concessions program, including consultants who help the Park Service implement the program, academics in the hospitality field, and lawyers who represent concessioners. The views from the interviews we conducted are not generalizable to all parks, concessioners, or stakeholders, but they provide a range of perspectives on the concessions program. We conducted this performance audit from January 2016 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 7 provides information on the number of concessions contracts, the gross revenues generated under them, and the services offered at the park units we contacted. In addition to the individual named above, Elizabeth Erdmann (Assistant Director), Scott Heacock, and Carmen Yeung made key contributions to this report. Additional contributions were made by Penny Berrier, Anna Brunner, Greg Campbell, Antoinette Capaccio, Cindy Gilbert, Benjamin T. Licht, Ying Long, Guisseli Reyes-Turnell, and Dan Royer.
The 1998 Concessions Management Improvement Act governs concessions services at national parks. In 2016, the Park Service managed 488 concessions contracts, and such contracts generated about $1.4 billion in gross revenues in the prior year. Under these contracts, companies and individuals operate businesses in parks, including lodges, restaurants, and recreational services. GAO was asked to review the concessions program. This report examines (1) how the concessions program has changed since GAO's 2000 report and (2) any ongoing challenges in the concessions program. To conduct this work, GAO examined Park Service policy, guidance, and relevant laws and regulations; analyzed Park Service data on concessions contracts; interviewed Park Service staff at headquarters, all seven regions, and 20 park units, selected for size of concessions program and park type, among other things; and interviewed concessioners and stakeholders, such as consultants familiar with the concessions program. The Department of the Interior's National Park Service (Park Service) has made several changes to its concessions program since GAO issued a report on the program in 2000. In that report, GAO highlighted three management challenges: (1) inadequate qualifications and training of concessions staff; (2) a backlog of expired contracts that had been extended; and (3) a lack of accountability in the concessions program. In this review, GAO found that the Park Service has taken steps to address these challenges: Staff qualifications: The Park Service has hired many concessions staff with relevant skills or educational backgrounds, such as in hospitality services or business. In addition, the Park Service developed several training classes for concessions staff to help improve their skills. Contracts under extension: The Park Service has reduced the percentage of extended contracts, from about 45 percent in 2000 to 28 percent in 2016. Accountability: Park Service headquarters has increased its involvement in the concessions program and has centralized more information on the program. However, GAO found instances in which financial reports that concessioners were to submit to the Park Service were not submitted in a timely manner or in which data in the submitted reports were inaccurate. Park Service staff did not identify these discrepancies when reviewing the reports. Without timely and accurate financial data from concessioners, the agency could be limited in its ability to oversee certain aspects of the program, such as determining whether concessioners paid required fees. GAO identified some ongoing challenges in each of the three steps of the concessions process. First, developing a prospectus, which provides information on a concessions operation to potential bidders, can be a lengthy and expensive process, and it can be hard to generate competition. Second, the agency's evaluation panels can sometimes have difficulty assessing some proposals, and the award process can be lengthy. Third, contract management can be affected by limited staffing and by confusion among concessioners about how to fund maintenance and capital improvements on buildings or land assigned to them by the Park Service. This situation exists, in part, because the Park Service has not yet finalized related guidance and made it publicly available to concessioners. The Park Service's commercial services strategic plan recognizes many of the challenges GAO identified and lists goals to potentially address them.
For example, the plan has a goal to improve the prospectus and contract award processes by reducing costs and improving efficiency for the government and bidders. In addition, the plan aims to attract more bids for concessions contracts, increase the accuracy of financial reporting, and increase the percentage of concessions staff who receive training. However, these goals do not have targets or timeframes for their completion. Leading practices indicate that it is critical for an agency to set meaningful performance goals and to measure progress toward these goals. Without clearly defined performance goals that contain targets or timeframes, it will be difficult for the Park Service to track its progress in these areas and determine where additional effort may be needed to address identified challenges in the concessions program. GAO recommends that the Park Service review and adjust its process to help ensure timely and accurate reporting of financial data from concessioners, finalize its guidance and make it public, and develop performance goals with targets and timeframes in its commercial services strategic plan. The Department of the Interior agreed with GAO's recommendations.
During the past 20 years, state, local, and tribal governments as well as businesses have expressed concerns about congressional and regulatory preemption of traditionally nonfederal functions and the costs of complying with federal regulations. The executive and legislative branches have each attempted to respond to these concerns by issuing executive orders and enacting statutes requiring rulemaking agencies to take certain actions when they issue regulations with federalism or intergovernmental relations effects. Two prime examples of these responses are Executive Order 12612 ("Federalism") and the Unfunded Mandates Reform Act of 1995 (UMRA). Executive Order 12612, issued by President Reagan in 1987, established a set of fundamental principles and criteria for executive departments and agencies to use when formulating and implementing policies that have federalism implications. The executive order says that federal agencies should refrain from establishing uniform, national standards for programs with federalism implications, and when national standards are required, they should consult with appropriate officials and organizations representing the states in developing those standards. The order says that regulations and other policies have federalism implications if they "have substantial direct effects on the States, on the relationship between the national government and the States, or on the distribution of power and responsibilities among the various levels of government." Executive Order 12612 also contains specific requirements for agencies. For example, the order requires the head of each agency to designate an official to be responsible for ensuring the implementation of the order. That official is required to determine which proposed policies have sufficient federalism implications to warrant preparation of a "federalism assessment." The assessment must contain certain elements (e.g., identify the extent to which the policy imposes additional costs or burdens on the states) and must accompany any proposed or final rule submitted to the Office of Management and Budget (OMB) for review under Executive Order 12866. OMB, in turn, is required to ensure that agencies' rulemaking actions are consistent with the policies, criteria, and requirements in the federalism executive order. In May 1998, President Clinton issued Executive Order 13083 ("Federalism"), which was intended to replace both Executive Order 12612 and Executive Order 12875 ("Enhancing the Intergovernmental Partnership"). However, in August 1998, President Clinton suspended Executive Order 13083 in response to concerns raised by state and local government representatives and others about both the content of the order and the nonconsultative manner in which it was developed. Therefore, Executive Order 12612 remains in effect. To assess the order's implementation, we examined the preambles of the final rules that covered agencies issued for indications that the order was considered in the rulemaking process. We focused on the April 1996 through December 1998 time frame because we were able to use our database to identify which rules were "major" under the Small Business Regulatory Enforcement Fairness Act (SBREFA) (e.g., those that have a $100-million impact on the economy). As a result, we cannot comment on rules issued outside of that time frame. Although Executive Order 12612 does not require agencies to mention the order in the preamble to their final rules or to note in those preambles whether a federalism assessment was prepared, doing so is a clear indication that the agency was aware of and considered the order's requirements.
Also, if an agency prepared a federalism assessment for a final rule, it would be logical for the agency to describe the assessment in the preamble to the rule. Our work showed that Executive Order 12612 had relatively little visible effect on federal agencies' rulemaking actions during this time frame. To summarize the nearly 3 years of data depicted in figure 1, agencies covered by the order mentioned it in the preambles to about 26 percent of the 11,414 final rules they issued between April 1996 and December 1998. Many of the final rules that federal agencies issue are administrative or routine in nature, and therefore unlikely to have significant federalism implications. As a result, it is not particularly surprising that agencies would not prepare federalism assessments for many of those rules. However, rules that are "major" under SBREFA and that involve or affect state and local governments would seem more likely to have federalism implications that would warrant preparation of an assessment. Yet that does not appear to have been the case. As figure 3 shows, of the 117 major final rules issued by covered agencies between April 1996 and December 1998, the preambles indicated that only 1 had a federalism assessment. The agencies had previously indicated that 37 of these rules would affect state and local governments, and the preambles to 21 of the rules indicated that they would preempt state and local laws in the event of a conflict. At least one of the four state and local government organizations that we consulted during the review said that federal agencies should have done assessments for most of these 117 major rules. In response, the agencies said that their rules did not have sufficient federalism implications to trigger the executive order's requirements. The criteria that agencies used to decide whether a rule had such implications varied. For example, one department's guidance indicated that an assessment could be required if an action would directly create significant effects on states even if the action was mandated by law or the department otherwise had no discretion. The criteria in EPA's guidance established a high threshold for what constitutes "sufficient" federalism implications—perhaps explaining why none of the agency's more than 1,900 final rules issued during the April 1996 to December 1998 time frame had a federalism assessment. For example, in order for an EPA rule to require an assessment, the agency's guidance said the rule must meet all four of the following criteria: have an "institutional" effect on the states, not just a financial effect (regardless of magnitude); change significantly the relative roles of federal and state governments in a particular program context, lead to federal control over traditional state responsibilities, or decrease the ability of states to make policy decisions with respect to their own functions; affect all or most of the states; and have a direct, causal effect on the states (i.e., not a side effect). At least one of these criteria appeared to go beyond the executive order on which it is based. Although EPA said a rule must affect all or most of the states in order to have sufficient federalism implications to warrant preparation of an assessment, Executive Order 12612 defines "state" to "refer to the States of the United States of America, individually or collectively." (Emphasis added.) EPA's guidance also said that, even if all four of these criteria are met, a rule would not require a federalism assessment if a statute mandates the action or the means to carry it out are implied by statute.
However, EPA's actions appear to be allowable because the executive order does not define what is meant by "sufficient" federalism implications, leaving that determination up to the agencies. OMB officials told us that they had taken little specific action to ensure implementation of the executive order, but said the order is considered along with other requirements as part of the regulatory review process under Executive Order 12866. They said that agencies had rarely submitted separate federalism assessments to OMB but have addressed federalism considerations, when appropriate, as a part of the cost-benefit analysis and other analytical requirements. One OMB official, the Acting Administrator, suggested that agencies may have given the order little attention in part because it was soon to be revised by Executive Order 13083. However, he also said that Executive Order 12612 had not been implemented to any significant extent by the Reagan Administration "or its successors," suggesting that the lack of implementation was unrelated to any pending revision of the order. In addition, the Acting Administrator said that the primary vehicles for improving federal-state consultation in the past 6 years have been Executive Order 12875 and UMRA. We have not examined the implementation of Executive Order 12875. However, we have examined the implementation of UMRA, and concluded that it has had little effect on agencies' rulemaking activities. Title II of UMRA is one of Congress' primary efforts to address the effects of federal agencies' rules on state and local governments. Section 202 of the act generally requires federal agencies (other than independent regulatory agencies) to prepare "written statements" containing specific information for any rule for which a notice of proposed rulemaking was published that includes a federal mandate that may result in the expenditure of $100 million or more in any 1 year by state, local, and tribal governments, in the aggregate, or the private sector. UMRA defines a "mandate" to be an "enforceable duty" that is not a condition of federal assistance and does not arise from participation in a voluntary federal program. For rules requiring a written statement, section 205 requires agencies to consider a number of regulatory alternatives and select the one that is the least costly, most cost-effective, or least burdensome and that achieves the purpose of the rule. Other sections of the act focus even more specifically on the interests of state and local representatives. For example, section 203 states that agencies must develop plans to involve small governments in the development of regulatory proposals that have a significant or unique effect on those entities. Section 204 requires agencies to develop processes to consult with representatives of state, local, and tribal governments in the development of regulatory proposals containing "significant federal intergovernmental mandates." In practice, these requirements applied to few rules. Some rules did not require written statements because their requirements were imposed as a condition of federal financial assistance or as a duty arising from participation in a voluntary program. Other rules did not result in "expenditures" of $100 million. Because no written statement was required for these rules, the requirements in section 205 regarding the identification and selection of regulatory alternatives were not applicable to these rules. Also, title II of UMRA contains exemptions that allowed agencies not to take certain actions if they determined the actions were duplicative or not "reasonably feasible." Other provisions in title II also had little effect.
During the first 2 years of UMRA's implementation, the requirement in section 204 that agencies develop an intergovernmental consultation process appears to have applied to no more than four EPA rules and no rules from other agencies. EPA generally used a consultation process that was in place before UMRA was enacted. Also, section 203 small government plans were not developed for any of the 73 final rules promulgated during this 2-year period. Officials in the four agencies that we contacted said none of their final rules had a significant or unique effect on small governments. Section 208 of UMRA requires the Director of OMB to submit an annual report to Congress on agency compliance with UMRA. The fourth such report is scheduled to be delivered within the next few weeks. In his third UMRA report, published in June 1998, the OMB Director noted that federal agencies had identified only three rules in the more than 3 years since the act was passed that affected the public sector enough to trigger the written statement requirements. Nevertheless, he said federal agencies had embraced the act's "overall philosophy," as evidenced by the range of consultative activities the report described. On its surface, H.R. 2245 contains several provisions that are similar to requirements in both Executive Order 12612 and UMRA. For example, section 7 of the bill would, if enacted, require agencies to publish "federalism impact assessments" that are somewhat similar in content to the federalism assessments in the executive order and the written statements required by UMRA. All of those assessments and statements require agencies to develop estimates of the costs attendant to the implementation of the regulation at issue. Also, both the bill and the executive order require identification of regulatory provisions that preempt state government authority or functions. As introduced, however, the bill would require a federalism impact assessment for every proposed and final rule; Congress may wish to modify the bill to allow agencies instead to publish a determination that a rule does not have federalism implications or prepare a federalism impact assessment. Neither Executive Order 12612 nor UMRA requires agencies to declare whether each of their proposed and final rules has federalism implications. As I noted previously, UMRA does not apply to most economically significant rules, and the executive order does not require agencies to publish the designated officials' federalism determinations. If the bill is modified in this manner, this requirement will be similar to a provision in the Regulatory Flexibility Act of 1980 (RFA), which requires agencies to state whether their rules have a "significant economic impact on a substantial number of small entities." Therefore, the implementation of the RFA may prove instructive as to how this portion of the bill will be implemented. For example, according to the Small Business Administration's (SBA) Office of Advocacy, a perennial problem with the implementation of the RFA has been agencies' use of "boilerplate" certifications indicating that their rules do not have a significant economic impact on a substantial number of small entities. Contributing to this problem is the fact that the RFA does not define the terms "significant economic impact" and "substantial number of small entities," and no federal agency is responsible or authorized to define the terms. As a consequence, different agencies have different interpretations of the statute. We have recommended that Congress consider giving SBA or some other entity the responsibility or authority to define key terms in the act.
Therefore, applying the lessons of the RFA to the proposed legislation, Congress may want to carefully define what it believes constitutes "federalism implications" or assign that responsibility to some other entity. Finally, I would like to briefly comment on section 6 of H.R. 2245, which says that federal agencies may not include any agency activity that is a state-administered federal grant program in their annual performance plans developed pursuant to the Government Performance and Results Act of 1993 (Results Act) "unless the performance measures for the activity are determined in cooperation with public officials." The bill defines "public officials" as elected officials of state and local governments, including certain organizations that represent those officials (e.g., the National Governors' Association and the United States Conference of Mayors). The Results Act already requires agencies developing their strategic plans to "solicit and consider the views and suggestions of those entities potentially affected by or interested in the plan." The Senate Governmental Affairs Committee report on the Results Act noted that the strategic plan "is intended to be the principal means for obtaining and reflecting, as appropriate, the views of Congress and those governmental and nongovernmental entities potentially affected by or interested in the agencies' activities." In that regard, we believe that working with state and local governments or their representative organizations to develop goals and performance measures in federal grant-in-aid programs can strengthen the intergovernmental partnerships embodied in those programs. For example, in 1996, we reported on a joint goal- and performance measure-setting effort between the federal Office of Child Support Enforcement (OCSE) and state governments. Initially, the federal-state relationship was not so cooperative. In 1994, OCSE specified the performance levels that states were expected to achieve in such areas as the establishment of paternity and the collection of child support. State program officials strongly objected to this federal mandate because they did not have an opportunity to participate in the planning process. Following these initial planning efforts, OCSE sought to obtain wider participation from program officials at the federal, state, and local government levels. OCSE also established task forces consisting of federal, state, and local officials to help focus management of the program on long-term goals. During the planning process, participants agreed that the national goals and objectives would be based on the collective suggestions of the states and that the plan's final approval would be reached through a consensus. For each goal, the participants identified interim objectives that, if achieved, would represent progress toward the stated goal. At the time of our review, OCSE and the states were also developing performance measures to identify progress toward the goals, and planned to develop performance standards to judge the quality of state performance. They created a Performance Measures Work Group to develop statistical measures for assessing state progress toward achieving national goals and objectives. OCSE also encouraged its regional staff to develop performance agreements with states, specifying both general working relationships between OCSE regional offices and state program officials and performance goals for each state.
Overall, OCSE and most state officials that we contacted said the joint planning process strengthened the federal/state partnership by enabling them to help shape the national program's long-term goals and objectives. State and local government stakeholder involvement has also been important in the development of practical and broadly accepted performance measures in other federal programs, including some block grants. We believe that this kind of intergovernmental cooperation can serve as a model for the efforts that section 6 of the Federalism Act of 1999 seeks to encourage.

Mr. Chairman, this completes my prepared statement. I would be pleased to answer any questions.

Contacts and Acknowledgment

For future contacts regarding this testimony, please contact L. Nye Stevens at (202) 512-8676 or Curtis Copeland at (202) 512-8101. Individuals making key contributions to this testimony included Elizabeth Powell, Joseph Santiago, and Alan Belkin.
Pursuant to a congressional request, GAO discussed House Resolution (H.R.) 2245, the Federalism Act of 1999, focusing on the agency rulemaking and performance measurement requirements of the bill. GAO noted that: (1) during the past 20 years, state, local, and tribal governments as well as businesses have expressed concerns about congressional and regulatory preemption of traditionally nonfederal functions and the costs of complying with federal regulations; (2) the executive and legislative branches have each attempted to respond to these concerns by issuing executive orders and enacting statutes requiring rulemaking agencies to take certain actions when they issue regulations with federalism or intergovernmental relations effects; (3) two prime examples of these responses are Executive Order 12612 and the Unfunded Mandates Reform Act of 1995 (UMRA); (4) GAO's work showed that Executive Order 12612 had relatively little visible effect on federal agencies' rulemaking actions during the period reviewed; (5) agencies covered by the order mentioned it in the preambles to about 26 percent of the 11,414 final rules they issued between April 1996 and December 1998; (6) however, mentioning the order in the preamble to a rule does not mean the agency took any substantive action; (7) the agencies usually just stated that no federalism assessment was conducted because the rules did not have federalism implications; (8) the preambles to only 5 of the 11,414 final rules that the agencies issued between April 1996 and December 1998 indicated that a federalism assessment had been done; (9) many of the final rules that federal agencies issue are administrative or routine in nature, and therefore unlikely to have significant federalism implications; (10) the criteria the agencies used to determine whether federalism assessments were needed varied among the agencies; (11) Office of Management and Budget officials told GAO that they had taken little specific action to ensure implementation of the executive order, but said the order is considered along with other requirements as part of the regulatory review process under Executive Order 12866; (12) GAO reported that requirements in title II of UMRA appeared to have had only limited direct impact on agencies' rulemaking actions in the first 2 years of the act's implementation; (13) as introduced, H.R. 2245 would require federalism impact assessments for all proposed and final rules; and (14) GAO believes that working with state and local governments or their representative organizations to develop goals and performance measures in federal grant-in-aid programs, as required by H.R. 2245, can strengthen the intergovernmental partnerships embodied in those programs.
According to the International Association of Chiefs of Police (IACP), managing officers' use of force is one of the most difficult challenges facing law enforcement agencies. Enforcing the law, protecting the public, and guarding their own safety is very difficult for officers in an environment in which violent crime is commonplace and firearms are frequently used for illegal purposes. For the agents of the Bureau of Alcohol, Tobacco and Firearms (ATF), suspected illegal firearms activities are the leading reason for initiating enforcement actions.

Over the past several years, ATF has come under public criticism and congressional scrutiny, primarily as a result of its operation at the Branch Davidians' compound in Waco, TX, and citizen accusations that ATF agents used excessive force in carrying out their enforcement responsibilities. The February 1993 operation at the Branch Davidians' compound was initiated to serve an arrest warrant on David Koresh, the Davidians' leader, and to execute a search warrant on the compound. When Koresh refused to accept the warrants, ATF tried to forcibly enter the compound using a tactic known as dynamic entry but was met with gunfire from the Branch Davidians. In the ensuing gun battle, four ATF agents and six Branch Davidians were killed.

ATF is a law enforcement agency within the Department of the Treasury with responsibilities directed toward reducing violent crime, collecting revenue, and protecting the public. ATF enforces the federal laws and regulations relating to alcohol, tobacco, firearms, explosives, and arson. Among its missions, ATF is to work directly and in cooperation with others to (1) suppress and prevent crime and violence through enforcement, regulation, and community outreach and (2) support and assist federal, state, local, and international law enforcement. To accomplish its criminal enforcement responsibilities, ATF has 24 field divisions, headed by special agents-in-charge (SAC), located throughout the United States. As of September 1995, ATF had a total of 1,944 special agents, of whom 1,777 were assigned to its field divisions. ATF's special agents are to initiate criminal investigations when notified of suspected illegal activities by such sources as informants; undercover operatives; and referrals from ATF inspectors and other federal, state, and local law enforcement agencies. Figure 1.1 is an ATF organization chart, as of January 1996, that depicts the principal units discussed in this report.

As table 1.1 shows, the vast majority of ATF's enforcement activities have been directed at suspects believed to be engaged in illegal firearms activities. Suspicion of illegal firearms activity is the principal trigger for ATF firearms investigations. On the basis of its investigations, ATF apprehends individuals whom it suspects of criminal violations. As table 1.2 shows, most of the individuals ATF arrested were suspected of violating firearms laws.

According to IACP, use of force has been construed to include a wide range of techniques used to compel compliance. Such techniques range from verbal persuasion—the lowest force level—to deadly force—the most severe force level—and everything in between, including physical force, stun guns, tear gas, batons, and other nonlethal equipment.
The variety of coercive options available to agents in a confrontational setting is often referred to as the "force continuum." According to Treasury policy on the use of force, the primary consideration in its use is the timely and effective application of the appropriate level of force required to establish and maintain lawful control. ATF training materials note that the use of force by law enforcement officers in the performance of their duties has traditionally been limited to four categories. Under these categories, use of force is allowed if necessary to
• overcome resistance to the officer's lawful commands,
• effect an arrest or detain a suspect,
• maintain custody and prevent escape, or
• protect the officer or other persons.
Furthermore, the training materials note that in determining how much force may be or should be used, the officer should consider such factors as the
• nature of the offense for which the suspect is being arrested (e.g., felony or misdemeanor);
• number of participants on each side;
• size, age, and condition of participants;
• record and/or reputation of the suspect for violence;
• use of alcohol or drugs by the suspect;
• suspect's mental or psychiatric history;
• presence of innocent bystanders; and
• availability of less violent or nonlethal weapons.

In October 1995, Treasury and the Department of Justice adopted use of deadly force guidelines that are uniform with the exception of certain agency mission-specific provisions. For example, while warning shots generally are not permitted under the Treasury and Justice policies, the U.S. Secret Service may use warning shots in exercising its protective responsibilities, the U.S. Customs Service may use warning shots on the open waters, and Justice agencies may use warning shots under certain circumstances within the prison context. In addressing the subject of nondeadly force, the Treasury and Justice uniform policies recognize that if force other than deadly force reasonably appears to be sufficient to accomplish an arrest or otherwise accomplish the law enforcement purpose, deadly force is not necessary. In commenting on the policy, both Departments committed to take all reasonable steps to prevent the need to use deadly force.

Both Departments define deadly force as the use of any force that is likely to cause death or serious physical injury. Therefore, any firearms discharge that is intended to disable a suspect is considered to be deadly force. However, the use of other weapons, including those considered nonlethal, could be construed as a law enforcement officer's having used deadly force, depending on the manner in which the weapon was used. For example, hitting a suspect in the head with a baton is considered to be the use of deadly force.

In August 1995, the Chairman of the Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations, asked us to (1) identify and describe ATF's policies for the use of deadly force; (2) determine how ATF conveys its policies to its agents; (3) determine the reasons for and the extent to which ATF uses dynamic entry and the equipment ATF uses to accomplish these entries; and (4) determine whether ATF has complied with its procedures for investigating shooting and alleged excessive use-of-force incidents.
While we determined whether ATF complied with procedures for investigating shooting and alleged excessive use-of-force incidents on the basis of a review of case file documents, we did not evaluate the quality and adequacy of ATF's investigations. In addition, except for a limited check discussed later, we did not verify whether all shooting and alleged excessive force incidents were reported or whether all reported allegations of excessive force were investigated. The Chairman also asked us to compare how ATF addresses the above issues with the way that Justice's Federal Bureau of Investigation (FBI) and the Drug Enforcement Administration (DEA) address them. In addition, we were asked to determine (1) whether ATF applies lessons learned from its reviews of shootings and allegations of excessive use of force and (2) what authority ATF has to take adverse personnel actions against agents, particularly in connection with excessive use-of-force incidents.

To identify and describe ATF's policies on the use of deadly force and compare them with DEA's and FBI's, we reviewed the pertinent Treasury, Justice, ATF, DEA, and FBI policies and accompanying commentaries on the use of deadly force that were available. Also, we reviewed certain relevant U.S. Supreme Court and lower court decisions involving the use of deadly force. We also interviewed appropriate ATF, DEA, and FBI officials concerning their policies on the use of deadly force.

To determine how ATF conveys its deadly force policies to its agents, we (1) visited the Federal Law Enforcement Training Center (FLETC) and ATF's National Academy in Glynco, GA; (2) observed training facilities, equipment, and ongoing classes—including Basic Marksmanship, Judgment Pistol Shooting, Situational Response, Non-Lethal Control Techniques, and Tactical Operations Planning; and (3) spoke with officials about the overall training provided to new agents in general and the use-of-force and firearms training in particular. We also reviewed teaching guides and student training materials for FLETC's Criminal Investigator Training Course and ATF's New Agent Course. At the time of our visit to FLETC, ATF did not have any students enrolled in criminal investigator training for new agents, although there were new agent classes in progress. However, we observed a class of new ATF agents attending the National Academy.

We also met with FBI and DEA training officials to discuss their new agent training courses and use of deadly force training; toured the training academy at Quantico, VA, where we observed the training facilities and equipment; participated in a demonstration of the Firearms Training System; and reviewed summaries of course materials and instructor teaching guides for training that discussed use of force. We compared use-of-force course descriptions and the types of training provided to new ATF agents with course materials and with DEA and FBI training officials' descriptions of the types of training provided to new DEA and FBI agents. We also reviewed (1) ATF's manual for its new agent On-the-Job Training Course and identified training objectives dealing with use-of-force issues and (2) course training materials provided to Special Response Teams (SRT)—ATF's version of special weapons and tactics units—on use of force and deadly force. We identified and reviewed ATF's policies requiring use-of-force discussions during quarterly firearms training and during tactical operational planning and operation briefings.
On September 28, 1995, October 13, 1995, and December 11, 1995, respectively, we observed quarterly firearms training at three ATF field divisions—Baltimore; Washington, D.C.; and Los Angeles—to assure ourselves that the required use-of-force discussion took place. We also interviewed the divisions' Firearms Instructor Coordinators concerning firearms training and use-of-force training and reviewed their records documenting use-of-force discussions at prior quarters' training sessions. At the firearms qualification sessions, we also judgmentally selected and spoke with some attending ATF agents to determine whether use-of-force discussions were a regular part of tactical operation planning and operation briefings. Also, in these divisions, we interviewed Assistant Special Agents-in-Charge responsible for the SRT, SRT leaders, and group supervisors to determine whether use-of-force discussions were a part of tactical operation planning and operation briefings. Finally, we compared FLETC and ATF training materials with relevant Treasury/ATF use-of-force and deadly force policies to determine if the training materials complied with the policies. While our review analyzed whether the agencies' training reflected applicable use-of-force policies, we did not assess the effectiveness of the agencies' training. In addition, we did not review new agent attendance records while at FLETC or the ATF National Academy, nor did we review agent personnel training records while we were at the field divisions.

To determine the reasons for and the extent to which ATF uses dynamic entry and the equipment used in such entries, we reviewed ATF policies, procedures, and documents regarding operational planning, dynamic entries, equipment, and SRT units. Due to time constraints, we did not review a sample of all ATF enforcement actions conducted by all ATF agents to determine how often dynamic entry was used. Because (1) SRTs are to be deployed to execute ATF's higher risk search and arrest warrants and have access to all of the equipment available to ATF agents as well as additional specialized equipment and (2) ATF maintains more thorough and readily available data on SRT operations, including tactics and equipment used, than it does on its other enforcement groups, we focused our review and analysis of ATF's use of dynamic entry and related equipment on operations involving SRT deployments. However, whenever possible, we discussed the use of dynamic entries and related equipment by non-SRT agents with headquarters and division officials.

We analyzed all SRT deployment activation reports for fiscal year 1995 to identify the reason for the deployment, the extent to which the deployment used the dynamic entry tactic to enter a building, whether the deployment resulted in force being used (e.g., shootings or physical force), and whether the SRT used specialized equipment. We also obtained an ATF statistical compilation of fiscal years 1993 and 1994 SRT activation data, which showed the number of activations, the reason for deployments, deployments that resulted in force being used (e.g., shootings), and whether special equipment was used. Furthermore, we reviewed all shooting incident reports for fiscal years 1990 through 1995 and determined the number of SRT incidents in which ATF agents fired their weapons at suspects.
We also interviewed ATF officials in the Special Operations Division to determine ATF's practices regarding the use of dynamic entries and other tactics as well as the type of equipment used for these operations. In addition, we discussed the use of dynamic entries with ATF division officials, SRT team leaders, Tactical Operations Officers, and agents at three ATF field divisions—Washington, D.C.; Baltimore; and Los Angeles. We reviewed training materials and observed some of the training provided to new agents at FLETC regarding dynamic entries and other tactics and related equipment. We also reviewed the materials used to train new SRT members during their initial 2-week training session at Fort McClellan, AL. We analyzed all SRT training reports for fiscal year 1995 to determine the type of in-service training received, the equipment used during training, and the sources that provided instruction. In addition, we observed the Washington Division SRT's fourth quarter 1995 training to determine what tactical training was received.

We spoke with officials from ATF's Property and Fleet Management Section and Enforcement Support Branch regarding ATF policies and controls on the equipment available for high-risk operations. We also observed the equipment maintained and issued by the Enforcement Support Branch in Rockville, MD, and the SRT equipment at the three field divisions we visited. In addition, at the Washington, D.C., and Los Angeles divisions, we obtained listings of the SRT equipment and vehicles as well as certain firearms, breaching tools, and other tactical equipment available for dynamic entries. Although we observed the SRT equipment maintained in Baltimore, we did not obtain an equipment listing because, at the time of our visit, that SRT had recently been merged into the Washington Division's SRT. We discussed the use and sources of this equipment with the Tactical Operations Officers, the SRT team leaders, and several agents in each of the three division offices we visited. We also obtained and reviewed comprehensive listings of rifles and tactical carbines, SRT and armored vehicles, and aircraft in inventory throughout ATF from ATF's Inventory Tracking and Equipment Management System.

In addition, we visited the FBI and DEA Washington field divisions to compare ATF's use of dynamic entries and equipment with that of other federal law enforcement agencies. At each division, we interviewed division officials, including entry team leaders, to determine their use of dynamic entries and other tactics, and we observed the equipment used by the FBI Special Weapons and Tactics (SWAT) and DEA entry teams during high-risk operations. On the basis of the standardized training provided for high-risk warrant service, both FBI and DEA officials opined that their divisions' use of dynamic entry and related equipment was generally representative of other field divisions in their respective agencies. We also reviewed the literature available from IACP and other law enforcement experts regarding the use of equipment, dynamic entries, and other tactics by law enforcement agencies.
To determine whether ATF complied with its procedures for investigating shooting and use-of-force incidents, we obtained and reviewed the following information: (1) procedures for reporting, investigating, and reviewing shooting and misconduct incidents; (2) policies on administering adverse personnel actions against agents found to have violated use-of-force policies; (3) policies on protecting complainants from retaliation; (4) policies on ensuring that lessons learned from investigations are transmitted to agents; and (5) investigative guidelines and/or standards recommended by IACP, the President's Council on Integrity and Efficiency (PCIE), and the Commission on Accreditation for Law Enforcement Agencies. We obtained similar information, where applicable, from DEA and FBI. We also identified and reviewed legislation, regulations, and court cases related to the use of force by law enforcement agencies.

We identified and reviewed files related to the investigation of reported shooting and use of excessive force incidents during fiscal years 1990 through 1995. For shooting incidents, we identified and reviewed 38 of the 39 incidents in which ATF agents intentionally discharged their weapons at suspects. For use of excessive force incidents, we identified and reviewed 92 investigations in three categories of alleged agent misconduct: (1) misconduct during the execution of a search warrant, (2) violation of a person's civil rights, and (3) assault by an agent on a person. Because ATF does not maintain a separate category for use of excessive force, we judgmentally selected these categories following consultations with ATF officials and a review of misconduct incident categories. The selection was based on the likelihood that these categories would include most, if not all, incidents of alleged use of excessive force. Of the 92 investigations, we found that 25 involved allegations of the physical abuse of persons and/or property.

To place the shooting and use of excessive force incidents in perspective, we obtained statistics related to ATF enforcement actions, such as arrests and SRT deployments. At the request of the Subcommittee, we also obtained shooting incident and enforcement action data from DEA and FBI. However, it should be emphasized that these data were not comparable to ATF's, given the agencies' differences in missions, personnel levels, and some data definitions. These data are presented in appendix V. As agreed with the Subcommittee, we did not verify the accuracy of ATF's, DEA's, or FBI's statistical data because of time limitations.

To determine ATF's compliance with its investigative procedures, we reviewed ATF's investigative files for the 38 intentional shooting incidents that were reported to and investigated by ATF from fiscal years 1990 through 1995 as well as for the 25 alleged excessive force incidents we selected. We based our compliance determination on whether the information in the files indicated that the investigative procedures had been followed. We looked for the required information on (1) the incident, such as whether it resulted in injuries or the type of law enforcement activity that resulted in the incident, and (2) the investigation, such as who conducted the investigation, who reviewed it, the types of information the investigation obtained and analyzed, and the outcome of the investigation. Where documentation was not initially found, we obtained documents and/or explanations from ATF officials and considered them in our determination.
Due to time and methodological constraints, we did not evaluate the quality and adequacy of the shooting and use of excessive force investigations or the validity of their conclusions. We also did not evaluate the circumstances, such as the law enforcement actions, that resulted in the shooting or use of excessive force incidents. In addition, we did not verify the accuracy of the information in ATF's files. Finally, we did not verify whether all shooting and alleged excessive force incidents were reported or whether all reported allegations of excessive force were investigated. We did, however, do a limited check related to this matter by searching a computerized news database and contacting two organizations with possible knowledge of some incidents. The results of this limited check are discussed in chapter 5. We discussed issues related to our review, including the use of excessive force, with officials from (1) ATF's Office of Inspection, Office of Chief Counsel, and Office of Enforcement; (2) DEA's Office of Inspections; (3) the FBI's Office of Inspection; and (4) organizations that monitor law enforcement practices.

To determine whether, and how, ATF applies lessons learned from its investigations, we (1) identified and reviewed the relevant sections in ATF's investigative procedures; (2) obtained from ATF and reviewed examples of lessons learned being implemented; (3) reviewed ATF's October 1995 report on the actions taken in response to the lessons learned from the Waco operation; and (4) discussed related issues with cognizant ATF officials, including the Associate Director for Enforcement. To determine ATF's authority for administering adverse actions against its personnel—including managers who perform poorly—we (1) obtained and reviewed the relevant ATF adverse action orders, (2) identified examples of personnel actions from our review of ATF's investigative files, and (3) discussed adverse action issues with staff from ATF's Employee Labor Relations Branch (ELRB) and the chairman of the unit charged with reviewing incidents that may result in adverse action being taken against ATF personnel. We also obtained relevant documentation from DEA and FBI and compared it with ATF's to identify any similarities and differences.

Our review was conducted between August 1995 and January 1996 in accordance with generally accepted government auditing standards. We provided drafts of this report to the Secretary of the Treasury and the Attorney General for comment. Responsible Treasury and Justice officials provided oral comments at separate meetings on March 1, 1996. At those meetings, the Senior Advisor to the Under Secretary of the Treasury for Enforcement and ATF officials provided Treasury's comments, and the Director of the Audit Liaison Office under the Assistant Attorney General for Administration provided Justice's comments. Also present at the Justice meeting were officials from the Office of the Attorney General, the Office of the Deputy Attorney General, the Criminal Division, DEA, and FBI. The Treasury and Justice officials either characterized the report as balanced, accurate, and thorough or had no comments. They provided some technical comments that we have incorporated in this report, where appropriate.

State rules and Supreme Court guidance on the use of deadly force have been evolving for a number of years.
Approaches among the states on the use of deadly force have ranged from those that place an emphasis on the apprehension of a fleeing felon to those that permit such force against dangerous suspects but with certain qualifications. Within this context, federal law enforcement agencies have, over the years, adopted policies to govern their employees' use of deadly force. In October 1995, Treasury and Justice adopted uniform policies on the use of deadly force. These uniform policies, like those they replaced, were adopted to reflect applicable Supreme Court guidance. Treasury's and Justice's commentaries, in general, explain that their policies were formulated to be more restrictive on the law enforcement officer than constitutional or other legal limits. ATF's 1988 use of deadly force policy, which was in effect before the issuance of the 1995 Treasury policy, was, with two distinctions as discussed in this chapter, consistent with the new Treasury policy. In addition, the 1988 ATF policy was, with three distinctions as discussed in this chapter, consistent with prior DEA and FBI policies.

By 1985, the rules in the states governing the use of deadly force by law enforcement officers varied. These rules can generally be grouped into three categories: (1) the common-law rule, (2) a modified common-law approach, and (3) the Model Penal Code approach.

Many states followed something similar to the English common-law rule on the use of deadly force, which existed at the time of this country's founding. Generally, deadly force could be used by a law enforcement officer if necessary to arrest a felony suspect. Because the type of felony involved is not taken into account, this rule is generally referred to as the "fleeing felon" rule. An officer could use deadly force when he reasonably believed that he was justified in arresting an individual for a felony as long as the officer also reasonably believed that such force was necessary to protect himself or prevent escape. To a great extent, the rationale behind the fleeing felon rule was based on the fact that common-law felonies were punishable by death, and the use of deadly force was seen as merely accelerating the penal process, albeit without providing a trial. For example, the 1982 Tennessee statute, which was found unconstitutional in a landmark Supreme Court decision discussed later, was based on the fleeing felon rule. The statute provided, in part, that "[i]f, after notice of the intention to arrest the defendant, he either flees or forcibly resists, the officer may use all the necessary means to effect the arrest."

States following a modified common-law approach generally limited the use of deadly force to the apprehension of suspects who had committed a "forcible felony." One state statute, for example, defined the term as follows: "'Forcible felony' means treason, murder, voluntary manslaughter, aggravated criminal sexual assault, criminal sexual assault, robbery, burglary, arson, kidnapping, aggravated battery and any other felony which involves the use or threat of physical force or violence against any individual."

Legislatures in other states abandoned the common-law rule for some form of the Model Penal Code approach, which imposes several qualifications on the use of deadly force. The Model Penal Code, as formulated by the American Law Institute in 1962, generally permits the use of deadly force only when the crime for which the arrest is made involves conduct including the use or threatened use of deadly force or when there is a substantial risk that the person to be arrested will cause death or serious bodily harm if his apprehension is delayed.
More specifically, under the Model Penal Code approach, the use of deadly force is not justified unless (1) the arrest is for a felony, (2) the actor effecting the arrest is a peace officer or is assisting a peace officer, (3) the actor believes such force creates no substantial risk of injury to innocent persons, and (4) the actor believes that the felony included the use or threatened use of deadly force or that there is a substantial risk that the suspect will cause death or serious bodily harm if apprehension is delayed. The Supreme Court noted, in 1985, that while there was not a constant or overwhelming trend away from the common-law rule, a long-term movement had been away from the emphasis that deadly force may be used against any fleeing felon.

In the 1985 Tennessee v. Garner decision and the 1989 Graham v. Connor decision, the Supreme Court addressed the issue of when police may reasonably use deadly force and provided some clarification as to how courts should examine allegations that law enforcement officers have used excessive force. In Garner, a police officer shot and killed Edward Garner to prevent his escape from the scene of a burglary, even though Garner did not appear to be armed. Garner, after being told to halt, tried to climb over a fence at night in the backyard of a house he was suspected of burglarizing. With the aid of a flashlight, the officer was able to see Garner's face and hands. Even though the officer saw no sign of a weapon, he shot Garner in order to prevent his escape. The officer argued that his actions were reasonable under a Tennessee statute that provided that a law enforcement officer could use any means necessary to make an arrest. The Supreme Court held that the statute was unconstitutional insofar as it authorized the use of deadly force against an apparently unarmed, nondangerous fleeing suspect, stating:

"Where the officer has probable cause to believe that the suspect poses a threat of serious physical harm, either to the officer or to others, it is not constitutionally unreasonable to prevent escape by using deadly force. Thus, if the suspect threatens the officer with a weapon or there is probable cause to believe that he has committed a crime involving the infliction or threatened infliction of serious physical harm, deadly force may be used if necessary to prevent escape, and if, where feasible, some warning has been given."

In the 1989 Graham decision, the Supreme Court provided some clarification as to how courts should examine allegations that law enforcement officers have used excessive force. In Graham, a diabetic felt the onset of an insulin reaction and drove with a friend to a convenience store to purchase orange juice. Upon entering the store and seeing the number of people ahead of him at the checkout line, Graham hurried out of the store to go to a friend's house instead. A police officer became suspicious, followed Graham's car, and made an investigative stop. Backup police officers arrived, handcuffed Graham, and ignored Graham's attempts to explain and treat his diabetic condition. During the incident, Graham sustained various physical injuries and was thrown headfirst into the police car, and the officers refused to let him have some orange juice as a remedy for his condition. Graham was later released when the officers learned that nothing had happened at the store. Graham brought an action against the officers involved in the incident, alleging that they had used excessive force in making the investigatory stop. The District Court applied a four-factor test and ruled in the officers' favor.
The Court of Appeals for the Fourth Circuit, without attempting to identify the specific constitutional provision under which Graham's claim arose, endorsed the four-factor test applied by the District Court and affirmed the District Court decision. The Supreme Court vacated that judgment, stating that its decision made

"explicit what was implicit in the Garner analysis—that all claims alleging that law enforcement officers have used excessive force—deadly or not—in the course of an arrest, investigatory stop, or other seizure of a free citizen should be analyzed under the Fourth Amendment's 'reasonableness' standard . . . ."

The Court explained that determining whether the force used to effect a particular seizure is "reasonable" under the Fourth Amendment requires a careful balancing of "the nature and quality of the intrusion on the individual's Fourth Amendment interests against the countervailing governmental interests at stake." While recognizing that the test of reasonableness under the Fourth Amendment is not capable of precise definition or mechanical application, the Court explained that its proper application requires careful attention to the facts and circumstances of each particular case, such as the severity of the crime at issue, whether the suspect poses an immediate threat to the safety of the officers or others, and whether the suspect is actively resisting or attempting to evade arrest by flight. Among other things, the Court noted that the reasonableness of a particular use of force must be judged from the perspective of a reasonable officer on the scene, rather than with the 20/20 vision of hindsight. The Court further noted that the calculus of reasonableness must allow for the fact that police officers are often forced to make split-second judgments—in circumstances that are tense, uncertain, and rapidly evolving—about the amount of force that may be necessary in a particular situation.

In October 1995, Treasury and Justice adopted use of deadly force policies to standardize the various policies their component agencies had adopted over the years. The policies are uniform with the exception of certain agency mission-specific provisions covering, for example, Justice's prisoner-related responsibilities. Justice's Resolution 14, which created the Justice uniform policy, notes that in view of Supreme Court decisions addressing constitutional restrictions on the use of deadly force, Justice's investigative agencies have, over the years, adopted policies to govern their employees' use of deadly force, albeit in a manner that was not standardized. Both Justice and Treasury note in their commentaries that the policies are intended to maintain uniformity among their various respective departmental components and to achieve uniform standards and training with respect to the use of deadly force. While components may develop and conduct their own training on the use of deadly force, the commentaries state that the new uniform policies govern the use of deadly force under all circumstances.

The Justice and Treasury uniform policies provide that their respective officers may use deadly force only when necessary, that is, when the officer has a reasonable belief that the subject of such force poses an imminent danger of death or serious physical injury to the officer or another person. The Treasury and Justice commentaries, in general, explain that their policies were formulated to be more restrictive on the law enforcement officer than constitutional or other legal limits.
Following are Treasury’s and Justice’s 1995 policies: • Treasury: “Treasury Law Enforcement Officers may use deadly force only when necessary, that is, when the officer has a reasonable belief that the subject of such force poses an imminent danger of death or serious physical injury to the officer or to another person.”• Justice: “Law enforcement officers and correctional officers of the Department of Justice may use deadly force only when necessary, that is, when the officer has a reasonable belief that the subject of such force poses an imminent danger of death or serious physical injury to the officer or to another person.” Accompanying Treasury and Justice commentary provide that “probable cause,” “reason to believe,” or a “reasonable belief,” for purposes of their policies, mean facts and circumstances, including reasonable inferences, known to the officer at the time of the use of deadly force, that would cause a reasonable officer to conclude that the point at issue is probably true. The commentaries also recognize that the reasonableness of a belief or decision must be viewed from the perspective of the officer on the scene, who may often be forced to make split-second decisions in circumstances that are tense, unpredictable, and rapidly evolving. Justice and Treasury commentaries also state that as used in their respective policies, “imminent” has a broader meaning than “immediate” or “instantaneous.” The commentaries further state that the concept of “imminent” should be understood to be elastic, that is, involving a period of time dependent on the circumstances, rather than the fixed point of time implicit in the concept of “immediate” or “instantaneous.” Thus, a subject may pose an imminent danger even if he or she is not at that very moment pointing a weapon at the officer if, for example, he or she has a weapon within reach or is running for cover carrying a weapon or running to a place where the officer has reason to believe a weapon is available. In addition, the policies provide that if force other than deadly force appears to be sufficient to accomplish an arrest or otherwise accomplish the law enforcement purpose, deadly force is not necessary. The commentaries further provide that if force less than deadly force could reasonably be expected to accomplish the same end, such as the arrest of a dangerous fleeing subject, without unreasonably increasing the danger to the officer or others, then it must be used. “A firearm may be discharged when the special agent believes that there is no other means of control and perceives an imminent threat of death or serious bodily injury to himself/herself or other innocent persons.” The 1988 ATF and 1995 Treasury policies are consistent in that both policies generally authorize the use of such force only when the law enforcement officer reasonably believes or perceives that there is an imminent threat or danger of death or serious physical injury to the officer or another person. Moreover, both the 1988 ATF and 1995 Treasury policies limit the degree of force authorized to that which is needed to accomplish the law enforcement purpose. More specifically, the 1988 ATF policy provided that the degree of force authorized was limited to that which was necessary to establish lawful order and control in a timely manner, and the 1995 Treasury policy provides that if force other than deadly force appears to be sufficient to accomplish an arrest or otherwise accomplish the law enforcement purpose, deadly force is not necessary. 
One distinction between the policies is that the 1995 Treasury policy refers to the use of "deadly force" while the 1988 ATF policy referred more specifically only to the use of a "firearm." With respect to the 1988 ATF policy, an ATF official noted that until 1995, firearms were the only equipment issued to ATF agents that could inflict deadly force. A second distinction is that while the 1995 Treasury policy allows for the use of deadly force only when the law enforcement officer has a "reasonable belief" that there is an imminent threat of death or serious physical injury, the 1988 ATF policy allowed for the use of such force when the special agent "perceives" an imminent threat of death or serious physical injury. An ATF official noted that, under the 1988 policy, the special agent's perception of an imminent threat would have been within the context of additional policy language which provided that "the authority to bear firearms carries with it an obligation to exercise discipline, restraint, and good judgement."

The 1988 ATF use of deadly force policy was, with three distinctions, consistent with the DEA and FBI policies in effect prior to the issuance of the 1995 uniform policies. Following are DEA's and FBI's prior policies:
• DEA: "Agents are not to shoot any person except in self-defense, when they reasonably believe they or another person are in danger of death or grievous bodily harm."
• FBI: "Agents are not to use deadly force against any person except as necessary in self-defense or the defense of another, when they have reason to believe they or another are in danger of death or grievous bodily harm."

The prior ATF policy was consistent with the prior DEA and FBI policies in that all three generally authorized the use of deadly force only when the agent reasonably believed or perceived that there was a threat or danger of death or serious bodily harm to the agent or another person. One distinction among the three policies was that the ATF policy alone provided the additional qualifying restriction that such a threat be "imminent." In addition, the aforementioned "firearm/deadly force" and "reasonably believes/perceives" distinctions also existed among the prior policies. More specifically, (1) the ATF and DEA policies referred to the shooting of a "firearm" while the FBI policy used the term "deadly force" and (2) the ATF policy used the term "perceives" while the DEA and FBI policies used the terms "reasonably believe" and "reason to believe," respectively.

The policies described in this chapter contain additional guidance regarding specific situations, such as fleeing persons, escaping prisoners, verbal warnings, warning shots, and shooting at or from vehicles. This additional information can be found in appendix I.

As Supreme Court guidance and state rules on the use of deadly force have evolved over the years, so have the policies of federal agencies. Moreover, enforcing the law, protecting the public, and guarding their own safety is a very difficult task for officers, who are often forced to make split-second judgments in circumstances that have been described as tense, uncertain, and rapidly evolving. Recently, Treasury and Justice adopted uniform policies to standardize the policies of their component agencies. Since 1988, ATF has maintained a policy that was, with three distinctions as discussed in this chapter, consistent with prior FBI and DEA policies.
ATF’s 1988 policy was also, with two distinctions as discussed in this chapter, consistent with Treasury’s current uniform policy. Use of deadly force training provided new ATF agents at FLETC and the ATF National Academy reflected the ATF/Treasury deadly force policy. This policy is also to be reiterated to new agents during their probationary period when they receive on-the-job training (OJT). Furthermore, the types of deadly force training new ATF agents received at FLETC and the ATF National Academy were consistent with the types of training provided new FBI and DEA agents. Moreover, ATF policy requires that ATF agents be reminded of the deadly force policy at least quarterly throughout their careers. The first year with ATF, new agents are to receive about 17 weeks of formal training—about 8 weeks in general criminal investigator skills and techniques at FLETC and 9 weeks in ATF-specific training at the National Academy. In addition, agents also participate in ATF’s OJT for new agents. FLETC requires that all students who attend the basic criminal investigator course be trained in Treasury’s/FLETC’s Use-of-Force Policy and Firearms Policy, including deadly force. Because Treasury’s use-of-force and deadly force policies are applicable to all of its bureaus, the use-of-force policies taught at FLETC are generally consistent with ATF’s policies. Once trained on the policies and tested on their knowledge of them, students are required to demonstrate their knowledge and apply the policies where applicable throughout their training. All new ATF agents are required to attend the FLETC’s Criminal Investigator Training Program (CITP). This program, which in fiscal year 1996 is to be expanded to approximately 9 weeks from slightly over 8 weeks in fiscal year 1995, provides basic training in a broad range of skills that criminal investigators require. Among the more than 70 course topics presented are interviewing, case management, surveillance, undercover operations, crime scene investigation, fingerprints, constitutional law, court testimony, and search and seizure. All Treasury bureaus’ and many non-Treasury agencies are to send their new agents to FLETC for basic criminal investigator training. CITP training consists of three methods of presentation— classroom/lecture; laboratory, where students practice skills under an instructor’s guidance; and practical exercises, where students participate in a related law enforcement scenario and demonstrate law enforcement skills. Students are to receive over 175 hours of training in the classroom/lecture, 117 hours of laboratory work, and 39 hours of practical exercises. They are to be graded in both lecture material and practical exercises. During CITP, students are to be given 5 written examinations on which they must score at least 70 percent and satisfactorily complete all required tasks during the practical exercises. According to FLETC officials, in about 1990, the FLETC Use-of-Force Oversight Committee developed a use-of-force continuum model, consistent with Treasury policies, which has been used to train all students. Furthermore, the Committee recommended and FLETC agreed to integrate use-of-force issues, where applicable, into all FLETC courses. As a result, FLETC provides a 2-hour course on Firearms Policy during the first week of training that presents, among other things, the basic concepts in the use of force, including deadly force, and introduces students to FLETC’s Use-of-Force Model (discussed below). 
Among the performance objectives of this course are that the student is to be able to (1) identify basic principles governing the use of force, (2) identify and apply the appropriate force, and (3) identify and apply the firearms policy and guidelines to hypothetical situations and practical exercises that are given throughout the training program. The FLETC Use-of-Force Model is composed of five color-coded levels of force designed to correspond to officers' perceptions of the level of threat with which they are confronted. The model describes the progression or de-escalation of force on the basis of the demonstrated level of compliance or resistance from a subject. Students are shown a video illustrating a situation that poses various levels of threat and emphasizes how threats in real-life situations can escalate and de-escalate from one level to another. Table 3.1 shows the levels of threat and the corresponding force represented in the Use-of-Force Model. In addition to its presentation in training courses, FLETC officials said that the Use-of-Force Model is prominently displayed in all classrooms and throughout FLETC hallways. We observed that the Use-of-Force Model was displayed in the FLETC classrooms we visited as well as in hallways, firing ranges, and the cafeteria (see fig. 3.1).

Our review of FLETC CITP course materials identified seven other courses (besides the Firearms Policy course) in which use-of-force topics were presented in varying amounts. These courses were (1) Detention and Arrest, (2) Execution of a Search Warrant, (3) Judgment Pistol Shooting, (4) Situational Response, (5) Introduction to Physical Techniques, (6) Non-Lethal Control Techniques, and (7) The Removal and Positioning for Transportation of Reluctant Suspects. These courses contain components whose objectives are to train students in identifying threats and use-of-force concepts, applying the proper use of force to the threat, and honing judgmental skills in applying the various levels of force, including deadly force.

For example, in the Detention and Arrest course (10 hours), terms used in the use-of-force policy are to be defined, Supreme Court rulings on deadly force are to be discussed, and the FLETC Use-of-Force Model is to be presented. One of the objectives of the course is to train students to recognize the degree of force, which may include deadly force, that may be used to effect an arrest, according to Treasury/FLETC Firearms Policy. In Judgment Pistol Shooting (3 hours), students are to use a weapon that has been altered to shoot a laser beam. They are confronted by realistic video scenarios on a giant screen that require them to use proper judgment in making shoot or not-to-shoot decisions. Students are required to identify the elements of jeopardy with which they are confronted and to provide a rational explanation in each instance of questionable judgment. Failure to respond properly to the video could result in the video perpetrator's "killing" the agent or another person. To successfully pass the course, students must score 100 percent in judgment and 70 percent in shooting accuracy. In Situational Response (2 hours), students are to be given a scripted scenario in which they are placed into a situation and have to react to the threat posed by an instructor/role-player. Both students and role-players are to wear protective clothing and have weapons that fire paint bullets (commonly referred to as simunitions), which mark the target they hit.
Students have to react to shoot and not-to-shoot situations and demonstrate the application of the deadly force policy. In Non-Lethal Control Techniques (30 hours), students are to be provided the basic skills required to control and arrest both a compliant and a noncompliant suspect without using deadly force. As part of the training, students are required to assess the threat and resistance level of the suspect and respond with the correct level of force and control as required by the FLETC Use-of-Force Model.

ATF students who successfully complete CITP are also required to attend New Agent Training at ATF's National Academy, which is located on the FLETC campus. New Agent Training lasts 9 weeks, focuses on the laws, policies, procedures, and specialized investigative techniques that are specific to ATF, and is designed to orient new agents to their roles as special agents. Our review of course materials identified three courses that address ATF's use of deadly force policy: (1) Firearms Usage Policy, (2) Situational Response for ATF New Agent Training, and (3) Tactical Operations Planning. A portion of the Firearms Usage Policy course is to be devoted to reiterating ATF's use-of-force and deadly force policies and the authority and limitations that agents bear in exercising the use of force. In Situational Response for ATF New Agent Training (4 hours), students are to work in teams of two or more with simunitions and participate in seven realistic scenarios. These scenarios are designed from the real-life experiences of ATF agents to challenge the students' ability to make proper decisions regarding the use of force, tactics, the use of cover, and, if appropriate, marksmanship. In this course, emphasis is placed on resolving the exercises with the use of surprise, speed, and, if necessary, violence without the use of deadly force. Among the objectives of the course are to allow the students to (1) demonstrate the ability to make proper deadly force decisions, (2) utilize the principles of tactics to gain control of situations and to avoid shootouts, and (3) demonstrate the ability to articulate a rational explanation of shooting decisions. In Tactical Operations Planning (2 hours), students are to compile the necessary intelligence to develop and execute tactical plans relating to search warrants, high-risk search warrants, arrests, and undercover operations. As part of the training in operational briefings, students are to be taught that the Treasury/ATF Firearms Policy is required to be presented before the execution of tactical plans.

During their first year with ATF, new agents are supposed to complete phase 1 of the training program. For phase 1, new agents, in addition to attending CITP and New Agent Training, are to continue their training at their post of duty under the guidance of an experienced agent who has been designated as an OJT instructor. The purpose of OJT is to acquaint the new agent with applicable policies and procedures and to expose the trainee to various investigative activities. During this period, trainees are expected to display the appropriate skill, knowledge, and judgment needed in situations related to the training objectives, such as arrest situations. For example, during OJT on arrest procedures, students are expected to understand the limitations on the use of force and the restrictions and limitations on using firearms when making an arrest.
The student is also to understand and demonstrate proper custody and control of subjects.

The types of training provided to new ATF agents in fiscal year 1995 to introduce them to and train them in the use-of-force and deadly force policies were consistent with the types of training provided to new DEA and FBI agents. Each agency provided new agent trainees with an initial 2-hour classroom lecture and discussion describing, with examples, the agency's use-of-force/deadly force policy within the first week of training. (App. II provides FLETC and FBI excerpts from nine training scenarios in which force was used and the rationale for whether the force used was appropriate.) Thereafter, each agency employed a building-block approach that integrates use-of-force/deadly force issues into other segments of the training in which the use of force could be a relevant issue, such as physical control techniques, arrests, and search and seizure training. Each of the agencies employed training techniques such as practical exercises using role-playing, simunitions exercises, and firearms judgment exercises that use realistic video scenarios requiring shoot or not-to-shoot decisions. Furthermore, each agency trained its new agents in recognizing the perceived level of threat they face and in responding to it with an appropriate level of force. FBI training officials stated that DEA and FBI have similar use-of-force training programs for new agents. One official noted that although the exact language of their policies might be somewhat different (the Treasury and Justice policies applicable to DEA and FBI were revised in October 1995 and are now generally uniform for DEA and FBI), DEA and FBI interpret and apply their use-of-force policies almost identically in training programs. This official said that he instructs on legal issues for the use-of-force policy course for FBI's new agents, has taught DEA's new agents as well, and has also provided training assistance at FLETC.

Even after the training of new agents is completed and agents are given full special agent responsibilities, they are to be frequently exposed to the use-of-force and deadly force policies. ATF policy requires that these policies be reiterated during the planning process for tactical operations and quarterly during firearms requalification training. For over a decade, ATF policy has required that for every search warrant obtained, a plan is to be developed to execute the warrant and that all persons participating in the warrant's execution are to be briefed on the plan. Moreover, the policy requires that every person participating in the plan, especially those who are not Treasury enforcement officers, be advised of Treasury's policy on the use of firearms. A January 27, 1995, ATF policy brief stated that due to the increase in violence encountered by agents during the execution of search warrants, arrest warrants, and undercover operations, special agents planning to execute such operations are required to prepare an operational plan. The ATF guidance for operational plans stated that "the use of a well written operational plan, in concert with a thorough briefing, substantially enhances the safety of the special agents, public, and suspects." ATF policy requires that all enforcement officers involved in the operation be provided a copy of the operational plan. Among the issues to be discussed at the operational plan briefing is ATF's firearms policy on the use of deadly force.
The plan, which is to be prepared on a standardized form, contains a block that is to be checked when the policy is discussed. ATF agents with whom we spoke at the Washington, Los Angeles, and Baltimore divisions stated that the firearms policy on use of deadly force was reiterated before all operations. ATF's firearms policy requires all special agents to qualify in marksmanship with their primary duty firearm each quarter. Agents who fail to meet minimum qualification requirements are not to be certified to use that weapon until they requalify. Because the requirement applies to the primary duty weapon, each agent, even those in supervisory positions at headquarters, must attend and qualify each quarter. Furthermore, ATF policy requires that as part of each firearms training session, no less than 1 hour of instruction is to be provided on ATF firearms/ammunition standards and procedures and the use-of-force policy. Each Firearms Instructor Coordinator is required to document the training provided and certify that the firearms policy instruction was provided. We observed one quarterly firearms training session at each of the Washington, Los Angeles, and Baltimore divisions. At each session, the division's Firearms Instructor Coordinator read the use-of-force and deadly force policies to the agents. At the Washington Division's fourth quarter fiscal year 1995 qualification, the coordinator elaborated on various points in the policies, such as the prohibitions against firing at moving vehicles and firing warning shots. At the Los Angeles Division's first quarter fiscal year 1996 qualification, the coordinator reviewed the revised October 1995 Treasury use of deadly force policy and confirmed that all agents had received copies of the new policy. He also gave the agents a quiz that included questions on the policy, among other topics. At the Washington, Los Angeles, and Baltimore divisions, we discussed with the Firearms Instructor Coordinators the training they provided and also reviewed their records, including quarterly firearms training documentation, to determine if the use-of-force policies were discussed at all quarterly firearms training in fiscal year 1995. Our review of ATF agent firearms qualification records for the Washington Division showed that at each quarterly session the division's coordinator had certified on the records that he had reviewed ATF's firearms policy. In Los Angeles, the division's former Firearms Instructor Coordinator said that he reviewed the policy at each quarterly qualification session during fiscal year 1995 and had every agent sign their qualification records to attest that they understood the policy. However, Los Angeles' new coordinator for 1996 said that instead of having agents attest that they understood the policy, he would meet the policy requirement by certifying on the agents' qualification records that the use-of-force policy had been reviewed. At the Baltimore Division, the Firearms Instructor Coordinator had instituted, on his own initiative, a new computerized firearms qualification record form for each division agent. The computerized document did not show whether the firearms policy had been discussed. When asked whether the policy had been discussed at each session, the coordinator said that he had discussed it and that, henceforth, he intended to certify in his written records that he had done so.
Furthermore, documentation at each of these offices showed that the firearms policy had been discussed at qualification sessions going back to at least the early 1990s. DEA and FBI officials confirmed that deadly force policies are to be reiterated at their quarterly firearms qualifications. In October 1995, shortly after Treasury revised its use-of-force policy to make it uniform among its components and with the Justice policy, ATF sent the revised policy to all of its field divisions. In his cover letter transmitting the revised policy, ATF's Associate Director for Enforcement pointed out that the policy sets forth uniform standards for the use of deadly force and provides broad guidelines for all Treasury enforcement agencies. Moreover, he emphasized that the uniform policy was effective immediately and that it was the responsibility of each supervisor to ensure that all special agents under their supervision received a copy of the policy. The letter also stipulated that the policy should be addressed at the next quarterly firearms qualification. Agents we spoke with in the Washington, Los Angeles, and Baltimore divisions all confirmed that supervising agents discussed the revised Treasury use-of-force policy with agents under their supervision. As noted above, we observed the Los Angeles Division's quarterly firearms qualification in which the Firearms Instructor Coordinator discussed the new policy. ATF conveys its deadly force policies to new agents through training. Use-of-force and deadly force training provided to new ATF agents reflected Treasury/ATF policies. The types of training new ATF agents received were consistent with the training provided to DEA and FBI new agents. Furthermore, ATF policy requires that the use-of-force and deadly force policies be reiterated to agents throughout their careers during quarterly firearms qualifications and tactical operations briefings. According to DEA and FBI officials, their use-of-force policies are also to be reiterated during firearms qualification. Dynamic entry has been a principal tactical procedure ATF has used to gain entry to premises when executing search and arrest warrants in high-risk operations. ATF believes dynamic entry is a useful tactic that can reduce the potential for injury to both agents and suspects in particular situations. However, on the basis of Treasury's report on the Waco operation and the views of tactical operations experts and ATF's own personnel, ATF decided in October 1995 that dynamic entry would only be planned after all other options had been considered and began to adjust its training accordingly. Similarly, according to DEA and FBI Washington Division officials, their agencies use dynamic entry when necessary to execute high-risk warrants and believe the tactic promotes safety. ATF, DEA, and FBI use generally comparable weaponry and equipment to effect dynamic entries. The exceptions are noted in this chapter. In addition, all three agencies have aircraft that can be used for intelligence and surveillance operations, such as obtaining aerial photography, and their specialized teams generally have similar vehicles, such as sports utility vehicles, from which they can deploy and in which they store equipment. The clothing and additional gear worn by agents of all three agencies when executing warrants are designed to promote agent safety. Dynamic entry is one of several tactical procedures used by ATF to gain entry to premises to execute search and arrest warrants.
Dynamic entry, which may involve a forced entry, relies on speed and surprise and often is used during high-risk operations, such as ones where suspects pose a threat of violence or where evidence can be easily destroyed. Both ATF's case agents and SRTs are to be trained in the dynamic entry technique. However, ATF did not compile any statistics regarding the number of times various tactics, such as dynamic entry, were used during enforcement operations, according to ATF officials. Due to time constraints, we did not review a sample of all ATF enforcement operations to determine how often various tactics were used. However, we discussed the use of dynamic entries with ATF headquarters and division officials, who all agreed that dynamic entry was the principal tactic used by ATF agents during high-risk search and arrest warrant operations. Furthermore, as agreed with the Subcommittee, since SRTs are to be deployed to conduct ATF's higher-risk search and arrest warrants and have access to all of the equipment available to ATF agents as well as additional specialized equipment, we primarily focused our review of ATF's use of dynamic entry and related equipment on operations involving SRT deployments. Our review of SRT deployments for fiscal year 1995 found that the dynamic entry technique was used almost half of the time and was the predominant technique used when entry to premises was required. Moreover, during the period we reviewed, when the dynamic entry technique was used, no SRT member fired a weapon at a suspect. ATF has established an SRT in each of its 24 criminal enforcement field divisions to conduct high-risk operations. These operations include high-risk arrest, search, and undercover operations. SRT membership is voluntary and a part-time duty, and team size ranges from 11 to 20 ATF agents depending on the location of the team. ATF defines high-risk situations, in which activation of the SRT should be considered, as those in which an increased propensity for violence exists based on the nature of the subject, the monetary value of the transaction, or the underlying circumstances of the situation. Some of the factors to be used to determine whether the SRT should be deployed include the suspect's criminal history and propensity for armed violence, the weapons expected at the location, and the fortification of the buildings involved. SRTs are to be deployed at the discretion of the SAC of the division to ensure the safety of ATF agents, other law enforcement officers, and the public during high-risk operations. As seen in table 4.1, SRTs were most often deployed to execute search and/or arrest warrants. According to ATF, DEA, and FBI officials as well as training literature prepared by IACP and local law enforcement agencies, dynamic entry is a principal tactic available to law enforcement agencies for use during high-risk enforcement operations. According to these officials, the primary purpose for using dynamic entry is to ensure the safety of law enforcement personnel as well as suspects and other individuals during a high-risk operation. In addition, dynamic entry may be a preferred tactic when the possibility exists that evidence may be destroyed. Furthermore, these officials as well as other law enforcement officials agree that two characteristics of dynamic entry are speed and surprise.
Various law enforcement articles as well as ATF, DEA, and FBI officials assert that law enforcement studies show reaction time to be slower than action time—it takes longer for individuals to respond to a threat than to make one. Thus, through the use of dynamic entries in certain high-risk situations, law enforcement agents hope to act so quickly that the suspects do not have time to respond or, at a minimum, to gain the advantage by forcing suspects to react to agent actions rather than the reverse. While dynamic entries may require a forcible entry, such as breaking down a door, dynamic entries can also be accomplished through open or unlocked doors. Dynamic entry is only one of several tactical techniques ATF agents can use to execute search and arrest warrants or conduct other enforcement operations. Additional techniques include
• "stealth" or "static" entries, which involve slow, methodical entry and movement in the premises during which each area or room is cleared of danger before proceeding (e.g., when a suspect opens the door in response to the knock and announcement, agents may arrest or detain the suspect and then slowly clear the remainder of the premises of danger before proceeding with a search);
• containment call-outs, in which agents surround a location and, from covered positions, contact the suspect and order the person to exit the premises;
• ruses, such as when agents create a ploy to draw the person out of the premises before making an entry or arresting them; and
• arresting or detaining the suspect away from the location (e.g., vehicle stops) before making an entry or, in the case of an arrest, to avoid having to make an entry.
Whether an entry is required after using certain tactics, such as containment call-outs or ruses, would depend on the purpose of the operation. If a search warrant needs to be executed, even after arresting or detaining the suspect outside the premises, an entry may still need to be made. According to ATF agents and our review of SRT deployment reports for fiscal year 1995, agents often conducted a stealth or static entry, rather than a dynamic entry, after detaining or arresting the primary suspect. According to ATF, DEA, and FBI officials, flexibility in tactical operations is important. Thus, agents may use a combination of these tactics or change tactics during an operation, as necessary. For example, ATF training materials and officials stressed that even after a decision is made to use a dynamic entry, a situation can emerge in the middle of an operation that dictates a change in tactics. In one fiscal year 1995 SRT deployment, for instance, agents planned to conduct a dynamic entry to execute arrest and search warrants. However, after the SRT's arrival at the primary suspect's home, the suspect was located in the backyard and detained before the SRT entered the premises. The agents then used a stealth entry to execute the search warrant. (App. III provides detailed examples of actual SRT operations in which these various tactics were used.) According to ATF, DEA, and FBI officials, the decision regarding whether to use dynamic entry or another technique is dictated by the unique circumstances presented in each operation, with safety as the primary objective.
ATF, DEA, and FBI officials agreed that factors such as the suspect's criminal history and violent tendencies, the location of the premises, and the amount of fortification expected are to be considered when determining whether dynamic entry or another tactic should be used. In 1994, ATF developed and began requiring an Operational Risk Assessment form to be completed by agents when planning an operation. This document is designed to identify critical elements that can affect high-risk tactical operations. The assessment is divided into four categories: the type of enforcement activity, the suspect's criminal history, the weapons possessed by the suspect, and the suspect's location. According to ATF agents, factors such as the suspect's violent tendencies, the location of the premises, and the amount of fortification expected also are considered. The factors developed in the assessment are assigned point values and are totaled to determine the amount of risk believed to be present in the operation. Depending on the amount of risk present, a decision is made whether to use the SRT to accomplish the operation. Above a certain point total, deployment of the SRT is highly recommended. Below that point total, deployment is to be considered but is optional. (A simplified illustration of this point-total rule appears below.) The information gathered for the risk assessment represents critical intelligence information needed by ATF agents to develop an operational plan (discussed below). According to one SRT leader, one specific factor, if present, would not dictate the use of a particular tactic. Instead, he said agents consider the totality of the factors in a situation when developing the operational plan. For example, the fact that a person is expected to be armed and has a history of violence does not result in the SRT employing the same tactic in every case. Other factors also present are considered in conjunction with what is known about the suspect—possibly leading agents to choose different tactics in each case. Thus, according to the SRT leader, agents must consider all factors and determine the most appropriate tactic for each operation. For example, agents have to consider whether a prolonged containment call-out could result in needed evidence being destroyed; whether the surrounding neighborhood would have to be evacuated; or, if the location is in certain high-crime neighborhoods, whether a call-out would create a more dangerous situation (e.g., sniping or a riot). In January 1995, ATF began requiring agents to complete a standard written planning document for enforcement operations called an operational plan. According to ATF documents, ATF instituted this change due to the increase in violence ATF agents encountered during the execution of search and arrest warrants and undercover operations. An operational plan is required before executing any search or arrest warrants or conducting certain undercover operations. The plan is to specify the tactics and personnel to be used during the operation. According to ATF policy, the agent(s) responsible for the planning portion of the operation prepares the plan, and the group supervisor or Resident Agent-in-Charge (RAC) reviews and approves it. According to division officials, in operations involving the SRT, the SRT team leader and/or other SRT members are to develop the plan, which is then to be reviewed and approved by the assistant SAC responsible for supervising the SRT. Copies of the approved plans are to be sent to the SAC of the division.
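The point-total rule on the Operational Risk Assessment form can be illustrated with a minimal sketch. In the Python sketch below, the factor names, point values, and deployment threshold are hypothetical placeholders chosen only to show how individual factor scores are summed and compared against a cutoff; this report does not disclose ATF's actual categories, weights, or threshold.

    # Minimal sketch of a point-total risk assessment. All factor names,
    # point values, and the threshold are hypothetical; ATF's actual form,
    # weights, and cutoff are not disclosed in this report.
    FACTOR_POINTS = {
        "suspect_has_violent_history": 10,     # hypothetical value
        "weapons_expected_at_location": 8,     # hypothetical value
        "location_is_fortified": 6,            # hypothetical value
        "undercover_transaction_involved": 4,  # hypothetical value
    }

    DEPLOYMENT_THRESHOLD = 20  # hypothetical cutoff

    def assess_risk(observed_factors):
        """Total the points for the factors present and apply the two-tier
        rule described in the text: above the cutoff, SRT deployment is
        highly recommended; otherwise, it is considered but optional."""
        total = sum(FACTOR_POINTS[factor] for factor in observed_factors)
        if total > DEPLOYMENT_THRESHOLD:
            return total, "SRT deployment highly recommended"
        return total, "SRT deployment to be considered (optional)"

    # Example: an armed, violent suspect at a fortified location totals
    # 10 + 8 + 6 = 24 points, which exceeds the hypothetical cutoff of 20.
    total, recommendation = assess_risk([
        "suspect_has_violent_history",
        "weapons_expected_at_location",
        "location_is_fortified",
    ])
    print(total, recommendation)

As the sketch suggests, the form reduces the operational factors to a single score, but, as the SRT leader quoted above emphasized, the score informs rather than dictates the choice of tactic.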
Since the decision to use a dynamic entry or other technique requires consideration of the unique factors present in each situation, ATF has not had any specific policies regarding the use of dynamic entry. However, as a result of Treasury's review of the Waco operation and the views of tactical operations experts and ATF's own personnel, ATF decided in October 1995 to implement lessons learned from Waco. One change ATF decided to make was that dynamic entry would only be planned after all other options had been considered. Where circumstances permit, ATF decided that the first tactical option to be considered during operational planning would be a ruse—luring the suspect out. It was believed that luring the suspect out would reduce the risks to the public and agents and ensure a safe, peaceful resolution to the situation. Regardless of whether dynamic entry or another technique is used, ATF agents are required to follow Treasury's use-of-force policy as discussed in chapter 2. Also, ATF agents generally are required by law to knock and announce their identity and purpose before executing search and arrest warrants. Courts, nevertheless, have permitted unannounced entries in certain exigent circumstances. Several agents stated that knocking and announcing also helps to protect their safety—by identifying themselves and their purpose, they sometimes can prevent a situation in which the suspect might react without knowing that they are law enforcement officers. ATF training is to emphasize that no one tactic is an absolute and that a number of factors, beyond the agents' control, will influence whether a dynamic entry or other technique is best. During New Agent Training, ATF agents are to be taught to differentiate between situations that require a dynamic or stealth/static entry. Agents are to be taught to consider various factors—such as the characteristics of the suspect, location, and weapons expected—when determining the best tactics to employ during an operation. Agents are to participate in simulation exercises during which they are required to determine and use the appropriate tactics to gain control of situations. ATF course materials emphasize resolving these exercises by using tactics involving speed, surprise, and, if necessary, violence without employing deadly force. During our visit to the ATF National Academy, we confirmed through observation that a class of new agents was trained on dynamic entries through practical exercises. Agents assigned to SRTs are to receive additional training in tactics, including dynamic entries, during their 2 weeks of SRT basic training at Fort McClellan. Among the more than 20 course topics to be presented are tactical shooting, hostage situations, vehicle assaults, felony vehicle stops, and tactics. Over half of the instruction time allotted during this training is to cover tactics that include team entry and movement techniques during high-risk operations. In addition, according to an October 1995 ATF report, the basic SRT course curriculum has been revised to include instruction on some of the techniques needed to conduct containment call-out operations. This change was initiated by ATF, on the basis of lessons learned from Waco, to address situations in which there is no evidence on the suspect's premises that could be easily destroyed. The new basic SRT training also is to emphasize that dynamic entries are to be planned only after all other tactical options have been considered.
SRTs also are to continue to train on these skills at regularly scheduled in-service sessions. SRTs are required to receive a minimum of 8 hours of training each month, or 24 hours each quarter, in addition to the quarterly firearms training to be received by all ATF agents. This continuing training must be provided by qualified sources, which include other law enforcement agencies. We reviewed SRT quarterly in-service training records for fiscal year 1995 and determined that all 24 SRTs conducted training on entry techniques, such as dynamic and/or stealth/static entries, including practical exercises frequently using simunitions or live fire rounds. However, we did not confirm whether all SRT members attended all training sessions. Our review also showed that most of the training was conducted by ATF instructors. The most common non-ATF source of training was state and local law enforcement agencies. For example, we observed the Washington Division's SRT training for the fourth quarter of 1995. Two days of this training were conducted by representatives of the Los Angeles County Sheriff's Department's Special Enforcement Branch—the SWAT team—and primarily consisted of instruction and practical exercises on stealth entry techniques. In 10 instances, Department of Defense or National Guard units provided some training to SRTs but generally not regarding entry tactics. ATF forward observers received most of the training provided by Department of Defense or National Guard units, covering subjects such as winter survival and surveillance techniques. Furthermore, our review showed that some SRTs' quarterly training exceeded the minimum hourly requirements, such as the training conducted by the Los Angeles and Washington divisions' SRTs. According to ATF division and headquarters officials, dynamic entry has been the primary technique used by both SRTs and non-SRT agents to gain entry to premises when executing high-risk search and arrest warrants. We reviewed reports of each of the 157 SRT deployments during fiscal year 1995. During these deployments, about 185 suspects were arrested. As seen in figure 4.1, SRTs used the dynamic entry technique in 77 of the 157 deployments, or almost half (about 49 percent) of the time, and it was the predominant tactic used when entry to premises was made. However, none of the 77 deployments in 1995 in which SRTs used dynamic entries resulted in ATF agents' firing their weapons at suspects, according to the deployment reports. During only one SRT deployment, involving a "buy/bust" undercover operation in an open area where no entry was required, did ATF agents fire their weapons at suspects. During this incident, three suspects were wounded. (Figure 4.1 data: dynamic entries, 77 deployments; stealth/static entries, 29 deployments.) Furthermore, from fiscal years 1993 through 1995, ATF conducted 35,949 investigations, arrested 22,894 suspects, and deployed SRTs 523 times. During this same period, as seen in table 4.2, SRT members were involved in three intentional shooting incidents, one of which resulted in fatalities. According to several ATF agents, one reason they used dynamic entries so frequently was the types of suspects they generally encountered. According to these agents, with most of their enforcement investigations involving firearms, ATF suspects frequently had previous convictions, were known to have committed or were suspected of past violent acts, and were believed to be armed.
According to ATF reports of its firearms enforcement investigations from fiscal years 1990 to 1995, 46 percent of the suspects ATF arrested had previous felony convictions, 24 percent had a history of violence, and 18 percent were armed at the time of their arrests. According to one SRT leader, ATF encounters basically three types of suspects: those (1) who do not attempt, or even consider, doing anything other than following agents' instructions; (2) who will attempt to flee or provoke an incident if they are given an opportunity or perceive any weaknesses on the part of the agents; and (3) who are willing to do whatever it takes not to be taken into custody. Thus, according to the SRT leader, ATF agents try to prevent incidents from occurring by following proper procedures and eliminating opportunities for suspects to provoke incidents. However, he also said that agents must always plan and be prepared for the worst, so training is very important. FBI and DEA officials said they also use dynamic entry as the primary tactic when entry to premises is required to execute high-risk search and arrest warrants. According to FBI and DEA officials, ensuring the safety of agents, suspects, and the public is the predominant reason they would choose dynamic entry over another tactic. FBI and DEA officials agreed with ATF that the decision to use a dynamic entry or another technique is dictated by the unique circumstances presented in each operation and that factors such as the suspect's criminal history and violent tendencies, the location of the premises, and the amount of fortification expected are considered. FBI has SWAT units in each of its field divisions that, like SRTs, generally are deployed for high-risk operations primarily involving search and arrest warrants. According to the FBI Washington Division's SWAT team leader, dynamic entry is the predominant technique used by the SWAT team as well as non-SWAT agents to gain entry to premises during a high-risk search and arrest warrant operation. According to the SWAT team leader, although FBI special agents and/or the SWAT team will move rapidly to enter and secure the immediate entry area, they generally will de-escalate the speed and move more slowly through the rest of the premises. According to division officials, the Washington Division's use of dynamic entry generally is representative of other FBI field divisions. DEA has established a High Risk Entry and Arrest Team (HEAT) in its Washington, D.C., Division that, like SRTs, is deployed for high-risk operations primarily involving search and arrest warrants. According to Washington Division officials, the principal difference between HEAT and other DEA agents who conduct operations is that HEAT agents train more frequently as a unit and, thus, are better coordinated tactically. According to the DEA Washington Division's HEAT leader, dynamic entries represent the predominant tactic used by both DEA agents and the HEAT team to gain entry to premises during high-risk search and arrest warrants. DEA and FBI agents as well as their respective SWAT and HEAT teams are to receive training on the use of dynamic entries and other tactics during initial training. Also, according to Washington Division officials, SWAT and HEAT teams, like SRTs, train as a unit on a monthly basis in areas such as tactics and firearms.
The equipment ATF agents used during dynamic entries generally included weaponry; breaching equipment, such as battering rams; and/or other tactical equipment designed for safety, such as body bunkers, ballistic vests, and helmets. In addition to the equipment available to all agents, SRTs have access to additional firearms, such as bolt-action and automatic rifles, and specialized tactical equipment, such as diversionary devices. SRT vehicles generally include vans and trucks from which SRT teams can deploy and in which they store equipment. The equipment, vehicles, aircraft, and clothing used by ATF generally are comparable to those used by FBI and DEA during similar operations except where noted. ATF weaponry available for use during dynamic entries includes any of the agency's authorized firearms and less-than-lethal weapons, which include oleoresin capsicum (OC or pepper) spray and expandable tactical batons. According to ATF policy, agents are to be assigned a 9mm semi-automatic pistol as their primary duty weapon and, if requested, a revolver as a backup weapon. Divisional offices or agents also are to be issued shotguns, semi-automatic rifles (e.g., Colt AR15s), and tactical carbines (e.g., Heckler and Koch MP5s), which agents may use during various enforcement operations, including search and arrest warrants. Some MP5s are equipped with a selector switch that, when activated, permits the weapon to fire two rounds with a single depression of the trigger, qualifying these MP5s as automatic weapons. According to ATF officials, only SRT agents are allowed to use automatic MP5s. Also, forward observers on SRTs are to be issued bolt-action rifles and AR15s, which can be used to provide additional cover for agents during warrant services. During our review of division training, we observed ATF agents qualifying with the 9mm pistols, revolvers, shotguns, AR15s, and MP5s at quarterly qualification sessions. Also, we observed the Washington and the Baltimore divisions' in-service SRT training sessions in which they used the 9mm pistols, shotguns, and MP5s. The only weapon we did not observe during these training sessions was the bolt-action rifle issued to forward observers because the forward observers did not qualify with this weapon on the days that we attended. However, the Washington Division forward observer stated that he has taken the bolt-action rifle on SRT activations to provide cover for agents during operations. We obtained and reviewed listings of the firearms issued to agents and/or the division for use during dynamic entries from the Washington and Los Angeles divisions and determined that they included 9mm pistols, shotguns, AR15s, and MP5s as indicated by ATF policy. Furthermore, we reviewed listings from ATF's Inventory Tracking and Equipment Management System of the rifles and tactical carbines issued to agents and/or divisions throughout ATF and determined that the types of weapons included AR15s, MP5s, and bolt-action rifles as indicated by ATF policy. In addition, these inventories showed that some divisions were credited with having obtained or seized certain automatic weapons, such as M16s, for agent use. When asked about these automatic weapons in the inventory listings, the Chief of ATF's Special Operations Division said that ATF had acquired some M16s from the Department of Defense and converted them to semi-automatic rifles (e.g., AR15s) for use by agents.
In addition, we spoke with the Chief of the Firearms and Technology Branch, who confirmed that the M16s ATF had acquired for use by agents had been converted to semi-automatic rifles. Both of these officials noted that some of the other automatic weapons in the inventory lists were incorrectly coded as available for agent use when in actuality they were used only as display or prop weapons—usually seized weapons that are then used as show weapons during undercover operations. We contacted Baltimore and Washington division officials, who also confirmed that the automatic weapons shown on the inventory listing for their divisions had either been converted to semi-automatic rifles or were being used only as prop weapons. Each of these officials stressed that the only automatic weapon used by ATF is the two-shot burst MP5, which is limited to use only by trained SRT members. ATF agents also can use various breaching equipment during dynamic entries to gain entry through doors and windows during search and arrest warrant operations when there is no response to the ATF agents' knock and announcement, particularly at locations where the entrances have been reinforced or fortified to deter entry. ATF breaching equipment includes items such as one- and two-person battering rams, hydraulic door spreaders, pry bars, and other firemen's entry tools. Shotgun and explosive breaching have not been authorized by ATF. Agents also can use body bunkers, which are ballistic shields with clear viewports, during dynamic entries. These bunkers are capable of withstanding direct shots fired from various firearms and provide additional protection to agents as they enter and search premises. During our visit to the ATF National Academy, we observed agents making dynamic entries into buildings using body bunkers. Moreover, during our division reviews, agents demonstrated and/or provided examples of how this equipment has been used during dynamic entries. SRTs have access to the weapons and equipment available to all ATF agents as well as some specialized equipment, including flash/sound diversionary devices (commonly called flash/bangs), rappelling equipment, and night vision goggles. On the basis of our review of reports of all SRT deployments and in-service training sessions in fiscal year 1995 as well as course materials from the initial training received by SRT members at Fort McClellan, we determined that flash/sound diversionary devices represented the specialized equipment SRTs most frequently trained with and used during deployments. According to agents, flash/sound diversionary devices are used to disorient and confuse individuals so that their compliance and detainment can be obtained more safely during enforcement operations. Factors such as the presence of children or elderly individuals are to be considered before the use of these devices, according to agents and SRT training materials. According to agents and several SRT deployment reports, these devices greatly assisted SRTs in subduing suspects, controlling situations, and avoiding shooting incidents. Although SRT members conducted training on rappelling from buildings and aircraft at their initial training and/or fiscal year 1995 in-service training sessions, this tactic was not used during any deployments in 1995, according to the ATF records we reviewed.
In addition, our review of fiscal year 1995 SRT deployment reports and several operational plans showed that SRTs used the weapons discussed earlier as well as body bunkers and various breaching tools during dynamic entries. Depending on the unique circumstances present during an operation, SRT agents may be authorized to use any or all of the equipment during a dynamic entry. The following scenario represents an example of how this equipment has been used by both ATF SRTs and non-SRT agents during dynamic entries. An ATF agent knocks and announces ATF's identity and purpose. If there is no response after a reasonable period and the door is locked or fortified, one or two agents breach the door using a ram or other tool to gain entry to the premises. Teams carrying body bunkers then quickly enter and search the premises to locate suspects and clear the premises of any danger. A bunker team generally consists of two or three agents—one agent carrying the bunker and holding a 9mm pistol on one side, another agent following closely and holding a 9mm pistol on the opposite side of the bunker, and the last agent following closely and carrying an MP5. Additional agents follow the bunker team and handcuff or detain suspects as they are located, and agents located outside the premises provide rear and front security. The Enforcement Support Branch of ATF's Special Operations Division provides vehicles to ATF divisions for use in enforcement operations. According to ATF officials, they have not provided any armored vehicles to ATF field divisions. The vehicles provided generally include sedans, vans, and trucks. SRTs may have designated vehicles from which they can deploy during high-risk operations and in which they maintain their equipment. On the basis of vehicle listings from ATF's Inventory Tracking and Equipment Management System and our review of all SRT deployments in 1995, vehicles such as large delivery trucks, excess military ambulances, vans, and utility trucks have been obtained by or donated to ATF for use by SRTs. Furthermore, our review of the 1995 SRT deployment reports showed that SRTs had used these trucks and vans during actual operations. The only armored vehicles used by the SRTs identified through our review of the inventory listings and 1995 SRT deployment reports were (1) a truck donated to the Los Angeles SRT by a private company that is engaged in the security and transfer of money and negotiable instruments and (2) a Dallas Police Department vehicle borrowed by the Dallas SRT on two occasions for search warrant operations in fiscal year 1995. We observed the Washington, Baltimore, and Los Angeles divisions' SRT vehicles, which are used to store equipment and from which the SRTs had deployed during operations. The Washington Division's SRT vehicles included a van and a large delivery truck, similar to those used by bread or package delivery companies, in which they had installed benches along each side and storage units for equipment. A similar large delivery truck was donated to the Baltimore Division's SRT and used for its operations. In addition to the armored truck, the Los Angeles Division's SRT vehicles included two large sports utility vehicles. All of the Los Angeles Division's SRT vehicles we observed had public address and siren systems installed. The Los Angeles Division's SRT also reported having a large delivery truck and van, which we were unable to observe due to time constraints and their remote location.
ATF obtained five excess military armored "peacekeeper" vehicles from the U.S. Air Force in 1993 to test for possible tactical use, such as to protect agents from additional danger while they extract a wounded agent from a hostile situation. One of these vehicles was sent to the SRT training facility at Fort McClellan, and the rest remained at the Enforcement Support Branch in Rockville. According to Enforcement Support Branch officials, none of these vehicles was ever assigned to a division or used for enforcement operations. ATF decided not to keep these vehicles or obtain others for enforcement operations. At the time of our review, ATF had provided four of these vehicles to local law enforcement agencies. The remaining vehicle at Fort McClellan was awaiting disposal. Our examination of vehicle listings from ATF's Inventory Tracking and Equipment Management System confirmed that none of the peacekeeper vehicles was assigned to field divisions at the time of our review. As of February 1996, ATF had obtained nine excess military fixed-wing, twin-engine, two-seat aircraft for use in enforcement operations. Seven of these airplanes were strategically located within the United States and had been equipped with an advanced thermal imaging system that can be used for surveillance during low visibility conditions. According to ATF's Chief Pilot, the remaining two airplanes were in storage. According to ATF policy, the airplanes primarily are intended for use during surveillance operations, such as aerial photography and general area surveys. According to the Chief of the Special Operations Division, these aircraft and/or previously leased fixed-wing aircraft have been used between 200 and 300 times a year in the past 4 years. In addition, ATF may obtain assistance from other agencies, such as the U.S. Customs Service, Department of Defense, and local law enforcement units, in conducting surveillance operations and transporting agents via aircraft. However, according to ATF policy, ATF agents must obtain approval from ATF's Chief Pilot in the Air Operations Branch of the Special Operations Division before using any non-ATF aircraft. On the basis of the records we reviewed, in only one instance did an SRT use aircraft to transport agents during an operation in fiscal year 1995. In this operation, the SRT was deployed to provide surveillance and possible assistance in the arrest of a suspect in the Oklahoma City bombing investigation and was transported to the desert by the U.S. Customs Service's and a local sheriff's department's helicopters. Aircraft were more frequently used by SRTs for assistance in operational planning efforts. For example, we observed aerial surveillance photographs of locations at which the Los Angeles Division's SRT conducted several of its activations in fiscal year 1995. We were told that these photographs were used during operational planning. ATF agents' basic duty uniform for enforcement operations consists of a dark-blue jacket with matching T-shirt, pants, baseball cap, and ballistic vest. The uniform has yellow lettering in numerous places indicating "ATF," "AGENT," "POLICE," and/or the ATF shield. The uniform also includes a black pistol belt, black nylon holster, and magazine pouch. Agents are not issued footwear but may obtain tactical boots, for which they are to be reimbursed by ATF. Also, according to ATF officials, almost all agents are issued a U.S. Army excess tactical helmet—painted black—with a detachable, clear plexiglass shield for eye protection.
The purpose of the tactical helmets is to provide additional safety for the agents' heads during high-risk operations. The clear shield provides no ballistic protection but is intended to provide protection from flying debris. Agents assigned to SRTs are to wear the same basic duty uniform described above with some additional clothing during enforcement operations. SRT members wear a black webbed vest, which is used to carry extra equipment (e.g., diversionary devices, radios, and extra ammunition), over their ballistic vest. SRTs may also wear fire-retardant gloves for added protection during operations in which diversionary devices are authorized. In addition, the Los Angeles Division's SRT wears fire-retardant balaclavas for further protection during these operations. However, according to the Los Angeles Division's SRT leader, division management requires the balaclavas to be removed as soon as the entry or operation is complete and the area is secure. ATF has obtained excess military clothing, such as camouflage battle dress, for use by agents during training, to reduce the wear on agents' basic duty uniforms. According to ATF agents, this clothing also may be worn during surveillance operations in rural areas. However, according to SRT agents and ATF headquarters officials, with only an occasional exception in rural areas, the camouflage clothing is not to be worn by entry teams during the execution of search and arrest warrants. During our review of division training, we observed ATF agents wearing the basic duty uniform, tactical helmets, and boots, as well as SRT agents wearing their additional carrier vests. Agents with whom we spoke stated that this clothing represented what they wear during enforcement operations. On the basis of our discussions with FBI and DEA training and field division officials as well as our observations of the equipment at their Washington field divisions, ATF weaponry generally is comparable to the weapons used by FBI and DEA. However, unlike ATF, both FBI and DEA have and are authorized to use certain automatic weapons (e.g., the M16 rifle) that, with a single depression of the trigger, will continue to fire rounds until the trigger is released or the ammunition is exhausted. However, DEA Washington Division officials stated they had never used the M16 during enforcement operations, and both DEA and FBI officials agreed that the M16 is not a very practical weapon for urban operations. In addition, the breaching equipment, body bunkers, vehicles, aircraft, clothing, and specialized equipment used by SRTs also generally are comparable with those used by FBI SWAT and DEA HEAT teams. For example, FBI SWAT team members have black or green uniforms, and DEA HEAT team members have green uniforms. Both FBI SWAT and DEA HEAT teams have ballistic helmets and fire-retardant balaclavas. Also, FBI and DEA have aircraft that can be used for intelligence and surveillance operations, such as obtaining aerial photography, and the FBI SWAT and DEA HEAT teams generally have similar vehicles, such as sports utility vehicles, from which they can deploy and in which they store equipment. However, while FBI SWAT teams have some additional tactical equipment not available to SRTs for dynamic entries, such as additional breaching equipment, the DEA HEAT team has somewhat less equipment.
For example, the DEA HEAT team does not train with or have equipment for rappelling, is not allowed to use flash/sound diversionary devices, and does not include a sniper/forward observer position and its related equipment. Dynamic entry is a common tactic used by ATF, DEA, and FBI when entry to premises is required to execute high-risk search and arrest warrants. Dynamic entry is to be used when it is believed to be the safest alternative given the particular circumstances and requirements of an operation. The weaponry and equipment available for use by ATF, DEA, and FBI to effect dynamic entries are generally comparable. ATF has procedures in place for reporting, investigating, and reviewing shooting incidents and use of excessive force allegations involving ATF agents. These procedures are consistent with guidelines and/or standards recommended by IACP, PCIE, and the Commission on Accreditation for Law Enforcement Agencies. Overall, ATF's procedures for shooting incidents also are comparable to those employed by DEA and FBI. However, while ATF's excessive force procedures are comparable to DEA's, there are some distinctions from those employed by FBI. Our review of available information in ATF's investigative files of reported intentional shootings and alleged use of excessive force incidents for fiscal years 1990 through 1995 showed that ATF complied with its investigative procedures in effect at the time of the investigation, except that two investigative files did not contain a record of review by the designated unit at ATF headquarters as required by procedures. Our review also showed that ATF investigations determined that all reported intentional shootings were justified and most reported allegations of excessive force were unsubstantiated. In addition, our review showed that ATF agents found to have engaged in misconduct received sanctions in the form of written reprimands and suspensions and that ATF has implemented lessons learned from its investigations. ATF's procedures for reporting, investigating, and reviewing shooting incidents and allegations of use of excessive force are consistent with recommended guidelines and/or standards for law enforcement agencies established by IACP, PCIE, and the Commission on Accreditation for Law Enforcement Agencies. Overall, the shooting incident procedures are comparable to those employed by DEA and FBI. However, while the excessive force procedures are comparable to DEA's, there are some distinctions from those employed by FBI. ATF also has procedures for addressing administrative tort claims and civil lawsuits filed by complainants. Complainants may file such administrative claims and lawsuits in addition to reporting allegations of excessive force use. ATF has procedures in place for reporting, investigating, and reviewing shooting incidents. These procedures were revised in October 1994. The revisions were initiated by the ATF Director as part of an overall reorganization of ATF's operations. Under the revised procedures, ATF's OI is responsible for investigating reported shooting incidents. Before the revision, ATF's Office of Enforcement (OE) was responsible for investigating shooting incidents. According to an OI official, the transfer of responsibility was part of an effort to make the investigative process independent of the enforcement process. As a result of the overall reorganization, OI reports directly to the ATF Director.
The revisions also include changes in the reporting requirements and procedures and in the procedures for reviewing shooting incident reports. Appendix IV contains a detailed description of ATF's procedures for reporting, investigating, and reviewing shooting incidents. As shown in figure 5.1, once a shooting incident occurs, the ATF agents involved in or present at the incident are required to immediately notify their supervisors. The incident is then to be reported through the chain of command to OE and OI, followed by a written notification within 12 hours of occurrence. Agents are not required to report shooting incidents related to authorized training and recreational shooting. According to the procedures, OI is to determine, on the basis of criteria established by ATF, whether to conduct an investigation of any incident involving the discharge of a firearm. For example, the intentional discharge of a firearm by an agent is to be investigated. However, shootings of canines or other animals, while required to be reported, are not normally investigated. If an investigation of a shooting incident is required, OI is to assign it to a shooting incident review team (SIRT). An SIRT is to investigate the shooting incident according to the process described in figure 5.2. The investigation is to include, among other things, the determination of facts and the analysis of the events and circumstances relating to the incident. The SIRT is then to issue a Shooting Incident Report (SIR) that is to include, among other things, background on the incident, information on suspects' actions, and exhibits. The report is to be submitted to the OI Assistant Director. On the basis of the incident's nature and seriousness, the Assistant Director is to submit copies of the report to all members of the Shooting Incident Review Board (SIRB). The SIRB is composed of the following ATF officials: (1) the OI Assistant Director; (2) the Deputy Associate Director for Criminal Enforcement Programs; (3) the two Deputy Associate Directors for Criminal Enforcement Field Operations in the West and East Regions, respectively; (4) the Associate Chief Counsel for Litigation, Office of the Chief Counsel (OCC); (5) the Assistant Director for Training and Professional Development; and (6) the Chief of the Special Operations Division. According to the OI Assistant Director (and current Chairman), the SIRB meets at the Chairman's request, generally within 2 to 3 weeks after a SIR(s) has been submitted, to review the report(s) and determine whether the shooting(s) was justified. The SIRB may accordingly recommend, among other things, changes to training and operational policies. Cognizant ATF Directorate heads are responsible for implementing the recommendations and are to respond in writing within 30 days describing the actions taken. If the SIRB finds that a shooting was not justified and that ATF agents may have engaged in misconduct, it is to forward the matter to ATF's Professional Review Board (PRB). As discussed below, PRB is to review incidents of alleged agent misconduct, including the alleged use of excessive force. ATF's procedures are consistent with guidelines and/or standards recommended by IACP, PCIE, and the Commission on Accreditation for Law Enforcement Agencies. Specifically, IACP reporting guidelines recommend that use-of-force incidents, including shooting incidents, be reported in an accurate and timely manner.
These guidelines also recommend the preparation of a written report about the incident and the notification of supervisors. In addition, IACP guidelines recommend that shooting incidents be investigated and that the resulting investigative reports be reviewed to determine whether changes in training or other policies are needed. Consistent with these guidelines, ATF procedures require agents to immediately report shootings to their supervisors and to follow up with a written report. In addition, the procedures require that shooting incidents be investigated and—depending on the nature of the incident—the resulting reports be reviewed by the SIRB to determine if changes in policies are warranted. PCIE standards recommend that investigators be qualified, exhibit professional proficiency, and exercise due professional care by following an investigative plan and relevant procedures. In addition, the standards recommend, among other things, the establishment of a management information system. Consistent with these standards, according to OI officials, (1) ATF investigators are special agents who have been trained—both collectively and individually—in the investigation of shooting and other use-of-force incidents, (2) ATF investigators use an investigative plan in the form of an action checklist, and (3) OI maintains investigative files and tracks investigations in a computer database. Commission on Accreditation for Law Enforcement Agencies standards, like IACP's guidelines, recommend that shooting incidents be reported at least verbally by law enforcement officers within a specified period of time and be followed by a written report as soon as practical thereafter. The Commission standards also recommend that law enforcement agencies review the incident reports. ATF's procedures for reporting and reviewing shooting incidents, discussed earlier, are consistent with these standards. For example, ATF's procedures require agents to immediately report shooting incidents to their supervisors. In addition, SIRB—depending on the nature of the incident—is to review investigative reports of shooting incidents. IACP's investigative guidelines and PCIE's standards both recommend that investigations generally be impartial, thorough, and timely. These guidelines and standards also recommend that a thorough investigation involve, among other things, interviewing witnesses, gathering and preserving evidence, preparing a written report, and maintaining an investigative file. According to the IACP guidelines, the purpose of an investigation is to determine (1) the propriety of the use of force and (2) whether the use of force was in accordance with policy. Our review of ATF's shooting incident procedures and investigative files showed that investigations were performed by OI, a Directorate that is independent of OE, which is the Directorate most likely to be involved in shooting incidents. The procedures require that the investigation be completed within 30 days of the incident. They also require that an investigation include, among other things, interviews of participants and witnesses and the gathering of evidence. In addition, an ATF investigation is to (1) determine compliance with ATF and Treasury use-of-force policies, (2) establish a factual record for purposes of potential tort claims or litigation resulting from the incident, and (3) identify lessons learned. Overall, ATF's procedures are comparable to those employed by DEA and FBI.
For example, DEA and FBI reporting procedures require their agents to immediately report shooting incidents through their chains of command to the SACs. The initial reporting is to be followed by the SACs' notification of their respective headquarters by teletype. Accordingly, for example, a DEA SAC is to notify DEA's Office of Inspections and the appropriate drug section. In addition, DEA and FBI procedures, like ATF's, require that written investigative reports be prepared. These reports are to include, where pertinent, witness interviews, police and medical reports, and incident scene diagrams and photographs, among other things. Finally, both DEA and FBI, like ATF with its SIRB, have units to review shooting incidents. There are two distinctions in the three agencies' shooting incident procedures. The first distinction is related to investigative procedures. Specifically, DEA and FBI field divisions involved in a shooting incident may investigate that incident, while ATF field divisions cannot. For example, according to DEA procedures, DEA's Office of Inspections may designate a DEA field division to investigate accidental and other firearm discharges in which the division was involved that did not result in significant injuries. Furthermore, according to a DEA official, the Office of Inspections is to monitor these investigations and conduct a post-investigative review of the field division's final report to determine compliance with investigative procedures. In addition, according to an FBI official, FBI field divisions also may investigate shooting incidents in which they were involved. The Assistant Director of FBI's Inspection Division, in consultation with the Assistant Director of the Criminal Division and the cognizant SAC, is to decide who will conduct the investigation. According to the FBI official, the decision is to be based on the seriousness of the shooting incident. The second distinction is related to the review process. Specifically, FBI's Shooting Incident Review Group, in addition to FBI personnel, also includes one attorney each from Justice's Civil Rights and Criminal Divisions, while DEA's Critical Incident and Firearms Review Committee could include a representative from Justice—to be designated by Justice—on a case-by-case basis. In contrast, ATF's SIRB is composed of only ATF personnel. ATF also has procedures in place for reporting, investigating, and reviewing use of excessive force allegations. According to these procedures, OI is responsible for investigating allegations of various types of misconduct by ATF agents, including the use of excessive force. Overall, DEA's procedures were comparable to ATF's in addressing use of excessive force allegations. However, there were distinctions between ATF's and FBI's procedures. ATF's OCC is responsible for reviewing administrative tort claims and civil lawsuits filed by complainants. According to an OI official, complainants generally report allegations of use of excessive force to the SACs of ATF field divisions or to local law enforcement agencies, which, in turn, refer them to OI. Complainants have also reported their allegations directly to OI and have filed administrative tort claims and/or civil lawsuits. In addition, use of excessive force allegations have been reported by ATF agents who have witnessed such incidents. ATF procedures require agents to report these allegations promptly to OI. OI has discretion over which use of excessive force allegations to investigate.
According to OI officials, after an allegation is reported to OI and documented in an Incident Report, OI reviews the allegation and decides whether to launch a formal investigation. Criteria for determining whether to investigate the allegations include the seriousness of the allegations, the reliability of the source, and the timeliness of their reporting. According to the OI officials, while some allegations—following a preliminary review—are determined to be frivolous and, therefore, are not investigated, most of them are investigated. According to OI, an example of a frivolous allegation is one in which complainants identify ATF agents as having used excessive force against them during an enforcement action, and a subsequent OI preliminary review determines that ATF agents were not involved in the enforcement action. OCC may request an investigation of excessive force allegations as a result of an administrative tort claim or civil lawsuit filed by a complainant. Whether or not OI determines that an investigation is warranted, it is to prepare and retain an Incident Report documenting the allegations.

If OI decides to investigate use of excessive force allegations, the investigation is to be conducted by one of OI’s four regional offices, overseen by OI’s Deputy Assistant Director. The investigation is to be conducted using the same investigative procedures and techniques as those for shooting incident investigations discussed earlier (also see fig. 5.2 and app. IV). At the end of the investigation, a Report of Investigation is to be prepared and submitted to OI, where it is to be reviewed and signed by the Assistant Director and Deputy Assistant Director. The report is then to be submitted to PRB for further review.

According to ATF’s misconduct procedures, all allegations of criminal violations by ATF employees are to be immediately reported to OI. OI is then to refer the matter to the local U.S. Attorney, Justice, or the appropriate state or local prosecutor for jurisdictional and prosecutorial determinations. In addition, if an OI misconduct investigation identifies potential civil rights violations resulting from the alleged use of excessive force, OI is to refer the matter to Justice’s Civil Rights Division for jurisdictional and prosecutorial determination.

Reports of investigations involving alleged misconduct by ATF special agents—including use of excessive force—are to be reviewed by PRB. In addition, PRB is to review any shooting incident referred to it by SIRB for adverse or disciplinary action because of agent negligence. According to the PRB Chairman, from its inception in August 1995 until December 1995, PRB reviewed 44 investigative reports, of which an estimated 10 involved alleged use of excessive force. According to the Chairman, none of the reviewed excessive force reports resulted in disciplinary action against the agents involved. PRB is composed of the following ATF members: (1) the Chief of OE’s Enforcement Management Staff, who is also the PRB chairman; (2) the Deputy Assistant Director of the Office of Science and Information Technology; (3) the Chief of Laboratory Services of the Office of Science and Information Technology; (4) the Chief of OE’s Alcohol and Tobacco Programs Division; and (5) the Chief of the Career Development Division in the Office of Training and Development.
According to an ATF official, before PRB was established, investigative reports were submitted to and reviewed by officials such as an agent’s SAC or OE’s Assistant Director to determine the need for and types of sanctions. According to the PRB Chairman, PRB meets every 2 weeks to review the investigative reports submitted by OI. On the basis of the facts of each incident, PRB determines whether to propose adverse or disciplinary action against the agents involved. PRB coordinates this decision with ATF’s ELRB and OCC. If it is determined that adverse or disciplinary action—such as suspension, demotion, or termination—is warranted, PRB will propose such action in a formal letter. The letter, which ELRB drafts and the PRB Chairman signs, is presented to the agent(s) involved in the incident. The agent has 15 days within which to respond (orally or in writing) to the proposed action or appeal it to a deciding official, normally a senior manager in the agent’s field or headquarters unit. The deciding official must coordinate his/her proposed decision with ELRB and OCC. If ELRB and OCC do not agree and a compromise cannot be reached, the decision can be elevated to the next higher level, up to the Associate Director of the relevant Directorate. The ultimate deciding official is responsible for implementing the disciplinary action. PRB is to receive a copy of the final action decision. Agents retain rights to appeal a decision to the Merit Systems Protection Board (MSPB) or to file a complaint of discrimination or a grievance.

ATF’s procedures are consistent with guidelines and/or standards recommended by IACP, PCIE, and the Commission on Accreditation for Law Enforcement Agencies. IACP’s guidelines and PCIE’s standards for reporting, investigating, and reviewing use of excessive force incidents are the same as those for shooting incidents discussed earlier. The Commission’s standards call for a specialized unit to investigate, among other types of misconduct, the use of force by law enforcement officers. The standards also call for written directives to investigate complaints and maintain records of such complaints. In addition, the standards call for the investigative unit to report to the agency’s chief executive. Consistent with the IACP, PCIE, and Commission guidelines and standards, ATF procedures designate OI, which reports directly to ATF’s Director, to investigate complaints of misconduct, including use of excessive force. In addition, the procedures require agents to report misconduct, including use of excessive force, to OI. As discussed earlier, OI is to conduct preliminary investigations of allegations of misconduct, document them, and, if the facts warrant, formally investigate the alleged misconduct. The report resulting from the investigation is then to be reviewed by PRB.

Overall, ATF’s procedures are comparable to DEA’s in addressing use of excessive force allegations. However, there are distinctions between ATF’s and FBI’s procedures. According to DEA officials, complainants report their allegations either to local law enforcement agencies or to field division SACs. The SACs then report the allegations to DEA headquarters. Specifically, according to DEA’s procedures, allegations of unnecessary (DEA’s characterization of “excessive”) force are to be reported to its Office of Professional Responsibility (OPR). OPR is to determine whether to investigate the allegations and which unit—the agent’s supervisor, the cognizant field division, or OPR itself—will investigate them.
The resulting investigative reports are to be reviewed by DEA’s Board of Professional Conduct. The Board is then to either clear the agent or propose disciplinary or adverse action. Authority for such actions is granted to DEA under 28 C.F.R., section 0.138. The only distinction we noted between ATF’s and DEA’s excessive force incident procedures related to the delegation of investigative responsibility. According to DEA’s procedures, use of unnecessary force allegations may be investigated by the agent’s supervisor, the cognizant field division, or DEA’s OPR. In comparison, ATF’s OI does not delegate its investigative responsibility.

There are distinctions between FBI’s and ATF’s procedures for investigating and reviewing allegations of excessive force. According to FBI’s procedures, reported allegations of excessive force against FBI agents are to be referred by OPR to the FBI Criminal Investigative Division’s Civil Rights Unit for investigation. The Civil Rights Unit is to first discuss all allegations with Justice’s Civil Rights Division to determine whether Justice will request a criminal investigation or decline in favor of an administrative investigation. If the Civil Rights Division does not decline, a criminal investigation of an allegation is to be conducted by FBI agents under the supervision of the Civil Rights Unit and the Civil Rights Division. Once the criminal investigation is complete, and if the Civil Rights Division declines prosecution, the matter is to be referred back to OPR for administrative processing. Specifically, OPR is to review the Civil Rights Unit’s investigative report and is to determine its completeness and whether specific FBI policies, procedures, and guidelines were violated. OPR is then to refer the report to the FBI Personnel Division’s Administrative Summary Unit. The Administrative Summary Unit is to determine—and recommend as applicable—whether any administrative action is warranted, on the basis of investigative facts and applicable case precedents. Authority for such action is granted to FBI under 28 C.F.R., section 0.137.

In contrast to FBI, ATF’s OI, as discussed earlier, is to first conduct an administrative investigation of alleged misconduct by its agents, including excessive force allegations. If the investigation identifies any potential criminal misconduct by an ATF agent, OI is to refer the matter to the appropriate federal, state, or local prosecutor, or to Justice, for jurisdictional and prosecutorial determination. In addition, if the investigation identifies potential civil rights violations resulting from the alleged use of excessive force, OI is to refer the matter to Justice’s Civil Rights Division for jurisdictional and prosecutorial determination.

In addition to reporting use of excessive force allegations, complainants have also filed administrative tort claims and civil lawsuits against ATF related to these allegations. During fiscal years 1990 through 1995, complainants filed 975 administrative tort claims and 528 civil lawsuits. As discussed earlier, during the same period, ATF initiated 76,542 investigations and arrested 46,930 suspects. Under ATF’s procedures, OCC is responsible for reviewing the administrative tort claims and civil lawsuits and advising ATF, Treasury, and Justice decisionmakers on legal issues related to the claims and lawsuits.
In addition to reporting use of excessive force allegations to ATF, complainants may also file administrative tort claims—claims for monetary compensation—under the Federal Tort Claims Act (FTCA). According to the ACC for litigation, these claims are filed against the United States for allegedly negligent or other wrongful acts—such as damaging property—committed by its employees, such as ATF special agents, during the course of their employment. All administrative tort claims involving ATF employees are initially to be filed with ATF’s Chief of the Administrative Programs Division, who has the authority to make the final decision on the claims. After a claim is filed, the Chief is to refer it to OCC for legal review and advice. Procedurally, a claimant must first present an administrative claim to the appropriate federal agency. Once the claim has been denied, or 6 months after the claim has been filed, the claimant may bring a lawsuit in federal court.

At OCC, the ACC for litigation is to assign the claim a unique number and enter it into a computerized case management and tracking system. The claim is then to be assigned to an attorney. The attorney is to conduct an initial screening of the claim to determine whether (1) it was filed in a timely manner and contains the requisite “sum certain” in damages, (2) it includes sufficient documentation of the claimed damages, (3) an investigative or accident report was created, and (4) other agencies were involved in the incident resulting in the claim. If the claim does not contain a sum certain figure or sufficient documentation—such as medical reports and bills—to support the claimed damages, a letter to the claimant is to be prepared requesting additional information and documentation. In shooting incidents, the relevant SIR is to be obtained. In incidents where no investigative report is available, OCC is to request an investigation of the allegations from OI. If another agency was involved in the incident, counsel for that agency is to be contacted and the resolution of the claim is to be coordinated as required by Justice regulations on federal tort claims.

The assigned attorney also is to conduct a legal analysis of the claim’s facts and applicable law. Specifically, the attorney is to determine whether (1) the claim falls within FTCA, (2) the ATF employee was within the scope of employment at the time of the incident, (3) the claimed negligent or wrongful conduct is factually and legally supported, (4) there are any affirmative defenses under the applicable state law that bar recovery—such as negligence on the claimant’s part, and (5) damages are recoverable under state law. To determine damages, the attorney is to consult comparable cases involving similar injuries in the same state. On the basis of this legal review, the ACC for litigation is to prepare a memorandum for the Chief of the Administrative Programs Division. The memorandum is to summarize the case and recommend whether the claim should be allowed in full or in part or be disallowed. The memorandum is to be accompanied by a letter to the claimant and a payment voucher if the claim is being allowed. If the award involves more than $25,000 and it is approved by the Administrative Programs Division Chief, it must also be approved by Justice. Without Justice’s approval, the letter to the claimant cannot be sent. During fiscal years 1990 through 1995, 975 administrative tort claims were filed against ATF.
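The intake steps described above reduce to a four-part screening checklist plus a dollar threshold for Justice approval. Before turning to how the 975 claims were resolved, the following minimal Python sketch restates that logic for clarity; it is illustrative only, and all class, field, and function names are hypothetical shorthand of ours, not elements of OCC’s actual case management and tracking system.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TortClaim:
    claim_number: str
    sum_certain: Optional[float]    # FTCA claims must state a definite dollar amount
    timely: bool                    # filed within the statutory period
    documentation: List[str] = field(default_factory=list)  # e.g., medical reports, bills
    investigative_report_exists: bool = False
    other_agencies: List[str] = field(default_factory=list)

def screen_claim(claim: TortClaim) -> List[str]:
    """Return follow-up actions implied by the four screening questions."""
    actions = []
    if claim.sum_certain is None or not claim.documentation:
        actions.append("Send letter requesting a sum certain and/or supporting documentation")
    if not claim.timely:
        actions.append("Flag claim as untimely")
    if not claim.investigative_report_exists:
        actions.append("Request an investigation of the allegations from OI")
    if claim.other_agencies:
        actions.append("Coordinate resolution with counsel for: " + ", ".join(claim.other_agencies))
    return actions

def requires_justice_approval(award: float) -> bool:
    # Awards over $25,000 need Justice approval in addition to approval by
    # the Administrative Programs Division Chief.
    return award > 25_000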
According to the ACC for litigation, 279 of the 975 claims—or about 29 percent—could be related to the excessive use of force. Specifically, of the 279 claims, 143 resulted from the Waco incident, 14 resulted from the alleged destruction of or damage to firearms, 6 resulted from property damage caused by discharged bullets, 53 resulted from property damage caused during the execution of warrants, 8 resulted from shooting incidents, and 55 resulted from injuries during some type of law enforcement activity. Injury claims are filed for, among other things, assault and battery, such as striking, pushing, or kicking; 25 of the 55 injury-related tort claims alleged such activity. Of the 55 tort claims resulting from injuries, 4 were granted, 12 were pending at the time of our review, and 39 were denied. Of the four tort claims that were granted, one was for the medical costs of an arrestee ($122.46), one was for the detention of individuals during a state enforcement action in which ATF agents participated ($200 for each claimant), one was for the execution of a search warrant at a wrong address ($10,000), and one was for false arrest ($15,000). Twenty-three of the injury tort claims also resulted in civil lawsuits. Of these, 12 were dismissed, 1 was settled for $20,000 for false arrest, and 10 were pending at the time of our review.

According to the ACC for litigation, civil lawsuits against ATF are filed by individuals either as (1) “Bivens cases” for alleged violations of their constitutional rights or (2) FTCA cases for negligence or wrongful acts committed by employees within the scope of their employment. Bivens cases are named after the 1971 Supreme Court decision that held that an individual injured by a federal agent’s alleged violation of the Fourth Amendment may bring an action against the agent. FTCA cases fall under the tort claims statute discussed earlier.

All civil lawsuits against ATF and its employees involving official duties are to be forwarded to OCC. At OCC, the lawsuit is to be assigned a number and entered into a computerized case management and tracking system. The lawsuit is then to be assigned to an attorney for an initial review. The attorney is to determine whether the lawsuit is against the United States, ATF, an ATF employee in his/her official capacity, or an ATF employee in his/her individual capacity. The attorney is also to gather all documentation related to the incident, such as a tort claim file (if applicable), a SIR, OI’s misconduct report, decisions and rulings in related criminal cases, and related criminal investigative or compliance inspection reports. If the lawsuit is against the United States, ATF, or an ATF employee in his/her official capacity, the appropriate ATF office and employee are to be notified of the lawsuit. Justice is to provide representation in the lawsuit without a written request from the employee. However, if the lawsuit is against an employee in his/her individual capacity, the employee is to be notified of the lawsuit and advised that Justice representation is available if requested. If the employee requests Justice representation, the employee must forward a written request concerning the facts of the allegations to OCC. The employee’s supervisor must also provide written notice that he/she is aware of the allegations and the employee’s conduct and has determined that the employee acted properly within the scope of employment.
OCC is then to advise Justice on whether to represent the employee, on the basis of whether the employee acted within the scope of employment and whether representation is in the best interest of the government. During litigation, Justice may solicit ATF’s views regarding a settlement. In general, OCC is to prepare a memorandum analyzing the relevant law, facts, and litigation risk. If a settlement in excess of $500,000 is recommended and the lawsuit is against the United States, ATF, or an ATF employee in his/her official capacity, it must be approved by Treasury’s Assistant General Counsel for Enforcement. If there is a proposed settlement or adverse action against an employee in his/her individual capacity, the employee may request indemnification by making a written request to OCC. OCC is then to make a recommendation on the basis of an analysis of the case’s facts and legal issues. The recommendation is to be submitted to Treasury’s Assistant General Counsel for Enforcement and the Deputy Secretary for Enforcement for approval. Indemnification is approved and available only if the employee acted within the scope of employment, indemnification is in the best interest of Treasury, and appropriated funds are available to ATF.

During fiscal years 1990 through 1995, 528 civil lawsuits were filed against ATF. According to the ACC for litigation, of the 528 lawsuits, 183 were Bivens and FTCA-related lawsuits. Of the 183 lawsuits, 31 were filed as a result of a shooting or other allegation of excessive force use. Furthermore, of these 183 lawsuits, 106 alleged an illegal search and/or seizure; 35 were challenges to criminal convictions; and 42 were related to other allegations, such as failure to place an individual in a witness protection program, libel and slander, and invasion of privacy. In addition, 15 of the 528 lawsuits were filed as a result of the Waco incident. Of the 183 lawsuits, none of which related to Waco, 11 were settled with no finding against or concession of wrongdoing by ATF, 115 were dismissed, and 57 were pending at the time of our review. Among the lawsuit settlements, two were for trespass-type claims involving searches of property and four were for false arrest claims involving, among other things, mistaken identities. The settlements ranged from $200 for false arrest claims to $250,000 for the accidental shooting of a local law enforcement officer.

Our review of available information in ATF’s shooting incident and excessive force investigative files for fiscal years 1990 through 1995 showed that ATF complied with its investigative procedures in effect at the time of the investigation, except that two investigative files did not contain a record of review required by the procedures. The review also showed that (1) ATF investigations found all reported intentional shootings to be justified, (2) ATF investigations found most reported allegations of excessive force to be unsubstantiated, and (3) agents found to have engaged in some type of misconduct received sanctions in the form of written reprimands and suspensions. As discussed in chapter 1, due to time and methodological constraints, we did not evaluate the events that resulted in the incidents or the quality and adequacy of the ATF investigations. In addition, we did not verify whether all shooting and alleged excessive force incidents were reported, or whether all reported allegations of excessive force were investigated.
Our conclusions about ATF’s compliance with its investigative procedures are based on whether we found documentation required by these procedures in the investigative files of shooting and alleged excessive force incidents and whether the documentation indicated that investigative procedures had been followed. Where documentation was not initially found, we obtained documents and/or explanations from ATF officials. Our conclusions apply only to the files we reviewed.

Our review of ATF’s investigations of shooting incidents focused on those incidents where ATF agents reported intentionally discharging their weapons at suspects. As shown in table 5.1, during fiscal years 1990 through 1995, ATF agents were involved in 39 such incidents. We reviewed 38 files of shooting incident investigations. We did not review the shooting incident resulting from ATF’s operation at the Branch Davidian compound in Waco because it was investigated by Treasury and not by ATF. Of the 38 shooting incidents we reviewed, 4 were the result of ATF SRT operations. Of the remaining 34 shooting incidents, 11 resulted from non-SRT ATF operations; 19 resulted from task force or other joint operations with federal, state, and/or local law enforcement agencies; and 4 resulted from other situations.

As part of our review, we also identified and reviewed all 92 files of investigations—conducted during fiscal years 1990 through 1995—involving reported allegations of misconduct by ATF agents in 3 categories of agent misconduct: (1) misconduct during the execution of a search warrant, (2) violation of a person’s civil rights, and (3) assault by an agent on a person. As shown in table 5.2, 25 of the 92 investigations involved incidents specifically alleging the physical abuse of persons and/or property by agents. Of the 25 use of excessive force allegations, 1 was the result of an SRT operation. Of the remaining 24 use of excessive force allegations, 9 resulted from non-SRT ATF operations; 9 from task force or other joint operations with federal, state, and/or local law enforcement agencies; and 6 from other activities, such as a traffic dispute and the interrogation of a suspect.

Our review of available information in ATF’s shooting and excessive force incident investigative files showed that ATF complied with its investigative procedures in effect at the time of the investigation, with one exception discussed below. Specifically, all 39 reported shooting incidents involving ATF agents intentionally discharging their firearms were investigated—either by ATF or Treasury—as required by ATF procedures. In addition, the 38 shooting incident and 25 excessive force investigative files we reviewed contained the following items as required by ATF procedures:
• The identification of individuals—such as ATF agents, other federal and/or state and local law enforcement officers, and suspects—and property involved in the incidents.
• Injuries and/or fatalities resulting from the incidents.
• The type of operation—such as SRT, non-SRT, and task force—and type of law enforcement activity that resulted in the incidents, such as serving search and/or arrest warrants or engaging in undercover operations.
• Written reports of the incidents and interviews with participants and witnesses.
The 25 investigative files of alleged excessive force incidents included a record of review by the designated unit as required by ATF procedures. The shooting incident investigative files generally contained a record of review by the designated unit.
Specifically, 34 shooting incident files contained this information. One shooting investigation review was pending at the time of our review, and one shooting investigation was submitted to ATF’s then Office of Internal Affairs (the predecessor to OI) without a formal review. In addition, an OE official explained that, following a search, a record of headquarters review could not be located for two shooting incident investigative files. These two incidents were investigated before the October 1994 revision of ATF’s investigative procedures.

According to OI officials, other types of information, while not required by ATF procedures, could be included in investigative files if they are available or—in the case of medical reports—are required by special circumstances, such as tort claims. For example, 35 shooting incident files included pertinent descriptions of the incident scenes and 30 files included descriptions of evidence, while 15 and 12 of the alleged excessive force files, respectively, included this information. In addition, 32 shooting and 20 alleged excessive force files contained a record of notification of the incident to ATF headquarters. Also, all 38 shooting and 22 of the 25 alleged excessive force files contained an indication of whether the shootings were justified or the allegations were substantiated, and whether some type of sanction or corrective action was recommended. Finally, 11 shooting and 10 excessive force investigative files included a medical report.

According to the OI officials, the investigative files discussed above may not have included certain information for several reasons. For example, descriptions of evidence—such as weapons and vehicles—may not be available because such evidence may have been discarded by suspects and never recovered. Written records of incident notification—significant activity reports (SARs)—are not material to OI investigations because OI is notified of incidents by telephone. This notification, in turn, is documented in an incident report, which is required to be included in the investigative file. Medical reports may not be included in a file because of legal access and privacy issues. Complainants are required to provide medical reports only for incidents that result in tort claims.

ATF’s investigations and the subsequent reviews determined that all reported shootings were justified and within the scope of ATF’s use-of-force policies. The following are examples. In a 1991 incident, two ATF agents shot and killed a suspect during the serving of a state search warrant with local law enforcement officers at the home of the suspect. The agents were returning fire after being fired upon by the suspect, who was under investigation for armed drug trafficking. The investigation and subsequent review determined that the agents acted within the scope of their duties because they were protecting themselves from hostile fire. In another 1991 incident, an ATF agent exchanged fire with a suspect during an undercover “buy/bust” operation. The suspect—classified as a high-risk “shooter,” or someone considered likely to resist arrest—resisted arrest. The suspect was the target of a large-scale undercover operation by federal law enforcement agencies as a major cocaine dealer and convicted felon. The ATF investigation and subsequent review determined that the shooting was justified because the agent was defending himself during a high-risk operation.
In a 1992 SRT-related incident, two ATF agents shot and killed a suspect who was shooting at other agents and police officers. The incident occurred during the execution of an arrest warrant issued by a state. The execution of the warrant was a joint ATF/local police department operation. The investigation and subsequent review determined that the shooting was justified as self-defense and that ATF agents acted within the scope of their duties. In a 1994 incident, three ATF agents engaged in an undercover “buy/bust” operation exchanged fire with several suspects. The suspects were targets of a gang-related investigation involving narcotics and weapons trafficking. During the course of the operation, one of the suspects attempted to rob one of the agents of his “buy” money. At that point, the agent announced himself as a law enforcement officer, at which time the suspect shot and wounded the agent. The agent and two other agents returned fire. The suspects fled in a vehicle. One suspect was later apprehended. ATF and local law enforcement agency investigations determined that the shooting was justified since the agents fired in self-defense when confronted with a life-threatening situation.

ATF’s investigations and subsequent reviews determined that most reported use of excessive force allegations were unsubstantiated because of a lack of evidence. Specifically, in 18 of the 25 investigations, the ATF agents involved in the alleged incidents were cleared of all allegations because the allegations could not be substantiated. Four investigations found some type of misconduct by ATF agents. Two investigations were ongoing at the time of our review. One investigation was closed without further action because OI determined that there was no need for adjudication; specifically, the allegation dated from 1984 and could not be investigated because of the lack of witnesses and evidence.

The following are examples of allegations that ATF’s investigations could not substantiate. In a 1990 incident, a complainant alleged that an ATF agent assaulted him during an arrest for possession of an unregistered firearm. The complainant was admitted to a hospital as a result of the incident. However, ATF’s investigation of the incident determined that, according to a medical examination, there were no bruises and that the complainant was actually admitted for a heart condition. While in the hospital, the complainant told a deputy U.S. Marshal that he had actually lied about the assault and did so in an attempt to stay out of prison by being admitted to a hospital. Accordingly, the investigation and subsequent review determined that the allegations were unsubstantiated, and the agent was cleared. In a 1992 incident in which the agents were eventually cleared, a complainant alleged that two ATF agents beat him on the face and scraped his arm during the execution of a search warrant. The investigation and subsequent review determined that the allegation was unsubstantiated. Specifically, a medical examination of the complainant showed that the injuries were self-inflicted, a fact that the agents had observed and reported. Following the medical examination’s results, the complainant dropped the allegations against the agents. The agents received letters of clearance. In a 1994 incident, a complainant alleged that ATF agents damaged a lathe—a device used to refinish firearms—and several firearms in the course of returning them. The items had been seized as evidence for a court case.
The investigation determined that the moving company hired by ATF to return the items had damaged the lathe and had twice offered to replace it at no charge. The investigation also determined that the complainant had repaired any damage to the firearms before ATF had an opportunity to examine them. As a result of the investigation, the agents received letters of clearance. In another 1994 incident, a complainant alleged that ATF agents assaulted and physically abused him during the serving of a search warrant. During the ATF investigation, the complainant underwent a polygraph examination, which indicated that he had lied about the allegations. The complainant did not dispute the results and claimed that he had not really intended to file a complaint. The investigation and subsequent review determined that the allegations were unsubstantiated, and the agents received letters of clearance.

Four of the 25 investigations found evidence of some agent misconduct during incidents where use of excessive force was alleged. As discussed below, the agents found to have engaged in such misconduct received sanctions in the form of written reprimands and suspensions. In one incident, in fiscal year 1990, two agents were suspended for 1 day each for failing to report an incident to their supervisors. The incident occurred during the search of suspected gang members in front of a grocery store. According to the agents, the suspects were making threatening gestures at their unmarked government vehicle. During the search, a struggle ensued, during which an agent shoved a suspect against a store window, breaking it. The store owner reported the incident to a local law enforcement agency, which then reported it to the ATF SAC. In a second incident, in 1990, an investigation determined that during an SRT-related operation involving the undercover purchase of firearms, an ATF agent engaged in a loud and offensive confrontation with a confidential informant who was part of the operation. The agent apparently became upset because narcotics were found in the informant’s vehicle. As a result of the incident, the agent seized the narcotics but never turned them over to any law enforcement agency. The agent later claimed that he flushed the narcotics down a toilet. The agent was suspended for 5 days for conduct “unbecoming to an agent” and for failing to properly handle suspected narcotics. In a third incident, in 1993, the complainant alleged that an ATF agent verbally and physically abused him, aggravating a pre-existing injury. The incident occurred during the serving of a federal search warrant. An ATF investigation and subsequent review determined that the allegations of physical abuse were unsubstantiated. However, the agent received a written reprimand for unprofessional conduct and for initiating a confrontation with the complainant that could have become more violent. In a fourth incident, in 1994, a suspect’s girlfriend filed a complaint alleging that an ATF agent struck her boyfriend—a parole violator—with a firearm while arresting him. The agent claimed that the suspect struck his head on the pavement during a brief struggle. The suspect submitted to a polygraph test, the results of which indicated deception on the suspect’s part. However, the agent received a written reprimand for failing to report an injury to a suspect during his official duties.
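For reference, the dispositions of the 25 excessive force investigations and the sanctions described above can be tallied as a quick arithmetic check. The minimal Python sketch below uses only the figures reported in this chapter; the category labels are our own shorthand.

# Dispositions of the 25 use of excessive force investigations, as reported above.
dispositions = {
    "agents cleared (allegations unsubstantiated)": 18,
    "some agent misconduct found": 4,
    "ongoing at the time of our review": 2,
    "closed without adjudication (1984 allegation)": 1,
}
assert sum(dispositions.values()) == 25  # all 25 investigations accounted for

# Sanctions in the four misconduct incidents: two 1-day suspensions (first
# incident), one 5-day suspension (second), and one written reprimand in each
# of the third and fourth incidents -- five sanctioned agents in all.
sanctioned_agents = 2 + 1 + 1 + 1
assert sanctioned_agents == 5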
As noted earlier, we did not verify whether all shooting and alleged excessive force incidents were reported, or whether all reported allegations of excessive force were investigated. However, we did a limited check to identify incidents that ATF may not have investigated by (1) conducting a literature search using the LEXIS/NEXIS electronic database to identify alleged ATF use of excessive force incidents reported in the media and (2) contacting the American Civil Liberties Union and the National Rifle Association to identify allegations of ATF excessive force reported to these organizations. Through these sources, eight incidents were identified. We then (1) cross-checked the alleged incidents we identified against ATF’s (a) investigative records, to determine if ATF investigated these incidents, and (b) litigation records, to determine if any complainants filed civil lawsuits against ATF related to these incidents; and (2) discussed these incidents with cognizant officials.

Our cross-check of ATF’s investigative and litigation records determined that three of the incidents—one shooting and two excessive force allegations—were investigated by ATF. Our cross-check also determined that two other incidents did not involve allegations of excessive force. Finally, our cross-check determined the following regarding the remaining three incidents:
• A 1991 incident of alleged excessive force reported by the National Rifle Association was not investigated by ATF. ATF officials told us that they were not aware of this incident because, for example, a complaint may not have been filed. Moreover, no lawsuit had been filed against ATF at the time of our review.
• A 1991 incident of alleged excessive force reported in newspaper accounts and by the American Civil Liberties Union was not investigated by ATF after its preliminary inquiry found that another federal law enforcement agency’s personnel—not ATF’s—may have been involved in the alleged incident. A lawsuit was filed naming ATF, among others, as a defendant. The district court dismissed all claims against all of the federal defendants. Subsequent plaintiff motions were denied by the district court. The judgment of the district court was upheld on appeal.
• A 1993 incident of alleged excessive force reported in newspaper accounts and by the National Rifle Association was not investigated by ATF at the request of the local U.S. Attorney’s Office, which was litigating a related lawsuit against ATF. A district court dismissed a substantial part of the lawsuit.
In sum, our limited check showed that, for the eight incidents, ATF’s OI—or its predecessor—either investigated or had reasons not to investigate the shooting incidents or allegations of excessive force it was aware of. Of the three incidents that OI was not aware of, two did not involve an allegation of excessive force. With respect to the remaining incident, OI and OCC officials told us that no complaints or lawsuits related to the incident were filed with ATF and, consequently, they were not aware of it. It should be emphasized that these results cannot be generalized beyond the eight incidents.

As part of its review process, ATF has implemented lessons learned resulting from its investigations of shooting incidents. ATF is also in the process of implementing lessons learned from the 1993 operation at Waco. According to the SIRB Chairman, changes to various policies are transmitted to agents through the SIRB report review and recommendation process.
As discussed earlier and in appendix IV, under this process, SIRB is to make formal recommendations about changes in enforcement operations and policies, training, and technology to the appropriate ATF Directorates. The heads of the Directorates are responsible for implementing the recommendations and responding to SIRB in writing. The recommended changes are to reach field agents through their SACs, who are ultimately responsible for their implementation. While SIRB does not have the power to enforce the recommendations, their implementation is to be verified as part of OI’s inspection of ATF’s divisions. Our review of DEA’s and FBI’s procedures for implementing lessons learned from their investigations showed that these agencies’ procedures were similar to ATF’s.

OI officials provided two examples of how lessons learned from OI’s investigations—and SIRB’s subsequent recommendations—have been implemented. In the first example, following a February 1995 meeting, SIRB determined that some ATF agents were accidentally discharging their firearms while using body bunkers. SIRB concluded that factors such as wearing gloves and using the off/weak hand may have contributed to the accidental discharges. In a memorandum to the Associate Director for Enforcement, SIRB recommended that agents be trained in using firearms carried in their weak, or off, hands while also using a body bunker and be requalified for firearm use while wearing gloves. In response to these recommendations, both the Associate Director for Enforcement and the ATF Director issued separate memorandums to cognizant officials, such as SACs and the Chief of ATF’s National Academy, informing them of the need to improve body bunker and firearms training. Accordingly, we observed agents at two ATF field divisions wearing gloves during their quarterly firearm training sessions. In the second example, in response to the bursting of a shotgun barrel during a full-load test-firing—an incident that resulted in injuries—SIRB recommended that ATF’s Director of Laboratory Services purchase and use remote and secure test-firing systems and safety shields. The Laboratory Services Director responded in a memorandum that he not only had implemented SIRB’s recommendations but also had taken additional actions, such as modifying all safety procedures at ATF’s laboratories.

In October 1995, ATF issued a report identifying issues arising from the operation at the Branch Davidian compound in Waco and outlining the corrective actions it had taken in response to these issues. According to ATF officials, they reviewed the Treasury report on the Waco operation and addressed each issue raised in the report. In addition, ATF consulted with experts from other organizations and law enforcement agencies, such as IACP, the National Tactical Officers Association, the FBI, and the Secret Service, to identify weaknesses that affected the policies and procedures used during the Waco operation. According to the officials, copies of the report have been distributed to all ATF field divisions to aid implementation. Also, according to these officials, ATF will (1) continually evaluate the progress of its responses to the Waco lessons and make changes as necessary and (2) annually update the October 1995 report to reflect the progress and changes made.
Finally, according to the officials, ATF is continuing to develop training programs to reflect the changes and to ensure that there is consistency between policy and training and the execution of enforcement operations. According to the ATF report, ATF learned certain lessons related to the planning and execution of enforcement operations and to addressing post-operation issues. The following is the report’s summary of the lessons learned:
• ATF may not be able to carry out every tactical operation it encounters alone and must be prepared to seek assistance from other federal, state, and local law enforcement agencies when necessary;
• raid planners must have accurate and timely intelligence;
• raid planners must have training in a wide range of tactical options;
• raid plans must contain carefully constructed contingency plans so that the momentum of going forward does not take control over rational decisionmaking;
• raid commanders must be chosen on the basis of their ability to handle the type of operation involved and not simply on the basis of territorial jurisdiction;
• raid commanders must receive accurate and timely intelligence;
• raid commanders must have clearly defined duties and responsibilities;
• the incident commander must be located at the command post, where he/she can have access to all relevant intelligence and operational developments;
• operational security must receive greater attention;
• in crisis situations, ATF agents who are emotionally involved and exhausted should not be left to handle media relations; and
• ATF personnel, at all times, must be prepared to tell the truth and admit mistakes. If misstatements are made, they must be corrected as quickly as possible.

According to the report, among its responses to these lessons, ATF is in the process of providing command and control and crisis management training to decisionmakers, developing a tactical intelligence structure, developing policy and training for operational security, and restructuring and enhancing the SRTs. Regarding the SRTs, ATF determined that they needed to be better equipped, to be provided with more specialized training, and to have expanded capabilities. Also, according to ATF’s October 1995 report, as a prelude to other changes, the ATF Director in October 1994 restructured ATF’s headquarters operations. Specifically, the Director elevated the training function to an executive-level position and created the Training and Professional Development Directorate. He also created the Science and Information Technology Directorate and, as discussed earlier, made the inspection function independent of the enforcement function.

ATF has procedures in place for reporting, investigating, and reviewing shooting incidents and allegations of excessive force by ATF agents. These procedures are consistent with guidelines and/or standards recommended by IACP, PCIE, and the Commission on Accreditation for Law Enforcement Agencies and overall are comparable to those employed by DEA and FBI, except for the distinctions noted herein. On the basis of our review of ATF’s investigative files, ATF has complied with its investigative procedures in effect at the time of the investigation, except that two investigative files did not contain a record of headquarters review as required by the procedures. Also, ATF’s investigations determined all shootings to be justified and most use of excessive force allegations to be unsubstantiated.
In addition, agents who ATF determined had engaged in some type of misconduct either were suspended or received letters of reprimand. Finally, we found that ATF has implemented lessons learned from shooting investigations and is in the process of implementing lessons learned from Treasury’s investigation of the Waco incident.
Pursuant to a congressional request, GAO reviewed the Bureau of Alcohol, Tobacco, and Firearms' (ATF) use of deadly force and dynamic entry, focusing on: (1) ATF policies on the use of deadly force; (2) how ATF conveys its policies to its agents; (3) the reasons for and the extent to which ATF uses dynamic entry and what equipment it uses; (4) ATF compliance with its procedures for investigating shooting and alleged excessive force incidents; and (5) how the Drug Enforcement Administration (DEA) and the Federal Bureau of Investigation (FBI) address similar issues. GAO found that: (1) except for a few instances, the 1988 ATF policy on deadly force was consistent with prior DEA and FBI policies and the 1995 Departments of the Treasury and Justice uniform policies which superseded their agencies' policies; (2) agents may use deadly force only when they reasonably believe that suspects pose an imminent threat of death or serious injury to themselves or other persons; (3) the three agencies' new agent training in their deadly force policies is similar and all agents are required to be retrained on a quarterly basis throughout their careers; (4) dynamic entry is used to ensure personal safety when access to premises is needed in high-risk situations or when suspects might swiftly destroy evidence; (5) the three agencies' methods of dynamic entry and the weaponry and equipment they used were similar; (6) ATF reporting, investigating, and review procedures for shooting and excessive force incidents are consistent with recommended standards and similar to the other agencies'; (7) ATF and DEA excessive force procedures are generally comparable, but FBI procedures require that all allegations be submitted to Justice for possible criminal or civil rights violations before FBI investigates them itself; (8) ATF generally complied with its investigative procedures during fiscal years 1990 through 1995; (9) ATF found that all intentional shootings were justified, most allegations of excessive force were unsubstantiated, and 5 agents warranted sanctioning for misconduct; and (10) ATF is implementing lessons learned from the incidents, particularly the Waco, Texas, raid.
In September 2012, we found that DHS did not know how much its components invested in R&D, making it difficult to oversee R&D efforts across the department. According to DHS budget officials, S&T, DNDO, and the U.S. Coast Guard were the only components that conducted R&D, and we found that they were the only components that reported budget authority, obligations, or outlays for R&D activities to OMB as part of the budget process. However, we reported that the data DHS submitted to OMB underreported DHS’s R&D obligations because DHS components obligated money for R&D contracts that were not reported to OMB as R&D. Specifically, for fiscal year 2011, we identified an additional $255 million in R&D obligations by other DHS components. These obligations included DHS components providing S&T with funding to conduct R&D on their behalf and components obligating funds through contracts directly to industry, universities, or DOE’s national laboratories for R&D. Further, we found that the data for fiscal years 2010 through 2013 that DHS submitted to OMB also underreported DHS’s R&D budget authority and outlays because DNDO did not properly report at least $293 million in R&D budget authority and at least $282 million in R&D outlays. We reported that DHS budget officials agreed that DHS underreported its R&D spending and, when asked, could not provide a reason why the omission was not flagged by DHS review.

In addition, in our 2012 report, we found that DHS’s R&D budget accounts included a mix of R&D and non-R&D spending. For fiscal year 2011, we estimated that 78 percent of S&T’s Research, Development, Acquisition, & Operations account, 51 percent of DNDO’s Research, Development, & Operations account, and 43 percent of the Coast Guard’s R&D budget account funded R&D activities. This mix further complicated DHS’s ability to identify its total investment in R&D.

We also reported in September 2012 that DHS did not have a department-wide policy defining R&D or guidance directing components how to report R&D activities. As a result, we concluded that it was difficult to identify the department’s total investment in R&D, which limited DHS’s ability to oversee components’ R&D efforts and align them with agency-wide R&D goals and priorities, in accordance with Standards for Internal Control in the Federal Government. DHS officials told us at the time that DHS used OMB’s definition of R&D, but the definition was broad and its application may not be uniform across components, and thus, R&D investments may not always be identified as R&D. We found that the variation in R&D definitions may contribute to the unreliability of the reporting mechanisms DHS used for R&D investments in budget development and execution, as discussed above. We recommended that DHS develop and implement policies and guidance for defining and overseeing R&D at the department that include, among other things, a well-understood definition of R&D that provides reasonable assurance that reliable accounting and reporting of R&D resources and activities for internal and external use are achieved. DHS agreed with our recommendation and stated that it planned to evaluate the most effective path forward to guide uniform treatment of R&D across the department in compliance with OMB rules and was considering a management directive, multi-component steering committee, or new policy guidance to help better oversee and coordinate R&D.
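To illustrate the kind of reconciliation such reporting guidance contemplates, the minimal Python sketch below compares component obligations coded as R&D against what was reported to OMB and totals the gap. The records, amounts, and field names are hypothetical, not actual DHS data; for fiscal year 2011, the actual gap we identified was $255 million.

from typing import NamedTuple

class Obligation(NamedTuple):
    component: str
    description: str
    amount: float          # dollars (hypothetical)
    reported_to_omb: bool  # was this obligation reported to OMB as R&D?

# Hypothetical records of the kinds of obligations described above.
obligations = [
    Obligation("Component A", "funding transferred to S&T for R&D", 40_000_000, False),
    Obligation("Component B", "R&D contract awarded directly to a university", 15_000_000, False),
    Obligation("S&T", "R&D contract reported in the budget submission", 120_000_000, True),
]

# Total the R&D obligations that never reached OMB's R&D figures.
underreported = sum(o.amount for o in obligations if not o.reported_to_omb)
print(f"R&D obligations not reported to OMB: ${underreported / 1e6:.0f} million")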
As of July 2014, DHS has updated its guidance to include a definition of R&D but, as discussed in more detail below, efforts to develop a specific policy outlining R&D roles and responsibilities and a process for coordinating R&D with other offices remain ongoing and have not yet been completed. We will continue to monitor DHS’s efforts to implement these recommendations.

We reported in September 2012 that the Homeland Security Act of 2002 provides S&T with the responsibility for, among other things, coordinating and integrating all research, development, demonstration, testing, and evaluation activities within DHS and establishing and administering the primary R&D activities of the department. S&T developed coordination practices that fall into four general categories: (1) S&T component liaisons, (2) R&D agreements between component heads and S&T, (3) joint R&D strategies between S&T and components, and (4) various R&D coordination teams made up of S&T and component project managers, which are discussed in detail in our 2012 report and 2013 testimony.

Despite S&T’s efforts to coordinate R&D activities, in September 2012, we reported that R&D at DHS was inherently fragmented because several components within DHS—S&T, the Coast Guard, and DNDO—were each given R&D responsibilities in law, and other DHS components may pursue and conduct their own R&D efforts as long as those activities are coordinated through S&T. Fragmentation among R&D efforts at DHS may be advantageous if the department determines that it could gain better or faster results by having multiple components engage in R&D activities toward a similar goal; however, it can be disadvantageous if those activities are uncoordinated or unintentionally overlapping or duplicative. Specifically, in our review of data on about 15,000 federal procurement contract actions coded as R&D that DHS components took from fiscal years 2007 through 2012, we found at least six department components involved in R&D activities. We examined 47 R&D contracts awarded by these components—selected because they appeared to have activities similar to another contract’s—and found 35 instances among 29 contracts in which the contracts overlapped with activities conducted elsewhere in the department. Taken together, these 29 contracts were worth about $66 million. In one example of the overlap, we found that two DHS components awarded five separate contracts that each addressed detection of the same chemical.

While we did not identify instances of unnecessary duplication among these contracts, in September 2012 we found that DHS had not developed a policy defining who is responsible for coordinating R&D activities at DHS that could help prevent overlap, fragmentation, or unnecessary duplication and did not have tracking mechanisms or policies to help ensure that overlap is avoided and efforts are better coordinated, consistent with Standards for Internal Control in the Federal Government. S&T officials told us at the time that a process did not exist at DHS or within S&T to prevent overlap or unnecessary duplication but that relationships with components mitigate that risk. They also stated that S&T has improved interactions with components over time. We concluded that the existence of overlapping R&D activities, coupled with the lack of policies and guidance defining R&D and coordination processes, was an indication that not all R&D activities at DHS were coordinated to ensure that R&D is not unnecessarily duplicative.
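As a rough illustration of how contracts with similar activities can be surfaced for review, the Python sketch below flags contract pairs whose descriptions share keywords. This is not the methodology we used, which relied on analyst review of contract actions; the contract identifiers, descriptions, and similarity threshold are all hypothetical.

from itertools import combinations

def keywords(text):
    # Crude keyword extraction: lowercase words longer than three characters.
    return {word for word in text.lower().split() if len(word) > 3}

def jaccard(a, b):
    # Share of keywords the two descriptions have in common.
    return len(a & b) / len(a | b)

# Hypothetical contract descriptions, not actual DHS contract data.
contracts = {
    "contract-1": "detection of chemical X in cargo containers",
    "contract-2": "portable sensor for detection of chemical X",
    "contract-3": "maritime patrol route optimization software",
}

# Flag pairs similar enough to merit analyst review for potential overlap.
for (id_a, desc_a), (id_b, desc_b) in combinations(contracts.items(), 2):
    score = jaccard(keywords(desc_a), keywords(desc_b))
    if score >= 0.25:
        print(f"review pair: {id_a} / {id_b} (keyword similarity {score:.2f})")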
We also found in September 2012 that neither DHS nor S&T tracked all ongoing R&D projects across the department, including R&D activities contracted through the national laboratories. As part of our review, we identified 11 components that reimbursed the national laboratories for R&D from fiscal years 2010 through 2012, but S&T’s Office of National Laboratories could not provide us with any information on those activities and told us it did not track them. According to S&T, the Office of National Laboratories’ ability to provide information on activities across the department is limited by components inconsistently operating within the defined process for working with the national laboratories. As a result, we recommended that DHS develop and implement policies and guidance for overseeing R&D that include, among other things, a description of the department’s process and roles and responsibilities for overseeing and coordinating R&D investments and efforts, and a mechanism to track existing R&D projects and their associated costs across the department. DHS agreed with our recommendation and stated at the time that S&T was implementing a collaborative, end-user focused strategy to coordinate and interact with components to better ensure S&T’s efforts aligned with components’ needs and that it was considering developing new policy guidance for R&D activities across the department. As of July 2014, DHS has not developed new policy guidance but is conducting portfolio reviews across the department, as directed in committee reports accompanying the fiscal year 2013 DHS appropriations act, aimed at coordinating R&D activities. However, implementing our recommendation to develop a policy that defines roles and responsibilities for coordinating R&D and coordination processes, as well as a mechanism that tracks all DHS R&D projects, could better position DHS to mitigate the risk of overlapping and unnecessarily duplicative R&D projects. We will continue to monitor DHS’s efforts to develop a policy to better coordinate and track R&D activities at the department.

In September 2013, we reported that DHS S&T, the Coast Guard, and DNDO reported producing 97 border and maritime R&D deliverables at an estimated cost of $177 million from fiscal years 2010 through 2012. The types of border and maritime R&D deliverables produced by these R&D entities were wide-ranging in their cost and scale and included knowledge products and reports, technology prototypes, and software. For example:
• Knowledge products or reports: One of the DHS Centers of Excellence developed formulas and models to assist in randomizing Coast Guard patrol routes and connecting networks together to assist in the detection of small vessels.
• Technology prototypes: S&T BMD developed prototype radar and upgraded video systems for use by Border Patrol agents and a prototype scanner to screen interior areas of small aircraft without removing panels or the aircraft skin.
• Software: DNDO developed software that extracts data from radiation portal monitors and uses the data to improve algorithms used in detecting radioactive material.

As we reported in September 2013, R&D customers we met with had mixed views on the impact of the R&D deliverables they received. For example, we reviewed 20 S&T BMD deliverables produced from fiscal years 2010 through 2012 at a cost of $28.7 million.
We found that the customers of 7 deliverables stated that the deliverables met their offices' needs, customers of 7 stated that they did not, customers of 4 did not know, and customers for 2 could not be identified. For example, CBP's Office of Technology Innovation and Acquisition reported that S&T's analysis and test results on aircraft-based use of wide area surveillance technology helped CBP to make a decision on whether it should pursue acquiring such technology. In cases where customers said that the deliverables were not meeting their needs, the customers explained that budget changes, other ongoing testing efforts, or changes in mission priorities were the reasons deliverables had not met their needs, and customers pointed out that their relationship with S&T had been positive and highly collaborative. In other cases, customers pointed out that while the deliverable had not been used as intended, it informed their office's decision making and helped to rule out certain technologies as possibilities. In this regard, the customers felt the R&D was successful, despite the fact that the deliverable had not been or was not being used. In addition, S&T BMD officials explained that some of the division's older projects did not have identifiable customers because its former process for selecting projects created the potential to engage in R&D without a clear commitment from the customer. In February 2012, S&T issued a new project management guide that requires project managers to specify the customer by office and name and to describe customer support for the project, including how the customer has demonstrated commitment for and support of the project. S&T officials said they believed this new process would prevent future R&D funding from going toward projects without a clear customer. Additionally, we reported that from fiscal year 2010 through fiscal year 2012, DNDO produced 42 deliverables at a cost of $115.9 million, which included 6 discontinued projects and 36 projects that were either transitioned to the next phase of R&D or completed. DNDO's R&D differs from S&T's in several respects. For one, a DNDO project may start at the basic research level and be merged into other similar efforts to achieve a higher project goal. In these cases, the R&D customers are DNDO project managers rather than another DHS customer, such as CBP. We discussed 5 DNDO R&D deliverables at various R&D phases with DNDO officials—4 of which were deliverables from ongoing or completed projects and 1 of which was from a discontinued project. Two of the 5 projects we discussed had moved from early-stage R&D into other projects further along in DNDO's project management process. Two of the 5 projects were completed; 1 reportedly provided information that informed DNDO decision making, and the other resulted in a commercialized product. With regard to the 1 discontinued project, DNDO officials said that the project's technology was determined to be too expensive to continue pursuing. We reported that although S&T project managers sought feedback from their customers during the execution of projects, S&T did not gather and evaluate feedback from its customers to determine the impact of its completed R&D efforts and deliverables, making it difficult to determine if the R&D met customer needs. Further, in some cases, the customer of S&T's R&D was not clear or the results of the R&D were unknown.
For example, a CBP customer identified by S&T was aware of two R&D deliverables that S&T said were transitioned to his office, but the official was unable to provide additional information on the projects' impact. According to S&T officials, because S&T deals with multiple DHS components and is not within the same agency as its customers, it is sometimes difficult to identify who the customer of the R&D is and to determine what the impact of the R&D was. S&T officials also stated that in S&T's 2012 update to its project management guide, S&T had included in its project closeout process a step to collect feedback from all relevant customers and a template for collecting this feedback. While we found in September 2013 that S&T had developed a process and template to collect feedback at the end of each project and had incorporated this into its project management plan, we also found that it did not plan to survey customers each time it provided a deliverable. This is relevant because S&T projects are often conducted over several years before they are concluded and often produce multiple deliverables for a customer over many years that are designed to meet a specific operational need. For example, a Ground Based Technologies project began in fiscal year 2006 and was slated to continue through fiscal year 2018. During this period, S&T provided multiple R&D deliverables to CBP, including test results comparing different ground-based radar systems. The National Academy of Sciences has stated that feedback from both R&D failures and successes may be communicated to stakeholders and used to modify future investments. At the time of our report, S&T had not established timeframes and milestones for collecting and evaluating feedback from its customers on the extent to which the deliverables it provides were meeting customers' needs. As a result, we recommended that S&T establish timeframes and milestones for collecting and evaluating feedback from its customers to determine the usefulness and impact of both its R&D projects and project deliverables, and use that feedback to make better-informed decisions regarding future work. S&T officials concurred with the recommendation at the time of our review and reported that S&T was developing R&D strategies with DHS components, which would include a strategic assessment of components' R&D needs and would be updated annually on the basis of customer feedback. As of July 2014, S&T has completed strategic plans with the Border Patrol, the Transportation Security Administration (TSA), and the Secret Service. Further, at the time of our review, S&T reported that it was developing a new project management guide to improve R&D management at all stages of development and that the guide would include a template for project managers to use to gather customer feedback on a more consistent basis. In November 2013, S&T finalized its guide, which includes a customer survey template to obtain feedback on the quality, timeliness, and relevance of a deliverable, as well as detailed descriptions of actions project managers should take throughout a project to ensure the R&D is aligned with customer needs. We will continue to review the implementation of these actions to determine whether they fully address the intent of our recommendation.
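To make concrete what per-deliverable feedback collection might look like in structured form, the following is a minimal sketch. The record fields, example scores, and dates are hypothetical; only the three rated dimensions (quality, timeliness, and relevance) come from the survey template described above, and the example project name echoes the Ground Based Technologies effort mentioned earlier.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class DeliverableFeedback:
    """One customer survey response for one deliverable. Field names
    are hypothetical; only the three rated dimensions come from the
    template described above."""
    project: str
    deliverable: str
    customer_office: str
    received: date
    quality: int     # rated 1 (poor) to 5 (excellent)
    timeliness: int
    relevance: int
    comments: str = ""

def project_scores(responses):
    """Average each rated dimension across a project's responses so
    feedback on individual deliverables can inform future work."""
    return {
        dim: round(mean(getattr(r, dim) for r in responses), 2)
        for dim in ("quality", "timeliness", "relevance")
    }

# Invented example: two deliverables from the multiyear Ground Based
# Technologies project mentioned above (dates and scores are made up).
responses = [
    DeliverableFeedback("Ground Based Technologies", "radar comparison test results",
                        "CBP", date(2012, 6, 1), quality=4, timeliness=3, relevance=5),
    DeliverableFeedback("Ground Based Technologies", "follow-on field evaluation",
                        "CBP", date(2013, 9, 1), quality=5, timeliness=4, relevance=4),
]
print(project_scores(responses))  # {'quality': 4.5, 'timeliness': 3.5, 'relevance': 4.5}
```

Collecting a record like this at each deliverable, rather than once at project closeout, is what would let a multiyear project's feedback accumulate while course corrections are still possible.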
In September 2013, we also reported that S&T's BMD, the Coast Guard, and DNDO reported taking a range of actions to coordinate with one another and their customers to ensure that R&D addresses high-priority needs. Officials from BMD identified several ways in which the division coordinated R&D activities with its customers, which are primarily offices within CBP. For example, BMD officials reported having a person detailed to CBP's Office of Technology Innovation and Acquisition and identified its integrated product teams, such as its cross-border tunnel threat team, and jointly funded projects as ways in which the division worked to ensure its R&D efforts were coordinated with CBP. We also found that opportunities existed for DHS to enhance coordination with universities conducting R&D on its behalf. Specifically, we reported that the S&T Office of University Programs could help ensure that the approximately $3 million to $4 million a year dedicated to each university center is used more effectively by more carefully considering data needs, potential access issues, and potential data limitations with its federal partners before approving projects. We recommended that S&T ensure that design limitations with regard to data reliability, accessibility, and availability are reviewed and understood before Center of Excellence R&D projects are approved. S&T Office of University Programs officials concurred with the recommendation and discussed the variety of ways in which centers and DHS components collaborate and share information. Office of University Programs officials stated that the office's process for soliciting research topics and evaluating proposals is sound and keeps the centers flexible. However, officials from DHS's primary land border security Center of Excellence reported challenges with respect to a lack of clarity regarding protocols for access to DHS information when conducting R&D. Specifically, officials from this center reported that they had been regularly unable to obtain data from CBP to complete research the center was conducting on CBP's behalf, which resulted in delays and terminated R&D projects. Given the challenges raised by officials from universities leading the R&D for land border security, we recommended that S&T conduct a more rigorous review of potential data-related challenges and limitations at the start of a project in order to help R&D customers (such as CBP) identify data requirements and potential limitations up front so that money is not allocated to projects that potentially cannot be completed. In concurring with our recommendation, S&T Office of University Programs officials agreed that making sure their clients take additional steps to identify data requirements up front could help address these challenges and, following our review, had started taking steps to address the recommendation. For instance, in September 2013, the Office of University Programs reported that it was working to develop standard guidelines and protocols that would apply to all of its centers of excellence. These protocols were to describe how data sets must be modified to enable their use in open-source research formats.
In March 2014, the Office of University Programs and the National Center for Border Security and Immigration, a DHS S&T Center of Excellence, co-hosted a workshop to identify common problems the centers have in accessing data from DHS, understand DHS constraints in sharing data, and develop best practices for requesting and sharing data between the centers of excellence and DHS. We believe this is a step in the right direction and should move S&T closer toward meeting the intention of our recommendation. We will continue to monitor DHS's efforts in this area. Chairman Bucshon, Chairman Broun, Ranking Member Lipinski, Ranking Member Maffei, and members of the committee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement include Adam Hoffman, Assistant Director; Aditi Archer, and Charlotte Gamble. Francis Cook, Michele Fejfar, Emily Gunn, Richard Hung, Gary Malavenda, and Linda Miller also made contributions to this testimony.
Conducting R&D on technologies for detecting, preventing, and mitigating terrorist threats is vital to enhancing the security of the nation. Since its creation, DHS has spent billions of dollars researching and developing technologies used to support its missions, including securing the border and detecting nuclear material, among others. Within DHS, S&T conducts R&D and is responsible for coordinating it across the department. Other components also conduct R&D to support their respective missions. This statement discusses (1) how much DHS invests in R&D and the extent to which DHS has policies and guidance for defining and overseeing its R&D efforts across the department, (2) the extent to which R&D is coordinated across DHS, and (3) the results of DHS border and maritime security R&D efforts and the extent to which DHS has obtained feedback on these efforts. This statement is based on GAO's previously issued work from September 2012 to September 2013 and selected updates conducted in July 2014 on the status of GAO's prior recommendations. To conduct the updates, GAO reviewed agency documentation. In September 2012, GAO reported that the Department of Homeland Security (DHS) did not know the total amount its components invested in research and development (R&D) and did not have policies and guidance for defining R&D and overseeing R&D resources across the department. According to DHS, its Science and Technology Directorate (S&T), Domestic Nuclear Detection Office (DNDO), and Coast Guard were the only components that conducted R&D, and GAO found that these were the only components that reported budget authority, obligations, or outlays for R&D activities to the Office of Management and Budget. However, GAO identified an additional $255 million in R&D obligations made by other DHS components. At the time of GAO's review, DHS reported that it was difficult to identify all R&D investments across the department because DHS did not have a department-wide policy defining R&D or guidance directing components how to report all R&D activities. GAO recommended that DHS develop policies to assist components in better understanding how to report R&D activities and to better position DHS to determine R&D investments. DHS concurred with the recommendation and, as of July 2014, had updated its guidance to include a definition of R&D but had not yet determined the most effective path to guide R&D across the department. GAO will continue to monitor DHS's efforts to develop its approach for overseeing R&D at the department. GAO also reported in September 2012 that S&T had taken some steps to coordinate R&D efforts across DHS, but the department's R&D efforts were fragmented and overlapping, which increased the risk of unnecessary duplication. GAO recommended that DHS develop a policy defining roles and responsibilities for coordinating R&D and establish a mechanism to track all R&D projects to help DHS mitigate existing fragmentation and overlap and reduce the risk of unnecessary duplication. DHS concurred with the recommendation. As of July 2014, S&T has not developed new policy guidance but is conducting portfolio reviews across the department, as directed in committee reports accompanying the fiscal year 2013 DHS appropriations act, aimed at coordinating R&D activities. GAO will continue to monitor DHS's efforts to develop a policy to better coordinate and track R&D activities at the department.
In September 2013, GAO reported that DHS's border and maritime R&D components reported producing 97 R&D deliverables from fiscal years 2010 through 2012 at an estimated cost of $177 million. GAO found that the types of border and maritime R&D deliverables produced by S&T, the Coast Guard, and DNDO varied, and R&D customers GAO met with had mixed views on the impact of the deliverables. These deliverables included knowledge products and reports, technology prototypes, and software. For example, S&T developed prototype radar and video systems for use by the Border Patrol. However, GAO reported that S&T had not established timeframes for collecting and evaluating feedback on the extent to which deliverables met customers' needs. GAO recommended that S&T collect such feedback from its customers to better determine the usefulness and impact of its R&D projects and deliverables and to make better-informed decisions regarding future work. As of July 2014, DHS had taken steps to address this recommendation, including making plans to gather customer feedback. GAO will continue to monitor DHS's efforts in this area. In its prior reports, GAO recommended, among other things, that DHS develop policies and guidance for defining, overseeing, coordinating, and tracking R&D activities across the department and that S&T collect and evaluate feedback from its customers. DHS concurred with GAO's recommendations and has actions under way to address them.
The Gulf Coast hurricanes collectively represented the most costly natural disaster in recent U.S. history. As table 1 shows, the estimated property damage from these hurricanes exceeded $118 billion, nearly five times greater than the damage from the 1994 Northridge earthquake and more than two and one-half times greater than the damage from the 2004 Florida hurricanes. Hurricane Katrina was the first of these disasters, causing fatalities and damage in southern Florida in late August 2005 before striking the northern Gulf Coast region. This region received the brunt of the storm, including extensive damage and significant loss of life in Louisiana and Mississippi. Damage from Hurricane Katrina also extended into the Florida panhandle, Georgia, and Alabama and covered approximately 90,000 square miles—an area larger than Great Britain. Hurricane Rita was the next disaster to strike the Gulf Coast region, making landfall near the Texas and Louisiana border on September 24, 2005, and causing a wide swath of damage from eastern Texas to Alabama, flooding some areas in Louisiana that had already been impacted by Hurricane Katrina about 1 month earlier. Hurricane Wilma was the last of these disasters to strike the region, making landfall in southern Florida on October 24, 2005, and inflicting widespread damage across the state. The federal government provides funding and assistance after disasters through a variety of agencies and programs. Congress created FEMA to coordinate response and recovery efforts under presidential disaster declarations. FEMA works with other federal, state, and local agencies to assist victims after major disasters, and volunteer organizations such as the American Red Cross also participate in these efforts. Following a presidential disaster declaration, FEMA opens Disaster Recovery Centers where disaster victims can meet with representatives, obtain information about the recovery process, and register for federal disaster assistance. Victims may also register with FEMA by telephone or via FEMA's Internet site. FEMA provides housing assistance to disaster victims through the Individuals and Households Program (IHP). Under the IHP, FEMA can make grants available to repair or replace disaster-damaged housing that is not covered by insurance. However, the IHP is a minimal repair program that is designed to make the victim's home habitable and functional, not to restore the home to its predisaster condition. When disaster victims register for FEMA assistance, they are asked to provide their approximate household income. If an applicant's income exceeds certain thresholds, FEMA automatically refers the applicant to SBA's Disaster Loan Program. SBA's Disaster Loan Program is the primary federal program for funding long-range recovery for private sector, nonfarm disaster victims and the only form of SBA assistance not limited to small businesses. The Small Business Act authorizes SBA to make available the following two types of disaster loans:

Physical disaster loans—These loans are for permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property. They are available to homeowners, renters, businesses of all sizes, and nonprofit organizations. These loans are intended to repair or replace the disaster victim's damaged property to its predisaster condition.

Economic injury disaster loans—These loans provide small businesses with necessary working capital until normal operations resume after a disaster declaration.
They cover operating expenses the business could have paid had the disaster not occurred. The act restricts economic injury disaster loans to small businesses only. Under a presidential disaster declaration, SBA disaster assistance staff members secure space within FEMA-established Disaster Recovery Centers and begin meeting with victims to explain the agency's disaster loan process, issue loan applications, and, if requested, assist victims in completing applications. Figure 1 illustrates SBA's disaster loan process. During the application entry stage, SBA screens all incoming applications to determine if they are acceptable. In addition, SBA conducts a preliminary financial analysis of home loan applications to determine whether the applicant's income falls below the agency's minimum income thresholds or whether repayment ability is evident based on a review of the applicant's gross income and fixed debts. SBA declines home loan applicants who do not meet its minimum income requirements or demonstrate repayment ability. SBA also obtains a credit bureau report for business and home loan applicants, and SBA may decline an applicant based on information contained in the report. SBA refers to denials made during the application entry stage as preprocessing declines. SBA intended for these declines to eliminate delays in notifying applicants about loan denials. SBA will refer most home loan applicants denied a loan to FEMA for possible grant assistance under a presidential disaster declaration. After the application entry stage, applications move to the loss verification stage, and SBA staff members scan application documents into DCMS. During the loss verification stage, loss verifiers conduct on-site damage inspections for physical disaster loan applications to estimate the cost of restoring damaged property to its predisaster condition. Loss verifiers use tablet personal computers with software tailored to complete and submit reports electronically into DCMS. The verified loss becomes the basis for the loan amount. Once loss verification is complete, an application moves to the application processing stage, where loan officers check for duplication of benefits and assess the applicant's credit history and ability to obtain credit elsewhere. Loan officers also examine other applicant eligibility criteria, including compliance with child support obligations and history on other federal debt, such as student loans. Loan officers use a financial analysis tool within DCMS to determine if the applicant has the ability to repay the loan. As with preprocessing declines, SBA generally refers home loan applicants denied a loan in application processing to FEMA for possible grant assistance under presidential disaster declarations. For secured loans, legal staff members review the draft loan authorization and agreement for sufficiency of collateral instruments and other legal concerns. They also create a loan closing checklist—a list of the requirements necessary to generate the loan closing and other legal documents. Attorneys enter a legal concurrence into DCMS, which obligates the loan funds through an interface with SBA's accounting system. Legal support staff members prepare closing documents and mail them to the applicant or the nearest Disaster Recovery Center. Once the agency receives signed closing documents from the applicant, SBA can make a maximum initial disbursement, without collateral, of up to $10,000 for physical disaster loans and $5,000 for economic injury disaster loans.
SBA can make a maximum initial disbursement of up to $25,000 for physical disaster loans with collateral—preferably real estate. SBA generally makes subsequent disbursements on physical disaster loans based on the applicant's needs and how the applicant spent prior disbursements. DCMS replaced SBA's largely manual, paper-based loan process and its Automated Loan Control System (ALCS), which it had used since the early 1990s. ALCS enabled SBA to track the movement of paper loan application files from one stage of the process to another, but the manual loan process required the movement and storage of large volumes of paper. In December 1998, SBA began planning for a replacement disaster loan system. SBA purchased a commercial off-the-shelf package as the foundation for DCMS in 2003 and had the package customized. SBA intended for DCMS to help it move toward a paperless processing environment by automating many of the functions staff members had performed manually, such as obtaining FEMA referral data and credit bureau reports, as well as completing and submitting loss verification reports from remote locations. SBA began a phased implementation of DCMS in November 2004 at its former Niagara Falls Disaster Area Office (DAO). In January 2005, SBA began using DCMS to process loan applications for all new disaster declarations, and by March 2006, SBA had completed the migration of all data for disaster loan applications processed since 2000 from ALCS to DCMS. According to SBA, the cost of planning, acquiring, implementing, and operating DCMS totaled about $32 million through April 2006. See appendix II for a more detailed discussion of SBA's acquisition and implementation of DCMS. We identified several factors that affected SBA's ability to provide timely disaster assistance, including a volume of applications that exceeded that of any previous disaster. In addition, although DCMS allowed SBA to streamline the disaster loan process, SBA focused only on its historical experience and did not consider the possibility of a single disaster or series of disasters of the magnitude of the Gulf Coast hurricanes when planning the system's maximum user capacity requirements. SBA's limited planning contributed to insufficient DCMS user capacity, which restricted the number of staff that could access the system and process the large volume of applications in a timely manner. Further, SBA did not completely stress test DCMS before implementation to help ensure that it could function at its maximum user capacity and thus did not detect that its hosting contractor had installed the wrong processors and that the system could not support planned capacity. As a result of these and other processing-related factors, SBA experienced significant backlogs and delays in processing applications. Overall, SBA processed disaster loan applications in 74 days, on average, as of May 27, 2006, compared with its goal of within 21 days. According to SBA officials, the large volume of disaster loan applications it processed for victims of the Gulf Coast hurricanes was a significant challenge. The volume of applications associated with these hurricanes greatly exceeded that of any disaster in SBA's history. As table 2 shows, as of May 27, 2006, SBA had issued more than 2.1 million applications to victims affected by the Gulf Coast hurricanes. This represented almost four times as many applications as SBA issued to victims of the Northridge earthquake—the single largest disaster SBA had previously faced.
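The strain such a surge places on a staged pipeline is essentially a queueing problem: whenever daily arrivals at a stage exceed that stage's daily throughput, a backlog accumulates there. The following minimal sketch illustrates the dynamic using the three front-end stages of SBA's process described above; the arrival and capacity figures are invented for illustration and are not SBA's actual throughput.

```python
# Stage names follow the SBA process described above; arrival and
# capacity figures are invented for illustration.
STAGES = ["application entry", "loss verification", "application processing"]

def simulate(days, arrivals, capacity):
    """Push daily arrivals through the pipeline; each stage clears at
    most its daily capacity, and completed work flows downstream the
    same day. Returns the backlog at each stage at the end of the run."""
    backlog = {stage: 0 for stage in STAGES}
    for day in range(days):
        incoming = arrivals(day)
        for stage in STAGES:
            backlog[stage] += incoming
            processed = min(backlog[stage], capacity[stage])
            backlog[stage] -= processed
            incoming = processed  # feeds the next stage
    return backlog

def surge(day):
    """Hypothetical arrivals: 3,000 applications a day for 90 days, then 500."""
    return 3000 if day < 90 else 500

daily_capacity = {
    "application entry": 2500,
    "loss verification": 1500,
    "application processing": 1200,
}

# Backlogs at the end of the 90-day surge period.
print(simulate(90, surge, daily_capacity))
```

In this toy run, the deepest backlog forms at the stage whose inflow most exceeds its capacity, which is broadly the pattern described later in this report for DCMS's loss verification and application processing queues.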
In addition, our analysis showed that SBA received a large influx of applications during the initial months following Hurricane Katrina—at the same time that SBA hired and trained a large number of temporary staff to process applications received from victims of the disasters. Specifically, SBA received about 280,000 applications during the first 3 months following Hurricane Katrina, approximately 30,000 more applications than SBA received over a period of about 1 year from victims of the Northridge earthquake. SBA officials told us that the large volume of applications it mailed and received resulted in part from the large number of referrals FEMA made to SBA's Disaster Loan Program without applying SBA's income thresholds, specifically for disaster victims who registered for disaster assistance via FEMA's Internet site and did not report any income. According to a FEMA official, disaster victims who register via FEMA's Internet site can select the "Income Unavailable/Refused" option if they do not wish to or cannot provide their income. The official stated that these individuals are advised that selecting this option will result in an SBA referral. The FEMA official also stated that, per an SBA request, FEMA refers all applicants who claim self-employment as their primary source of income to SBA's Disaster Loan Program, regardless of their income, because the income tests are not a valid measure of repayment ability for self-employed applicants. In both cases, FEMA's registration system automatically enters $0 as the disaster victim's income and refers these individuals to SBA's Disaster Loan Program. The FEMA official stated that about 17 percent of the individuals referred to SBA for Hurricanes Katrina and Rita refused to provide their income, and another 17 percent indicated that they were self-employed. SBA officials referred to these cases as "$0 income" referrals. In February 2006, SBA's Office of Inspector General issued an advisory memorandum stating that many $0 income referrals ultimately failed SBA's criteria for disaster loan eligibility and were processed as declines. SBA's Office of Inspector General added that these referrals impacted SBA by increasing the cost incurred in mailing loan applications to disaster victims who normally would not be referred to SBA's Disaster Loan Program; delaying response times for those applicants who did qualify for SBA assistance; lowering SBA's disaster loan approval rates; and increasing the transaction flow through DCMS, which was near maximum capacity. SBA's Office of Inspector General recommended that SBA improve its screening processes within DCMS when processing $0 income referrals and work with FEMA to reduce unnecessary online disaster referrals. In commenting on a draft of the advisory memorandum, SBA agreed that it should work with FEMA to improve their joint screening process prior to referring victims and issuing SBA disaster loan applications. DCMS provided SBA with a number of benefits compared with its previous system, such as the capability to complete loss verification reports and other processing-related tasks electronically. However, SBA planned DCMS's maximum user capacity based solely on the volume of applications it received from victims of the Northridge earthquake and its other historical data; it did not consider the information available from catastrophe risk modeling firms or disaster simulations, such as the likelihood and severity of damages from potential catastrophes.
Although agencies are not specifically required to consider such information in developing their systems' user capacity requirements, this information could have helped SBA predict the volume of loan applications to expect and the user capacity needed to process such a volume. SBA officials acknowledged that they could have considered this information in planning DCMS's user capacity requirements but lacked the funding to do so. SBA's limited planning and other system- and processing-related issues diminished the agency's ability to provide disaster assistance in a timely manner. Many insurance companies and government agencies currently use computer programs offered by several modeling firms to estimate the financial consequences of various natural catastrophe scenarios. Risk modeling firms, which have existed since the late 1980s, rely on sophisticated mathematical modeling techniques and large databases containing information on past catastrophes, population densities, construction techniques, and other relevant information to assess the severity of potential catastrophes so that other organizations can plan accordingly. For example, one modeling firm recently estimated that 1.5 million people were vulnerable to an earthquake on the San Andreas Fault in the San Francisco area and that an earthquake similar to the 1906 earthquake would cause an estimated $260 billion in damage to residential and commercial properties. This study also noted that the U.S. Geological Survey estimated that there was a 21 percent probability of a major earthquake on this fault occurring before 2032. Another modeling firm's study of a strong hurricane striking the densely populated Northeast region estimated that this event could cause more than $200 billion in economic losses, including significant damage from flooding to properties and infrastructure in lower Manhattan and on Long Island. While SBA would not use this information the way insurance companies do to assess the financial consequences of potential disasters, catastrophe risk modeling firms provide important information on the severity of damages from such events. This information could be helpful in estimating the potential number of loan applications that SBA could receive for processing and the concurrent user capacity necessary to process such applications in a timely manner if such an event were to occur. Government agencies and other organizations also participate in disaster simulation exercises to prepare their responses to natural disasters. While SBA would not use this disaster simulation information to plan a response to victims' immediate needs, the estimated number of buildings damaged and the number of people evacuated provide important information that can be considered in planning the user capacity of a disaster loan system. For example, FEMA brought together numerous officials from local, state, federal, and volunteer organizations to conduct an exercise referred to as Hurricane Pam in July 2004. This exercise used realistic weather and damage information developed by the National Weather Service, the U.S. Army Corps of Engineers, the Louisiana State University Hurricane Center, and other state and federal agencies to help officials develop joint response plans for a catastrophic hurricane in Louisiana. This fictional hurricane brought sustained winds of 120 miles per hour, up to 20 inches of rain in parts of southeast Louisiana, and storm surge that topped levees in the New Orleans area.
Hurricane Pam, as projected, destroyed between 500,000 and 600,000 buildings and forced the evacuation of more than 1 million residents from the New Orleans area. In planning the maximum user capacity for DCMS, SBA relied solely on the volume of applications it received from victims of the Northridge earthquake and its other historical data, such as the average number of applications processed over the previous 5 years. SBA did not plan for the likelihood of a single disaster or series of disasters of the magnitude of the Gulf Coast hurricanes. If SBA had considered the information available from catastrophe risk modeling firms or disaster simulations, such as the likelihood and potential damages of catastrophic events, to help it predict the volume of loan applications that might be expected and the user capacity needed to process this volume, the agency might have acquired additional capacity that would have enabled it to reduce its backlog of applications sooner. SBA's limited planning contributed to insufficient DCMS user capacity, which restricted the number of staff who could access the system and process the large volume of applications in a timely manner. SBA experienced instability with DCMS during the initial months following Hurricane Katrina, as users encountered outages, difficulties connecting to the system, and slow response times in completing loan processing tasks. For example, our review of DCMS system logs showed that between September and December 2005, SBA experienced 19 incidents in which DCMS was not available to all system users because of an unscheduled outage and 26 incidents in which DCMS was not available to various units because of an unscheduled outage. SBA officials told us that the longest period of time DCMS was unavailable to users because of an unscheduled outage was 1 business day. These unscheduled outages and other system-related issues slowed productivity and affected SBA's ability to provide timely disaster assistance; however, we could not determine the specific impact on the agency's time frames for processing disaster loan applications received from Gulf Coast hurricane victims. According to SBA officials, ineffective technical support contributed to the system instability experienced by users, as the hosting contractor did not properly monitor the DCMS network as contractually required and did not make the agency aware of incidents that could make the system unstable before DCMS users were affected. In addition, SBA officials told us that the hosting contractor did not provide the agency with the correct computer hardware for DCMS as contractually required, which further contributed to the instability users initially experienced with the system and reduced processing power by about one-third. Specifically, in developing DCMS, SBA planned for a maximum capacity of 1,500 concurrent users. SBA officials told us that, in September 2005, they discovered that DCMS was operating near 100 percent capacity before the agency had reached its maximum user capacity. At that time, SBA discovered that the hosting contractor had not provided the agency with the correct computer hardware required per its contract to support 1,500 concurrent users. However, SBA had not verified that its hosting contractor provided the agency with the correct computer hardware specified in its contract.
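Complete stress testing, discussed below, is what would have exposed this shortfall before go-live: ramping concurrency all the way to the planned 1,500-user ceiling rather than stopping partway. The following minimal sketch models such a ramp; the latency curve, acceptance threshold, and saturation point are invented stand-ins for measurements a real test would take against the live system.

```python
# Hypothetical saturation point: undersized hardware that delivers only
# about two-thirds of the planned 1,500-user capacity.
CEILING = 1000

def modeled_latency(users):
    """Mean response time (seconds) at a given concurrency level.
    Invented curve: flat until the ceiling, then climbing steeply,
    standing in for measurements against the real system."""
    base = 0.5
    penalty = max(0, users - CEILING) * 0.02
    return base + penalty

def ramp_test(max_users=1500, step=150, threshold=1.0):
    """Walk concurrency up to the planned maximum and report the last
    level whose latency met the acceptance threshold. This is the
    question complete stress testing answers before implementation."""
    sustained = 0
    for users in range(step, max_users + 1, step):
        if modeled_latency(users) > threshold:
            break  # system saturated below the planned ceiling
        sustained = users
    return sustained

print("sustained concurrent users:", ramp_test())  # prints 900, not 1,500
```

Run against real hardware, a ramp like this would have reported a sustainable concurrency well below the planned maximum, flagging the undersized processors before implementation.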
Federal procurement policies require agencies to have trained and experienced officials available to judge whether contractors are performing according to contract terms and conditions, particularly when contracting for highly specialized or technical services. In addition, SBA's internal procurement procedures require that the agency inspect each item or service provided under a contract, report capital equipment acquisitions, including computer equipment, immediately, and provide a serial number for capital equipment acquisitions for tracking purposes. SBA officials did not have an explanation for why the agency did not verify that the hosting contractor provided the correct computer hardware. If SBA had verified this equipment as required, the agency might have discovered this issue prior to the Gulf Coast hurricanes and been able to take the appropriate corrective action. Prior to implementation, SBA did not completely stress test DCMS to ensure that the system could operate effectively at maximum capacity, which contributed to the initial system instability SBA experienced. In 2003, SBA began testing various aspects of DCMS, including the core application interfaces and additional components such as loss verification and scanning. Although SBA conducted performance testing for DCMS, we found that the agency stress tested the system only up to 120 concurrent users because of hardware limitations in the testing environment. The testing environment simulated an increasing number of concurrent users and exercised different functional scenarios, but the hardware used in the simulation reached its capacity earlier than anticipated. Even if the testing environment had functioned as planned, an estimate showed that DCMS could accommodate only approximately 600 concurrent users at that time—significantly fewer than the system's planned maximum capacity of 1,500. According to leading information technology organizations, to be effective, practices for testing software should be planned and conducted in a structured and disciplined environment. Typically, this involves testing increasingly larger increments of a system until the complete system and all of its functionality are tested and accepted. It also involves stress testing and fully demonstrating the effectiveness and accuracy of the system. Additionally, SBA's internal systems development manual requires that the agency determine testing and acceptance criteria that must be met for a system to be accepted as "fit for use" by the user or sponsoring organization and requires user or sponsoring organization approval of all acceptance criteria. Further, the manual identifies how acceptance testing is to be conducted and reported to determine whether the system meets its requirements upon completion of its development. In doing limited stress testing of DCMS, SBA did not completely follow its own requirements or industry best practices for systems testing. When these requirements are not met, there is a risk that the implemented system will not meet its requirements. If SBA had conducted complete stress testing, the agency might have detected that it did not receive the correct equipment and had an opportunity to address this issue before implementing DCMS. Because of the unpredictable nature of disasters and the cost of maintaining staff it might not need, SBA hires and trains a large number of temporary staff to help process loan applications following any large-scale disaster, such as the Gulf Coast hurricanes.
SBA also has a disaster reserve corps, a group of experienced individuals who have worked with the agency in responding to previous disasters and are trained in its disaster loan process. SBA officials told us that it generally took approximately 30 days for loan officers without prior SBA experience to become fully productive. This slows processing during the initial months following a disaster, as loan officers become familiar with SBA's disaster loan process and DCMS. In response to the Gulf Coast hurricanes, SBA also had to secure additional space and equipment to support loan processing. According to SBA officials, this process took approximately 30 to 60 days. As figure 2 shows, as the average number of loan processing staff increased, SBA generally processed more applications than it did during the first 2 months following Hurricane Katrina. Because SBA normally relies on temporary staff to help process loan applications after large disasters, it might be unrealistic to expect the agency to process a large volume of applications quickly during the initial period following such disasters. The geographic dispersion of disaster victims—in particular, victims of Hurricanes Katrina and Rita—also affected SBA's ability to provide timely disaster assistance. Figure 3 illustrates the locations of displaced applicants affected by these disasters who registered for FEMA Individual Assistance (IA). These applicants relocated to all 50 states, with the largest concentrations in Louisiana, Mississippi, and Texas. SBA officials told us that FEMA referred many of these applicants to its Disaster Loan Program and that their widespread geographic dispersion made it more challenging to provide timely disaster assistance. Loan officers we met with also told us that contacting applicants to discuss the status of their loan applications was difficult in some cases—particularly during the initial months following the disasters, as some applicants had moved or changed employment several times since applying for disaster assistance. Thus, SBA did not always have an applicant's most current information, which slowed the processing of the application. Our analysis showed that it took SBA several months to significantly reduce the backlog of applications that developed in various stages of its disaster loan process because of the large volume of applications, limited planning for DCMS, and other processing-related challenges. For example, SBA did not clear the backlog in the application entry stage until nearly 3 months after Hurricane Katrina. SBA nearly cleared the backlog in the loss verification stage 8 months after the disaster, when the backlog was reduced to fewer than 1,800 applications. However, at that time, SBA still needed to complete loan processing for about 25,000 applications. As figure 4 shows, SBA's backlogs in the loss verification and application processing stages increased significantly during the first 3 months following Hurricane Katrina as SBA began receiving a large volume of applications from victims of the other hurricanes. Combined, these backlogs peaked at more than 204,000 applications in late December 2005. Figure 4 also shows that, individually, SBA's backlog in the loss verification stage peaked at almost 129,200 applications about 3 months following Hurricane Katrina, and the backlog in the application processing stage peaked at more than 121,700 applications nearly 6 months after the disaster.
As a result of the backlogs, victims of the Gulf Coast hurricanes waited about 74 days, on average, for SBA to process their loan applications, compared with the agency's goal of within 21 days. Figure 5 shows SBA's average processing time frames for approval and decline decisions made between mid-October 2005 and May 2006 compared with its goal of within 21 days. Although SBA began to reduce the total backlog in loss verification and application processing in late December 2005, average processing time frames for approval and decline decisions generally increased through May 2006 because of the average age of applications in the backlog. For example, SBA reduced the backlog in application processing to fewer than 4,500 applications by late May 2006; however, average processing time frames were still significantly higher than its goal because loan applications had been in the application processing queue for a long time—about 63 days on average. SBA's processing average for approvals does not include the additional time needed for loan closings and initial disbursements. For example, as of May 27, 2006, SBA received signed closing documents from borrowers about 35 days, on average, after making the approval. According to SBA officials, delays in closing loans were mostly the result of factors beyond their control. For example, SBA officials stated that they scheduled loan closings at the convenience of the borrower. These officials added that because of the displacement of Gulf Coast hurricane victims, SBA had closed about 50 percent of disaster loans by mail, a higher percentage than in previous disasters, and closings by mail generally take more time than closings done in person. SBA officials also stated that a significant number of disaster victims had not returned to the affected area and had expressed uncertainty about rebuilding their homes and businesses. As a result, these victims had been reluctant to quickly close on their loans. SBA's disaster lending procedures generally require applicants to close loans within 60 days of the date on the loan authorization and agreement. These procedures also allow SBA to accept loan closing documents after 60 days on a discretionary basis. SBA officials told us they had allowed Gulf Coast hurricane victims additional time to determine whether they really wanted the loan. To facilitate loan closings, SBA officials also told us they used staff to conduct follow-up calls with borrowers after closing documents were mailed. In addition, our analysis of an SBA data extract showed that the agency made an initial disbursement for approved loans, on average, about 9 days after the receipt of closing documents. As of May 27, 2006—9 months after Hurricane Katrina—SBA had disbursed about $1.4 billion, or 14 percent, of the $9.7 billion in approved loan dollars. As of the same date, about 73,000 approved loans had not been fully disbursed to disaster victims. As with loan closings, SBA officials stated that the length of time it took to disburse disaster loans was primarily determined by the borrower. SBA's disaster lending procedures require borrowers to arrange for and obtain all loan funds within 12 months from the date of the loan agreement. However, SBA officials told us that it might be difficult for some disaster victims to meet this requirement. In our subsequent report on SBA's response to the Gulf Coast hurricanes, we plan to discuss the perspectives of disaster victims on the disaster loan process.
Although SBA took several actions after the Gulf Coast hurricanes to improve its response to disaster victims, our analysis showed that some of these actions were more successful at reducing the backlog of loan applications than others. For example, SBA increased the number of concurrent users that could access DCMS by acquiring additional computer hardware, and it added a second work shift for loan processing staff to better balance the system's workload. In addition, SBA initiatives to relax filing requirements for applicants whose business records were destroyed and to establish a satellite office to process disaster loans at its former Sacramento DAO allowed SBA to improve its response to disaster victims. However, SBA did not experience as much success with its initiatives to expedite small business financing in communities affected by the disasters and to use private sector banks to process disaster loan applications. As a result, some of SBA's initiatives did not significantly reduce the backlog of loan applications or the time victims waited for SBA to process their disaster loan applications. As previously discussed, SBA initially experienced instability and other issues with DCMS. However, the agency took actions to address these issues. In October 2005, SBA obtained the computer hardware, as agreed to with its contractor, that increased DCMS's capacity to about 2,000 concurrent users. SBA also obtained additional support from its hosting contractor, at no additional cost, to ensure adequate monitoring of the DCMS network. By November 2005, because DCMS continued to operate near its maximum capacity, SBA added a second shift for loan processing staff at its Fort Worth processing facility to better balance DCMS's workload. According to SBA officials, DCMS had been stable since January 2006, and users reported having a greater comfort level and more success in processing applications using the system. The officials added that the hosting contractor had provided better oversight of DCMS compared with the initial months following Hurricane Katrina. In April 2006, SBA officials advised us that the agency had not made any payments to its hosting contractor since August 2005 because the contractor did not satisfy contract requirements, and negotiations were under way to determine the amount of any subsequent payments. In preparation for the 2006 hurricane season, SBA awarded a new contract in April 2006 for up to $54 million to its integration contractor to provide project management and information technology support for DCMS over the next 5 years. Under the contract, this contractor will continue to upgrade the system to support increased loan processing activity by implementing software changes and hardware upgrades, providing ongoing support to DCMS users, and supporting all information technology operations associated with the system. In addition, SBA has plans to increase DCMS's maximum user capacity to at least 8,000 concurrent users by the summer of 2006. However, we could not determine how SBA selected this number or whether the agency considered the information available from catastrophe risk modeling firms or disaster simulations in determining the planned increase in maximum user capacity. To facilitate this planned capacity increase, SBA added to and extended the contract with its hosting contractor in February 2006.
Although SBA had experienced problems with the initial oversight provided by this hosting contractor, according to agency officials, the contractor's performance had improved. For example, the contractor had dedicated a project manager to this effort. Because of these improvements and the contractor's familiarity with SBA's needs, agency officials decided that the contractor could provide a hardware solution for the expanded capacity within the agency's time frames. After the Gulf Coast hurricanes, SBA made several changes to its disaster loan process and implemented other initiatives intended to improve its response to victims. While some of these initiatives improved SBA's ability to process large numbers of disaster loan applications, others did not. For example, in October 2005, SBA established a satellite office to process disaster loans at its former Sacramento DAO. SBA increased the number of loan processing staff in this Sacramento satellite office from approximately 40 in late August 2005 to more than 250 by February 2006. According to SBA, 8 months after Hurricane Katrina, the Sacramento satellite office had processed about 95,500 home and 4,800 business applications through DCMS for Gulf Coast hurricane victims. Table 3 describes other SBA changes or initiatives that improved its response to disaster victims by making the application process easier or by referring some applicants to FEMA for grant assistance sooner. In contrast to these actions, SBA implemented other initiatives that had more limited success. For example, in November 2005, SBA implemented the GO Loan Program, intended to expedite small business financing for communities severely impacted by Hurricanes Katrina and Rita. This program provided an 85 percent guaranty to qualified lending partners, such as banks, that agreed to make expedited loans of up to $150,000 available under the agency's 7(a) loan program to small businesses located in communities affected by the disasters. Under the GO Loan Program, small businesses applied directly to qualified lenders, who evaluated their creditworthiness and determined if they required an SBA guaranty to make the loan. SBA agreed to decide within 24 hours whether to apply a guaranty to the loan. While SBA prescribed the maximum interest rate lenders could charge, the lender and borrower negotiated the actual rate. For loans of $50,000 or less, lenders could charge a maximum interest rate of 6.5 percentage points over the prime rate; for loans over $50,000, the maximum was 4.5 percentage points over the prime rate. Thus, lenders could charge disaster victims interest rates that were significantly higher under the GO Loan Program than the rates SBA charged under the Disaster Loan Program. For example, a disaster victim applying for a $60,000 GO Loan could have been charged an interest rate of up to 11.5 percent in November 2005, when the prime rate was 7 percent. In contrast, a business owner not able to obtain credit elsewhere would have received a 4 percent rate under the Disaster Loan Program. SBA guaranteed only 222 GO Loans, totaling $19 million, through May 2006. The higher interest rates lenders could charge under the GO Loan Program made these loans less attractive than SBA disaster loans and likely contributed to the small number of loans made under the program. In December 2005, SBA implemented a pilot program to expedite the processing of disaster loan applications.
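As an aside on the GO Loan rate caps just described, a small worked example encodes the two caps; the sample loan amounts and the November 2005 prime rate of 7 percent come from the text above, and the function name is invented for illustration.

```python
def max_go_loan_rate(amount, prime_rate):
    """Maximum GO Loan interest rate (percent) a lender could charge,
    per the caps described above: prime + 6.5 percentage points for
    loans of $50,000 or less, prime + 4.5 points for larger loans."""
    spread = 6.5 if amount <= 50_000 else 4.5
    return prime_rate + spread

prime_nov_2005 = 7.0  # prime rate cited in the text for November 2005

# A $60,000 GO Loan could carry up to 11.5 percent, versus the 4 percent
# disaster loan rate for a borrower unable to obtain credit elsewhere.
print(max_go_loan_rate(60_000, prime_nov_2005))  # 11.5
print(max_go_loan_rate(40_000, prime_nov_2005))  # 13.5
```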
Under the December 2005 pilot program, DCMS made automatic approval recommendations for applicants whose credit scores indicated that they were less likely to default on a loan, and loan officers did not have to conduct the lengthy repayment analysis for these applications. According to SBA, loan officers processed 8 to 10 home loan applications per day, on average, under the pilot program—about twice as many applications as under the normal process. However, under the program, loan officers did not review DCMS-generated approval recommendations until after the loss verification stage. In addition, when SBA implemented the pilot program, the agency faced a significant backlog of 115,000 applications in the loss verification stage, and these applications had been in the queue for 39 days on average. As a result, SBA's data showed that the agency actually took longer to process expedited approvals compared with its average processing time frames for all approvals. Specifically, SBA processed expedited approvals in about 104 days, on average, between December 2005 and April 2006, compared with 94 days for all approvals processed through the end of April 2006. If SBA had implemented this initiative sooner, when the backlog in loss verification was not so large, or if the agency had implemented an expedited loss verification process for these applications, the pilot program might have been more effective in reducing the amount of time disaster victims waited for a decision on their applications. Table 4 describes other SBA actions or initiatives that did not significantly reduce the backlog of loan applications because they were either not implemented in a timely manner or did not fully incorporate the use of DCMS to process applications. DCMS provided SBA with opportunities to move toward a paperless processing environment by automating many of the functions the agency previously performed manually, such as obtaining FEMA referral data and credit bureau reports as well as completing and submitting loss verification reports from remote locations. SBA officials also told us that DCMS improved the agency's ability to process disaster loans and that SBA would have experienced even greater processing delays using its previous system and loan process. However, during our review we found other potential opportunities that might help SBA process loans more efficiently and move closer to its goal of processing loan applications within 21 days. For example, SBA may be able to increase the efficiency of its application entry process by implementing a secure Internet-based application feature for home loan applicants. Currently, SBA accepts only paper loan application documents from disaster victims, and data-entry staff manually input application data into DCMS. According to the Direct Loan Systems Requirements issued by the Joint Financial Management Improvement Program, federal agency loan systems "should provide for an electronic application process using various media, such as a secure Internet application." SBA could reduce the number of paper application documents it receives, the number of documents it subsequently scans into DCMS, and the resources and time required to input application data by capturing much of this information electronically. According to SBA officials, DCMS has the capability to interface with a secure Internet-based application feature through which these data could be captured electronically.
However, SBA did not attempt to add this functionality after the Gulf Coast hurricanes because of the instability it initially experienced with DCMS. SBA officials added that the agency concentrated its efforts on expanding the capacity of DCMS and would examine adding this functionality to the system in the future.

SBA officials told us that, prior to the Gulf Coast hurricanes, the agency initiated a business process reengineering effort within ODA to reevaluate the disaster loan process. As part of this effort, ODA planned to (1) determine what type of financial analysis would be performed for applicants with credit scores indicating a high degree of default risk, (2) design a streamlined loan application (both paper and electronic), and (3) identify policy and legislative changes required to implement the new process. However, ODA postponed this effort after the Gulf Coast hurricanes because of the resources needed to meet the demands of the disaster loan program. Business process reengineering can help organizations identify, analyze, and redesign their core business processes with the aim of achieving dramatic improvements in critical performance measures such as cost, quality, service, and speed. According to SBA officials, the agency plans to resume this effort in 2006 to identify ways to process disaster loan applications more efficiently and to maximize the benefits of DCMS.

The Gulf Coast hurricanes presented SBA with unprecedented challenges that, in combination, led to significant backlogs and delays in processing disaster loan applications. For example, SBA faced the largest volume of disaster loan applications in its history, as the United States experienced three extremely destructive natural disasters over a period of about 2 months. This large volume was due in part to the large number of applicants automatically referred to SBA by FEMA's Internet site, many of whom ultimately did not qualify for disaster loans. We also agree that SBA should improve its screening process within DCMS when processing "$0 income" referrals and continue to work with FEMA to reduce unnecessary online disaster referrals, as recommended by SBA's Office of Inspector General.

In addition, various system and processing-related issues also challenged SBA, such as a new disaster loan system that was not designed to respond effectively to a disaster of this magnitude and that was unable to operate at the planned maximum capacity. Moreover, SBA based the maximum number of concurrent users for DCMS solely on its historical experience rather than considering information available from catastrophe risk modeling firms and disaster simulations, such as the likelihood and severity of damages from potential catastrophes, to help predict the volume of applications that it might expect from such events. While SBA plans to greatly expand DCMS's concurrent user capacity and should be more capable of processing larger volumes of loan applications once it achieves this increase, it is not clear how the agency determined the new maximum number of concurrent users or whether this new capacity will be appropriate for future large-scale disasters like the Gulf Coast hurricanes.
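For illustration, catastrophe-model output could feed a simple sizing calculation like the Python sketch below; every number and the sizing rule are hypothetical planning assumptions, not figures from SBA or any modeling firm:

```python
def required_concurrent_users(expected_applications, surge_days,
                              apps_per_user_per_day):
    """Back-of-the-envelope sizing: simultaneous system users needed
    to clear an expected application volume within a target surge
    window. All inputs are planning assumptions."""
    total_user_days = expected_applications / apps_per_user_per_day
    return total_user_days / surge_days

# Hypothetical scenario: a catastrophe model projects 400,000
# applications; the goal is to work them off in a 90-day surge at
# 6 applications per user per day.
print(round(required_concurrent_users(400_000, 90, 6)))  # about 741
```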
If SBA had considered information available from catastrophe risk modeling firms and disaster simulations to help predict the volume of loan applications it could expect to receive, it could have made better informed decisions and might have acquired additional capacity that would have enabled it to reduce the backlog of applications more quickly. Such an analysis would also better position SBA to determine its loan processing capacity for future disasters. SBA's limited planning was further exacerbated by the lack of complete stress testing and the ineffective technical support provided by the hosting contractor. If SBA had appropriately stress tested the system before implementation, it might have discovered before the Gulf Coast hurricanes struck that it had received the incorrect computer hardware. Going forward, SBA would benefit from improving its process for verifying that the equipment provided by contractors meets all required specifications.

While some of SBA's initiatives improved its response to disaster victims, other efforts did not help the agency significantly reduce the large backlog of applications because they were not implemented in a timely manner, were not attractive to applicants, or did not fully incorporate the use of DCMS to process applications. If some of these initiatives had been implemented soon after the Gulf Coast hurricanes struck, they might have enhanced SBA's ability to process a large volume of loan applications in a timely manner. In addition, DCMS has the capability to interface with an Internet-based application feature that could reduce the resources and time required to input application data for home loan applicants by capturing much of this information electronically. Because the 2006 Atlantic hurricane season has already begun, SBA would benefit from expediting its plans to resume its business process reengineering efforts to analyze ways to process loan applications more efficiently, including an evaluation of implementing an Internet-based application feature.

In order to provide more timely disaster assistance in the future, we recommend that the Administrator of SBA direct the Office of Disaster Assistance to take the following four actions: (1) reassess DCMS's maximum user capacity and related loan processing resource needs based on such things as lessons learned from the Gulf Coast hurricanes, a review of information available from catastrophe risk modeling firms and disaster simulations, and related cost considerations; (2) conduct complete stress testing to ensure that DCMS can function at planned-for maximum user capacity levels; (3) improve management controls over assessing contractor performance through inspections of all equipment purchased or leased to support DCMS; and (4) expedite plans to resume business process reengineering efforts to analyze the disaster loan process and identify ways to process loan applications more efficiently, including an evaluation of the feasibility of implementing a secure Internet-based application feature for home loan applicants.

We provided SBA with a draft of this report for review and comment. The Associate Administrator for Disaster Assistance provided written comments that are presented in appendix III. In these comments, SBA provided additional context regarding the magnitude of the disasters and the impact on the Disaster Loan Program. SBA stated that it generally agreed with our recommendations and intended to improve the delivery of the Disaster Loan Program for events of all sizes.
However, SBA disagreed with the following four findings and conclusions in our draft.

First, SBA disagreed with our conclusions that it performed limited planning and that it would have been better prepared to reduce the backlog of applications through the use of catastrophe risk models rather than relying primarily on the Northridge earthquake to establish its capacity needs. As we noted in our report, SBA planned the maximum user capacity for DCMS based on the volume of applications it received from victims of the Northridge earthquake—the single largest disaster SBA had previously faced—and did not anticipate the likelihood of a single disaster or series of disasters of the magnitude of the Gulf Coast hurricanes. We continue to believe that catastrophe risk modeling firms and disaster simulations provide critical information, such as the likelihood and severity of damages from potential catastrophes. Combined with other elements of a comprehensive planning process, such information would have been useful in planning the maximum user capacity of DCMS. If SBA had considered this information, the agency might have concluded that the likelihood of large-scale disasters exceeding the magnitude of the Northridge earthquake was significant enough to expand its maximum concurrent user requirement. This additional capacity would have better prepared SBA to reduce the backlog of loan applications more rapidly because additional staff in all phases of the loan application process would have been able to access DCMS.

Second, SBA stated in its comments that our draft report did not include an analysis of the difference between using DCMS and ALCS—SBA's previous system. SBA also highlighted in its comment letter many of the benefits offered by DCMS. While it was not in the scope of our work to conduct a comparative analysis of ALCS and DCMS, our report recognized some of the benefits realized by adopting DCMS. For example, we noted that ALCS tracked the movement of paper loan application files from one stage of the loan process to another and required the movement and storage of large volumes of paper. We also noted that DCMS helped SBA move toward a paperless processing environment by automating many of the functions staff members had performed manually using ALCS, such as obtaining FEMA referral data and credit bureau reports, as well as completing and submitting loss verification reports from remote locations.

Third, SBA stated that the draft report did not indicate that the specific computer components the hosting contractor incorrectly provided were processing chips embedded as subcomponents of the computer servers, which SBA personnel could detect only by opening and dismantling the computer hardware. We agree that the hardware was embedded in the computer servers and could have been verified by physical inspection; SBA conducted such an inspection in September 2005. However, alternative ways of verifying the computer hardware were possible. For example, SBA staff could have used system utilities to view details of the hardware and operating system after the processors were installed and might have detected the incorrect processors and taken corrective action sooner.
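For illustration, the kind of software-based check we describe can be performed with standard utilities; the minimal Python sketch below (illustrative only, using only the standard library) prints processor details that could be compared against contract specifications:

```python
import platform

def report_hardware():
    """Print processor and operating system details that can be
    compared against the hardware specifications in the contract."""
    print("machine:  ", platform.machine())
    print("processor:", platform.processor())
    print("system:   ", platform.system(), platform.release())
    # On Linux servers, /proc/cpuinfo lists the exact model of every
    # installed processor.
    try:
        with open("/proc/cpuinfo") as f:
            models = {line.split(":", 1)[1].strip()
                      for line in f if line.startswith("model name")}
        print("CPU models:", ", ".join(sorted(models)))
    except OSError:
        pass  # not a Linux system

if __name__ == "__main__":
    report_hardware()
```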
Finally, SBA took issue with our finding that it actually took longer to process expedited approvals under the pilot program than its average processing time frames for all approvals. SBA stated that our interpretation of the data was misleading because it did not adjust for the length of time an application was in the loss verification inventory before being assigned to the loan department for processing. We disagree that our interpretation of the data was misleading, because all physical disaster loan applications had to go through loss verification before a decision was made, regardless of whether the application was part of the expedited pilot program. While the expedited approval pilot program may have reduced the amount of time for loan officers to complete the underwriting decision, our intent, consistent with our overall objective, was to show the total time disaster victims waited for SBA to make a decision on their application. This includes the time an application is in other stages of the disaster loan process, such as application entry and loss verification. As we noted in our report, SBA implemented the pilot program when the agency faced a significant backlog of 115,000 applications in the loss verification stage, and these applications had been in the queue for 39 days on average. SBA's data showed that the agency actually took longer to process expedited approvals, about 104 days on average, compared with 94 days on average for all approvals. We continue to believe that it is appropriate to consider the total processing time frames when comparing applications approved under the pilot program with all approved applications.

SBA also provided other technical corrections and comments, which have been incorporated in this report, where appropriate.

We are sending copies of this report to appropriate congressional committees, the Administrator of the Small Business Administration, and other interested parties and will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

In this report, we evaluate (1) what affected the Small Business Administration's (SBA) ability to provide timely disaster assistance and (2) the actions SBA took after the disasters to improve its response to disaster victims. This report focuses primarily on the Disaster Credit Management System (DCMS) and the disaster loan process. We visited the Gulf Coast region to observe conditions and meet with federal, state, and local officials and victims of the disasters. We also interviewed officials from the Office of Disaster Assistance at SBA's headquarters and officials from the Processing and Disbursement Center in Texas, Field Operations Centers East and West in Georgia and California, Customer Service Center in New York, DCMS Operations Center in Virginia, and Georgia District Office. We reviewed SBA's standard operating procedures for approving, declining, and withdrawing disaster loans. In addition, we reviewed documents related to the agency's response to the Gulf Coast hurricanes, congressional testimony, and other program documentation. We reviewed documents related to SBA's acquisition and implementation of DCMS.
In addition, we discussed the acquisition process with officials from SBA's DCMS Operations Center, which provides technical and program management support for the system. We also reviewed SBA's standards for system development and compared the acquisition process for DCMS with industry standards for effective information technology acquisition. Further, we interviewed officials from SBA's Office of Inspector General and reviewed their reports related to the implementation of DCMS and SBA's Disaster Loan Program. We did not conduct a comparative analysis of DCMS and ALCS—SBA's previous system—as part of our work. To obtain the perspectives of system users, we interviewed loan processing staff at various SBA locations. We also obtained SBA's total costs for planning, acquiring, and implementing DCMS through April 2006. However, we did not audit the reported costs and thus cannot attest to their accuracy or completeness.

We obtained documents related to the performance of DCMS, including system status reports, troubleshooting reports, and system change requests. We reviewed these documents to assess the extent to which system-related problems detailed in the documents affected SBA's ability to provide timely disaster assistance. In addition, we obtained various reports on SBA's disaster lending activity for victims of Hurricanes Katrina, Rita, and Wilma. We used these reports to calculate descriptive statistics on the number of applications mailed and received, the number and amount of approved loans, the backlog of applications in various stages, and other characteristics for applications processed through May 27, 2006. For comparative purposes, we also obtained summary statistical reports related to SBA's disaster lending for past significant disasters. We also obtained data extracts from DCMS of disaster loan applications SBA received from victims of Hurricanes Katrina, Rita, and Wilma for various dates. We used the extracts to calculate average time frames for various stages of the disaster loan process.

In assessing the reliability of SBA's data, we reviewed documents such as the DCMS Privacy Act Assessment and met with appropriate SBA officials. To increase our confidence in the reliability of SBA's data, we compared information from selected hard copy application files with the information recorded in DCMS. We also performed various tests of the information in the data extracts we obtained to ensure the completeness of the data. We concluded that SBA's data were sufficiently reliable for the purposes of our report.

To evaluate actions SBA took after the disasters to improve its response to disaster victims, we reviewed documents related to changes SBA made to DCMS and changes SBA planned to make to the system. We discussed these changes with officials from SBA's DCMS Operations Center. In addition, we obtained and reviewed documents related to changes SBA made to the disaster loan process and other initiatives intended to improve SBA's response to disaster victims. We discussed these changes and initiatives with the appropriate SBA officials and obtained data on the impact of these efforts where available.

We reviewed documents related to the Federal Emergency Management Agency's (FEMA) Individuals and Households Program, which makes assistance available to victims of major disasters. We also contacted FEMA to obtain additional information regarding the agency's process for referring applicants to SBA's Disaster Loan Program.
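For illustration, the average time-frame calculation works as in the short Python sketch below; the dates are invented examples, not records from the DCMS extracts:

```python
from datetime import date

# Invented example records: (application received, decision made).
applications = [
    (date(2005, 9, 10), date(2005, 12, 20)),
    (date(2005, 10, 1), date(2006, 1, 5)),
    (date(2005, 11, 15), date(2006, 2, 1)),
]

def average_processing_days(records):
    """Average elapsed days from application receipt to decision."""
    total = sum((decided - received).days for received, decided in records)
    return total / len(records)

print(round(average_processing_days(applications)))  # 92
```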
We performed our work in Atlanta, Ga.; Buffalo, N.Y.; Fort Worth, Tex.; New Orleans and Metarie, La.; Sacramento, Calif.; Bay St. Louis, Biloxi, Gulfport, and Waveland, Miss.; Herndon, Va.; and Washington, D.C. We conducted our work between November 2005 and July 2006 in accordance with generally accepted government auditing standards.

Since the early 1990s, SBA had used its Automated Loan Control System (ALCS) to track the movement of paper application files from each stage of the process until it made a decision on the application, disbursed funds for approved applications, and transferred the application file to servicing. SBA also obtained data manually from external data sources, including FEMA, the Internal Revenue Service (IRS), and the credit reporting agencies. In December 1998, after using a significant amount of resources in response to victims of Hurricane Georges, which had struck Puerto Rico the previous September, SBA began an effort to modernize its manual, paper-based disaster loan process. SBA intended for its new disaster loan system to support (1) a "paperless" electronic loan application and loan process, (2) loan processing from any location where the system is implemented, (3) multiple interaction methods between loan applicants and the Office of Disaster Assistance (e.g., by Internet or telephone), and (4) access to external data sources. The modernization effort entailed the following actions: documenting SBA's current loan process and proposed future loan process; performing requirements analysis, conducting a commercial-off-the-shelf (COTS) market survey, and developing a business case; and acquiring, customizing, and implementing the system.

In March 1999, SBA completed a business process reengineering study to document the current process and proposed future process. In August 2000, SBA completed the initial development of the new system requirements. Subsequently, SBA contracted for a COTS survey of products meeting its requirements and leveraging its other information technology resources. The survey identified two products that met a significant number of SBA's requirements, with some customization and integration of additional products needed to meet all requirements. After the contractor completed the survey, SBA's information technology investment review board required the agency to complete a business case analysis for the proposed disaster loan system. SBA's analysis involved researching the existing requirements, evaluating potential alternatives, and providing a recommendation. In March 2001, SBA completed the analysis, which evaluated three alternatives: (1) develop a custom solution, (2) acquire a COTS product, or (3) stay with the current environment. SBA determined that the COTS product represented the best solution after considering the costs and time frames associated with each alternative.

In June 2002, an SBA contractor developed more specific requirements for the project because a considerable amount of time had passed since the first survey and because of the uniqueness of certain aspects of the disaster loan process, such as loss verification and a check for duplication of benefits. Later that year, SBA contracted for a separate COTS survey that used Carnegie Mellon University's Software Engineering Institute process for evaluating COTS products. SBA evaluated products from 10 different vendors and, after narrowing the selection to two products, received vendor demonstrations in January 2003.
In March 2003, the contractor recommended a COTS product for SBA to use as the foundation for the Disaster Credit Management System (DCMS). In September 2003, SBA completed an analysis of the DCMS design to identify potential gaps between the recommended COTS product and the requirements for the system. For example, SBA recognized that the COTS product did not have the functionality to perform loss verification activities; therefore, SBA decided to implement a custom loss verification application and link the application to the core system. This ensured that loss verification data would automatically synchronize with DCMS.

In 2003, SBA also began to test various aspects of its new system. In November 2003, the agency began testing the core application, interfaces, and additional components (loss verification, scanning, etc.). User validation readiness testing was conducted between December 2003 and March 2004. In October 2004, SBA contracted for an Independent Verification and Validation (IV&V) of an initial release of DCMS. An IV&V can help provide reasonable assurance that a system satisfies its intended use and user needs, and its use is recognized as an industry best practice. The IV&V conducted for DCMS found that the system was supported by strong requirements, test plans, test cases, and other supporting documentation. In addition, the IV&V found that DCMS was developed with a high level of user involvement. However, the IV&V did not evaluate performance testing, including tests to help ensure that the system could function at its maximum user capacity, because these tests were not completed until December 2004, after the agency had begun implementation. Because of problems with the hardware in the testing environment, this performance testing was conducted with no more than 120 concurrent users. If the testing environment had functioned as planned, it was estimated that the system could accommodate approximately 600 concurrent users.

SBA used a phased approach for implementing DCMS. In November 2004, SBA first implemented DCMS in its Niagara Falls, New York, Disaster Area Office. In January 2005, SBA implemented DCMS in its Fort Worth, Texas, and Sacramento, California, DAOs. SBA also began using DCMS to process applications for all new disaster declarations. As figure 6 illustrates, SBA's move from its former manual, paper-based disaster loan process to a more automated process using DCMS took about 6 years. SBA's costs for planning, acquiring, implementing, and operating DCMS totaled about $32 million through April 2006.

In addition to the individual named above, Daniel Blair, Assistant Director; Barbara Oliver, Assistant Director; Bernice Benta; Tania Calhoun; Marshall Hamlett; Marc Molino; David Pittman; Jennifer Popovic; Rhonda Rose; and Eric Trout made key contributions to this report.
Hurricanes Katrina, Rita, and Wilma (the Gulf Coast hurricanes) caused more than $118 billion in estimated property damages across the Gulf Coast region in 2005. The Small Business Administration (SBA) helps individuals and businesses recover from disasters through its Disaster Loan Program. GAO initiated work to determine how well SBA provided victims of the Gulf Coast hurricanes with timely assistance. This report, the first of two, focuses primarily on the Disaster Credit Management System (DCMS) and the disaster loan process. Here, GAO evaluates (1) what affected SBA's ability to provide timely disaster assistance and (2) actions SBA took after the disasters to improve its response to disaster victims. In conducting this study, GAO analyzed data on loan applications and assessed key aspects of SBA's acquisition and implementation of DCMS.

Although DCMS provided SBA with a number of benefits, several factors affected SBA's ability to provide timely disaster assistance to victims of the Gulf Coast hurricanes. First, the volume of applications SBA processed greatly exceeded that of any previous disaster, including the 1994 Northridge earthquake—the largest single disaster SBA previously faced. Second, SBA primarily used this earthquake as the basis for planning the maximum user capacity for DCMS and did not consider information available from catastrophe risk modeling firms and disaster simulations, such as the likelihood and severity of damages from potential catastrophes, to help predict the expected application volume from such events. SBA's limited planning contributed to insufficient DCMS user capacity, which restricted the number of staff that could access the system and process the large volume of applications in a timely manner. SBA also did not receive the correct computer hardware from its contractor, and the agency did not completely stress test DCMS before implementation, which contributed to the system instability, outages, and slow response times initially experienced by SBA staff. As a result of these and other factors, SBA faced significant delays and backlogs in processing loan applications. This backlog peaked at more than 204,000 applications 4 months after Hurricane Katrina. As of May 27, 2006, SBA had processed applications, on average, in about 74 days, compared with its goal of 21 days.

Some of the actions SBA took after the Gulf Coast hurricanes helped to improve its response to disaster victims. For example, SBA addressed system-related issues by increasing the number of users that could access DCMS, and it plans to further increase the system's maximum user capacity. SBA implemented other initiatives that had limited success. For example, SBA made only a few loan guarantees under its Gulf Opportunity Pilot Loan Program for small businesses in communities affected by the disasters. SBA would benefit by expediting its planned business process reengineering efforts to analyze ways to more efficiently process loan applications, such as implementing a secure Internet-based application feature for home loan applicants.
The Army uses maintenance capabilities in both the public and private sectors to maintain, overhaul, and repair its military weapons systems, such as missiles, combat vehicles, tactical vehicles, aircraft, and communication and electronic equipment. The level at which maintenance work is performed depends largely on authorized capability, worker skills, and predefined work requirements. Legislative requirements, which play an important role in managing the allocation of depot-level maintenance work, mandate that DOD provide Congress with annual reports on the distribution of funding for depot-level maintenance workloads in the public and private sectors.

The Army assigns maintenance work to four categories—unit support, direct support, general support, and depot-level support. Unit and direct support workloads, which are limited to routine or recurring requirements, such as oil changes and the removal and replacement of components, are performed at military units in field locations and funded by direct appropriations for operations and maintenance. General support, which consists of the repair and overhaul of parts and assemblies and some end items such as trucks, is generally performed at fixed (nonmobile) industrial facilities located on Army posts, camps, and stations, and it is funded by direct appropriations for operations and maintenance. Military personnel, government-employed civilians, or contractor employees may perform this maintenance. Depot-level support, which includes the overhaul, upgrading, and rebuilding of parts, assemblies, and subassemblies, as well as the testing and reclamation of equipment, is the most intensive category of maintenance and requires the highest level of skilled workers and more sophisticated test and plant equipment. It traditionally has been performed by (1) government-employed civilians working at government-owned industrial facilities under the command and control of the Army Materiel Command (currently five public depots) or (2) contractor personnel working in contractor-owned and -operated facilities performing work specified by Army Materiel Command-managed maintenance contracts. The Army's five government-operated maintenance depots are managed within the Army Working Capital Fund. Contract depot-level maintenance work is not managed under the working capital fund.

The Army has two categories of depot-level maintenance activities. The first category consists of activities that have been designated and organized by design and purpose to primarily perform depot-level maintenance and repair tasks. These activities include the Army Materiel Command's public depots; the Army's forward deployed maintenance depots; and contractor depots, primarily located at both the national and installation levels. The second category consists of activities below the depot level that have been granted approval to perform specific depot-level tasks through a special or one-time authorization or that have been designated as a source of repair. These activities include Army National Guard Readiness Sustainment Maintenance Sites and Aviation Classification Repair Activity Depots, Army Reserve Installation Materiel Maintenance Activities, and Army Forces Command Contract Maintenance Facilities. These activities are primarily located at the installation level, and the work may be done by either government or contractor personnel.

Operations of the Army depots are guided by legislative requirements that divide the amount of depot work between the public and private sectors and add specificity to how such work is to be defined.
For example, 10 U.S.C. 2464 provides for a government-owned and -operated core logistics capability that is sufficient to ensure an effective and timely response to a mobilization or other national emergency. Also, 10 U.S.C. 2466 generally prohibits the use of more than 50 percent of the funds made available in a fiscal year for depot-level maintenance and repair for performance by nonfederal personnel. In addition, 10 U.S.C. 2460 defines depot-level maintenance to encompass material maintenance or repair requiring the overhaul, upgrading, or rebuilding of parts, assemblies, or subassemblies and the testing and reclamation of equipment, regardless of the source of funds for the maintenance or repair or the location where the maintenance or repair work is performed. Depot-level maintenance also encompasses software maintenance, interim contractor support, and contractor logistics support to the extent that work performed in these areas is depot-level maintenance. The statute excludes from depot-level maintenance the nuclear refueling of an aircraft carrier, the procurement of major modifications or upgrades of weapons systems that are designed to improve program performance, and the procurement of parts for safety modifications, although the term "depot maintenance" does cover the installation of parts for safety modifications.

Congress has made changes to various depot-level maintenance requirements over the years. For example, the 1998 Defense Authorization Act established a statutory definition of depot-level maintenance and repair and increased DOD's authority to use its depot-level maintenance funds for the private sector's performance of the work from 40 to 50 percent. On the basis of statutory language defining depot-level maintenance, the Office of the Secretary of Defense issues annual guidance to the military departments for reporting their public-private workload allocations. The military departments also issue internal instructions to manage the data collection and reporting process tailored to their individual organizations and operating environments.

As we have reported in recent years in examining DOD's compliance with its so-called "50-50 requirement" under 10 U.S.C. 2466, all of the military departments have continuing data errors and inconsistencies in reporting and problems in documenting and independently validating their annual reports. We also have recognized the limitations of their financial systems, operations, and controls, as well as their continuing inability to capture and report the full costs of depot-level maintenance programs. Some of our most recent reports on depot-level maintenance issues are listed in the Related GAO Products section of this report.

We previously reported that the Army had not sufficiently identified the extent of depot-level maintenance work performed at nondepot facilities in its April 14, 1999, report to the House Committee on Armed Services on depot proliferation. While the Army's report indicated that 40 staff years of depot-level maintenance work was performed outside of the formal depot system by nondepot maintenance providers operating under specialized repair authorities, it also recognized that this figure was likely understated for a variety of reasons, including limitations in the systems and procedures needed to fully quantify such work. We agreed.
We also noted that in July 1999 the Army designated its Army Materiel Command as its National Maintenance Manager, with responsibility for overseeing the Army's logistics and maintenance support programs and managing maintenance facilities. We noted then that while the Army recognized that it needed to modify and standardize Army data systems to fully account for depot-level maintenance work at all locations, it had not established clear action plans, milestones, and funding requirements for doing so.

Our September 2003 report on DOD's compliance with the 50-50 requirement found that the Army's latest reporting on depot-level workloads, for fiscal years 2001 and 2002, had used a new, more centralized financial system to collect 50-50 data, which corrected some of the transcription errors we had found the previous year, but that we continued to find errors, omissions, and inconsistencies in the Army's data. Moreover, we reported that, as in prior years, the Army underreported public- and private-sector depot-level maintenance work at field locations as it continued unfinished efforts to consolidate maintenance activities and better control the proliferation of depot-level tasks at nondepot facilities.

Although the mandate directed the Army to identify the proliferation of depot-level maintenance performed outside the public depots, the Army's report on depot-level maintenance proliferation did not fully identify the extent of depot-level maintenance work performed at nondepot facilities. Instead, the report estimated that depot-level maintenance work valued at $188.6 million for fiscal year 2001 was not included in the Army's depot-level maintenance data and that further validation of this amount was needed. While this estimate may not be fully indicative of the depot-level maintenance work being performed outside the public depots, it indicates underreporting in this area that is consistent with the observations we have made in our prior work. Although the report recognized that the Army has redundant capabilities and capacities, it did not provide any information on the extent of this redundancy or the extent of maintenance activities that could be consolidated. We also have previously reported the existence of this problem.

While the Army's report provided an estimate of depot-level maintenance work that was not appropriately identified as such in fiscal year 2001, it acknowledged that the amount was incomplete and needed further validation. The report listed seven specific areas where depot-level maintenance work performed by nondepot facilities was not identified and estimated this amount to be $188.6 million. As illustrated by table 1, most of the unidentified amount occurred in field-level facilities that perform depot-level maintenance tasks. According to the report, two categories of work accounted for about 75 percent of the $188.6 million: facilities that performed field-level maintenance with embedded depot tasks under the National Maintenance Program and facilities operating under One-Time Repair authorizations. The report pointed out that some of the unidentified depot-level maintenance work resulted from a misunderstanding between the Army Materiel Command and its subordinate commands over which organization would report this type of work. The report's identification of depot-level maintenance work performed by nondepot facilities that had previously gone unidentified is consistent with our prior reviews of the Army's annual 50-50 data.
For example, in our most recent report, we identified work categories such as unreported one-time repair actions and unreported work by commands that did not receive Army reporting guidance, which contributed to the Army's inability to fully account for its depot-level maintenance work in 2002. We noted that, as in past years, the Army did not fully identify public- and private-sector depot-level maintenance work at field locations as it continued unfinished efforts to consolidate maintenance activities and better control the proliferation of depot-level tasks at nondepot locations. While neither we nor the Army can precisely identify the amount of depot-level maintenance work being performed in nondepot maintenance facilities, our prior work and the Army's latest report suggest that the $188.6 million estimate should not be construed as fully representing the amount of depot-level maintenance work performed at nondepot facilities.

The Army's proliferation report pointed out that the Army's maintenance infrastructure has redundant capabilities and capacities that could be consolidated and streamlined to be more cost-effective. While the active Army, Army National Guard, and Army Reserve operate extensive maintenance facilities, some of which have the capability and capacity to perform depot-level maintenance work, the report did not provide any data to quantify the extent of redundancy or identify any possible candidates for consolidation. It did suggest that the Army further study the issue for opportunities to streamline its current expansive depot-level maintenance infrastructure. Moreover, the Army's full implementation of its National Maintenance Program, another report recommendation, is also intended to address streamlining the Army's maintenance infrastructure.

While we did not attempt to identify the full extent of this maintenance infrastructure as part of this review, our analysis supports the Army report's contention that the Army has extensive nondepot facilities, some of which have the capability and capacity for depot-level maintenance tasks and are performing depot-level maintenance work. At the Army sites we visited, we observed maintenance activities involved with all levels of maintenance for ground and aviation systems. Similar to the Army's public depots, these activities occupied large facilities that included machine shops, automobile and heavy-equipment repair shops, paint and body shops, and sandblasting areas. The pictures in figure 2 show some contrasts and similarities in maintenance facilities at depot and nondepot locations.

Some of the activities had the capability and capacity for depot-level maintenance activities and were performing depot-level maintenance work. For example, the Readiness Business Center at Fort Campbell, Kentucky, has been authorized to perform depot-level maintenance tasks to repair components for tactical wheeled vehicles, radios, and helicopters. Of the total $27.1 million in maintenance work performed by the Business Center in fiscal year 2002, about $4.5 million, or about 17 percent, was identified as depot-level maintenance. Also, maintenance officials at several facilities at Fort Riley, Kansas (three of them operated by the National Guard, one by the Fort Riley Directorate of Logistics, one by the Forces Command, and one by the Army Reserve) estimated that their maintenance work for fiscal year 2002 totaled about $58.5 million.
The National Guard performed about $35 million worth of depot-level maintenance in fiscal year 2002 and expects this workload to increase significantly. More details on the Army's maintenance infrastructure are provided in appendix II.

We have previously reported on the Army's proliferation of facilities that perform depot-level maintenance work and the lack of a strategic plan for depots to guide its decisions on this issue. In an October 1999 report, we pointed out that the Army's April 1999 study of the proliferation of depot-level maintenance activities at nondepot facilities did not sufficiently identify the extent of this type of work. We also highlighted that the Army's study, citing inadequate data on the subject of proliferation, did not make any recommendations for consolidating depot-level maintenance facilities. We noted that a key challenge the Army faced was determining and overseeing the amount of depot-level maintenance capability controlled by major commands in the active Army and the Army National Guard. For various reasons, these commands were reluctant to reduce their present capability for performing depot-level maintenance workloads. For example, Reserve and National Guard Bureau officials said that having local maintenance facilities capable of performing some depot-level tasks was a readiness issue in that such facilities allowed their units more rapid turnaround time on equipment requiring this type of repair.

In July 2003 we reported that work performed in the Army's public depots had declined by 36 percent from fiscal year 1987 through fiscal year 2002, while the total depot-level maintenance program grew. We pointed out that future workload projections indicated further decline but that the full impact of the Iraq conflict on future depot-level workload was largely unknown. Among the host of factors that contributed to this decline were (1) DOD's policy of greater reliance on the private sector for depot-level support of new weapons systems and major upgrades and (2) its increased reliance on the use of regional repair activities and private-sector contractors for work that might otherwise be done in the depots. We noted that neither DOD nor the Army had a comprehensive and current depot-level maintenance strategic plan, which was an essential aspect of ensuring future depot efficiency and viability.

Without complete information on the extent of depot-level maintenance work performed in nondepot facilities, DOD's annual report to Congress cannot fully account for the allocation of depot-level maintenance funds between the public and private sectors. In our analysis of DOD's 50-50 reporting, we have said that underreporting depot work in nondepot facilities is one of the limitations affecting the Army's ability to fully account for its depot-level maintenance work. Consistent with our work in this area, the Army's report on proliferation identifies a number of factors that preclude the Army from fully capturing and reporting its depot-level maintenance data at nondepot facilities. These factors include (1) inconsistent application of the congressionally mandated definition of "depot maintenance" and related guidance, (2) weaknesses in the management information systems for collecting and reporting data, and (3) the failure to follow established policies and procedures for authorizing depot-level work at field-level facilities and outsourcing work.
Our current analysis and our prior work identify these factors as underlying causes affecting the Army's determination that it has complied with the 50-50 rule. Furthermore, these limitations will become more significant as the Army approaches the statutory ceiling on the performance of depot-level maintenance work by contract.

We have reported in the past that, by not having complete information on the amount of depot-level maintenance work being performed in nondepot facilities, DOD cannot provide Congress with an accurate and complete report regarding the allocation of depot-level maintenance between the public and private sectors as required by 10 U.S.C. 2466. For example, our September 2003 report stated that our prior 50-50 reports had documented continuing problems and shortcomings in accurately and consistently reporting depot-level maintenance accomplished by both public- and private-sector sources at nondepot locations. For instance, one-time depot repair actions at unit-level facilities went unreported. Other nondepot work was not reported because some commands did not receive 50-50 instructions and others misapplied the guidance. Contractors performed some of this work, and military or civilian government employees performed some of it. While neither the Army nor we know the extent of unreported work or the amount performed by public- and private-sector employees, the effect limits the accuracy and completeness of DOD's report to Congress on the allocation of depot-level maintenance funds between the public and private sectors. Additionally, as discussed below, both the Army and we have identified three key factors inhibiting the Army's ability to accurately and completely report depot-level maintenance work performed at nondepot facilities.

A key factor inhibiting the Army's ability to accurately and completely identify all depot-level maintenance work performed in nondepot facilities in DOD's 50-50 report is that Army military activities inconsistently apply the congressionally mandated definition of "depot maintenance." The Army's proliferation report concluded that the congressionally mandated definition of depot-level maintenance is not widely known below the major command headquarters. In addition, the definition is open to interpretation, and the reporting guidance is not always well defined. At most of the commands and installations we visited, maintenance officials said that, in determining whether a maintenance task is depot-level maintenance, they follow the guidance found in the Army's Maintenance Allocation Charts; technical manuals; and source, maintainability, and recovery codes for reparable components rather than apply the congressionally mandated definition. They expressed concerns that the congressional definition is not always consistent with this guidance, is too broad, and is subject to too much interpretation over what maintenance tasks should be counted as depot-level tasks. For example, officials at the National Guard Bureau said that applying the definition to repair work performed by direct support and general support activities caused uncertainty because the bureau considered most of the work at these levels to be nondepot-level work, and identifying what work should be considered depot-level work required subjective decisions.
Officials at the Reserve Command said that, while only maintenance work defined by the Army's technical manuals as depot-level work should be reported as such, under the expanded definition of depot-level maintenance, some work defined as below depot level could involve depot-level tasks, such as changing and swapping out engines and transmissions for wheeled vehicles. Officials at the installation maintenance sites we visited made similar comments.

In commenting on the proliferation report, the Army Materiel Command said that the application of the definition of depot-level maintenance contributed to the report's finding that depot-level maintenance tasks at nondepot facilities were being underreported. The command added that tasks performed by these facilities were not distinguished as depot-level tasks in the Army guidance but that, in the aggregate, these tasks may be equivalent to depot-level maintenance. Finally, the command said that the Army could only approximate the extent of work performed at nondepot facilities because it currently does not have a system to precisely capture information on maintenance work for DOD's 50-50 report.

In prior reports, we concluded that the Army had not revised its maintenance policies and technical manuals to reflect the expanded definition of depot-level maintenance and that, as a result, any attempt to estimate its extent at local facilities would be misleading. We also recently reported that some Army commands did not receive 50-50 instructions and that others misapplied the guidance. The Army's 2003 report indicates that the Army will have to make these changes in its maintenance policies and technical manuals. For example, in recognizing that the Army had not yet incorporated the expanded definition into its policies and procedures for 50-50 reporting, the Army's report suggested that the Army (1) provide more explicit guidance for 50-50 reporting to help ensure that its commands better understood reporting requirements and (2) develop an easy-to-use reference guide to help the commands better determine what maintenance work should be included in the 50-50 report.

Inadequate Army management information systems are a second key factor inhibiting the Army's ability to fully capture depot-level maintenance work performed in nondepot facilities. The Army's problems with its management information systems are longstanding. In a December 2000 report, the Army Logistics Transformation Agency concluded that the Army's maintenance environment was characterized by many "stovepipe" information systems and application programs that are predominantly fed data manually by maintainers and operators. It also concluded that a wide range of maintenance-related information does not exist, is not adequate, or is not accessible. In our prior reviews, we also have reported weaknesses in the Army's management information systems. For example, in our 1999 report, we concluded that deficiencies in management information systems contributed to the Army's inability to develop accurate and consistent estimates of its depot-level maintenance work. In our September 2003 report on DOD's compliance with the 50-50 requirement, we found that the Army's latest reporting on depot-level workloads for fiscal years 2001 and 2002 had used a new, more centralized financial system to collect 50-50 data. This new system helped correct some of the transcription errors we had found the previous year, but we continued to find errors, omissions, and inconsistencies in the Army's data.
The Army’s proliferation report concluded that current management information systems for capturing depot-level maintenance work at the installation level are inadequate for collecting and reporting 50-50 data. According to the report, the systems cannot, among other things, (1) archive the data in a readily accessible manner or (2) allow for the separate counting of multiple maintenance actions associated with a single work order. (A work order may include three different levels of maintenance, including depot-level maintenance, but only one maintenance code can be assigned to the order.) Also the report pointed out that collecting and reporting depot-level maintenance work outside the Army’s five public depots was a convoluted and manual process. Another factor inhibiting the accuracy and completeness of the 50-50 report is that policies and procedures for authorizing depot-level work in nondepot facilities are not always followed. The Army’s proliferation report made the same conclusion and identified several areas where reporting officials did not believe that maintenance facilities were following policies and procedures for authorizing and reporting depot- level maintenance work. For example, the report noted that maintenance facilities at the installation level were undertaking depot-level maintenance work without having higher command authorization and that some authorized one-time repairs were not being reported. The report also concluded that some weapons systems managers were not following current DOD and Army guidance in determining sources for providing depot-level maintenance support. In a prior report related to DOD’s process for determining depot-level maintenance repair strategies for its new weapons systems and major upgrades, we noted that many weapons systems managers, including those in the Army, were not following existing guidance regarding such tasks as adequately performing required cost comparisons between public and private facilities and coordinating maintenance support decisions between acquisition and logistics officials. We noted that service officials attributed these problems, in large part, to weaknesses in guidance, which they believed was inadequate, unclear, and sometimes contradictory. As the Army moves closer to the statutory ceiling for the funding for depot-level maintenance work performed in the private sector, the limitations in the Army’s ability to precisely capture its depot-level maintenance work will become more significant. For fiscal year 2002, the Army’s reported data ($2.7 billion for the total program) indicated that its funding in the private sector for depot-level maintenance remained below the 50-percent limit. However, our adjustments for known errors in reporting for that year increased the percentage of private-sector work to 49 percent from the 46.5 percent reported by the Army. An increase of more than 1 percent in the use of the private sector to perform more depot-level maintenance in the future, could cause the Army to exceed its statutory limitation. Consequently, the Army would be required to seek a national security waiver and notify Congress as provided for in 10 U.S.C. 2466(b). With regard to estimates of future compliance, the Army’s report noted that the Army might exceed the 50 percent ceiling for contractor support by fiscal year 2006. 
More recently, an official from the Army Materiel Command said that, for fiscal years 2002 and 2003, the Army experienced a 3 to 5 percent increase in its contract requirements for depot-level maintenance because increased operational requirements made the public depots unable to meet the total demand for depot-level maintenance work. She pointed out that, if this trend were to continue, the Army might have to seek a waiver from the Secretary of Defense, possibly as early as fiscal year 2004, to exceed the 50 percent limitation for work performed by the private sector. Another official at the Army Materiel Command said that the Army's depot-level maintenance work in 2004 might increase by about $2.5 billion because of operational requirements for Army equipment deployed in the Middle East. He also said that, in an effort to keep up with maintenance demands, the Army's five public depots have used extensive overtime, added second work shifts, hired temporary employees, and allowed some retirees to return to work. In his view, the public depots could not meet the demands of the increased maintenance work, and the Army would have to use more contractors.

The Army report's recommendations are focused on key problem areas and are consistent with recommendations we have made in the past. If fully implemented, the recommendations in the Army's proliferation report could improve the identification of additional depot-level maintenance work in nondepot facilities and the accuracy and completeness of 50-50 reporting. Efforts have been undertaken to address some of the problem areas; however, no action plan to manage the implementation has been developed. Evaluating the success of the proposed 29 recommendations will be difficult until the Army develops an action plan with priorities, time frames, responsible organizations, evaluation criteria, and the resources required to implement these recommendations. If actions are not implemented in a timely way, the Army will not likely have the comprehensive information that it needs in the near term to comply with the 50-50 reporting requirements or to effectively manage the existing excess maintenance capabilities and infrastructure. On the other hand, the extent of improvements likely to be achieved in the long term is uncertain, given previous delays and the significant challenges that the Army faces in instituting solutions to ensure the consistent application of 50-50 reporting criteria.

The Army report's recommendations present an array of corrective measures focused on four key areas in which the Army could better evaluate the proliferation of depot-level maintenance facilities and manage its depot-level maintenance program. Appendix III lists the 29 recommendations. Basically, the key areas represent a need for the following:

Improved communication and emphasis for the 50-50 requirement. The 14 recommendations in this area address improving the 50-50 process. They include conducting annual 50-50 workshops, issuing clear guidance for 50-50 reporting, publicizing information about the depot-level maintenance program in professional publications, ensuring that compliance with the 50-50 rule becomes a priority, and developing an easy-to-use reference guide to help reporting activities better identify depot-level maintenance work that should be reported.

Improved management information systems.
The three recommendations in this area address continuing efforts to develop a single integrated management information system capable of capturing and reporting depot-level maintenance work at nondepot facilities. Enhanced compliance with policies and procedures for depot-level maintenance operations. The nine recommendations in this area address revising policies to ensure consistency in compliance with all applicable legislation, regulations, and policies; developing a policy requiring the acquisition of access to system technical data for use by government or other contract maintenance activities; and developing and implementing a plan for documenting baseline data to compare contractor and public depot support costs. Development of the National Maintenance Program and consolidation of maintenance activities. The three recommendations in this area address efforts to develop the National Maintenance Program and to conduct further analyses to identify opportunities for consolidating depot-level maintenance facilities. Of the Army proliferation report’s 29 recommendations to improve the identification and reporting of depot-level maintenance data, 3 were specifically directed toward managing the proliferation of depot-level maintenance at nondepot facilities. One of the recommendations identified the need for additional study and identification. The proliferation report’s recommendations are consistent with our prior recommendations regarding the Army’s proliferation of depot-level maintenance facilities and the 50-50 reporting process. For example, in September 2003, we recommended that the 50-50 reporting guidance be appropriately disseminated to reporting organizations and individuals and that staff be properly trained in a timely way to apply the guidance. In October 1999, we recommended that the Army address the following challenges: Improving its management information systems. We recommended that the Army identify requisite action items, time frames, and funding requirements for improving the Army’s information management systems to fully identify the magnitude and cost-effectiveness of depot-level maintenance work at various locations within the Army. Finding opportunities to consolidate maintenance activities. We recommended that the Army establish (1) clear time frames and action plans for assessing requirements for the various types of depot-level maintenance facilities and (2) plans for achieving necessary consolidations and reductions of excess capabilities. Enhancing the National Maintenance Program. We recommended that the Army incorporate the depot-level maintenance capabilities of both active and reserve components under the National Maintenance Program and assign the national maintenance manager requisite responsibility and authority for depot-level maintenance capabilities in active and reserve facilities. While we made these recommendations 4 years ago, they continue to be essential to addressing the problems of the proliferation of depot-level maintenance facilities and inaccuracies in 50-50 reporting. The Army’s 2003 report noted that the Army had taken numerous steps since 1999 to improve its management of the proliferation of depot-level maintenance facilities and its 50-50 data. However, the report pointed out that the Army needed to implement the report’s recommendations before it could claim with complete confidence that it is meeting the 50-50 requirement.
Headquarters officials responsible for the report told us that the Army maintenance organizations concurred with the report’s recommendations. The Army has already begun implementing some of its report’s recommendations, such as holding annual workshops and revising its guidance to include the congressional definition of depot-level maintenance. However, it has not yet developed an overall action plan for managing the implementation of all of the recommendations and, in particular, for setting priorities for the more critical recommendations. As the report indicates, corrective actions are essential to improving the Army’s ability to better manage the proliferation of maintenance facilities and capture data for 50-50 reporting. Some of the critical recommendations, such as the need to identify opportunities for consolidating depot-level maintenance facilities, set up a single integrated management information system capable of capturing all depot-level maintenance data, and develop a National Maintenance Program to better manage depot-level maintenance work, have been in process for several years. Thus, the identification of specific actions would appear necessary to help the Army implement these recommendations in a more timely manner. A plan would include (1) the Army’s priority for implementing the recommendations, (2) the Army organizations accountable for implementation, (3) the specific time frames for accomplishment, (4) whether the benefits of implementation justify its costs, (5) the funding required and the source of funds, and (6) the criteria to determine the effectiveness of the recommendations once they are implemented (one way to record these six elements is sketched at the end of this discussion). Officials at the Army Communications-Electronics Command said that streamlining maintenance facilities is a good idea, but they do not have the tools that would enable the command to fully implement the proliferation report’s recommendations in this area. They said that updating Army Regulation 750-1 (Army Materiel Maintenance Policy) and conducting 50-50 workshops were a step in the right direction, but that both actions need to be more comprehensive in educating field-level personnel on policy and requirements. For example, they pointed out that merely reiterating the congressional definition of depot-level maintenance in Army Regulation 750-1 did little to add clarity to the definition. Additionally, the last two workshops were spent largely on training for the Depot Maintenance Operations Planning System, a new system to be used by all Army major commands and acquisition managers to capture and report annual 50-50 workload data. According to these personnel, this system appears extremely complex, and training should be aimed at those who are actually responsible for reporting maintenance data. Officials at the National Guard Bureau generally disagreed that the Army report’s recommendations will ensure accurate 50-50 reporting—especially in view of the apparent disconnect between the recommendations and the Army’s maintenance transformation process, and the Army’s plans for moving to two-level maintenance. Army Communications-Electronics Command officials also suggested that, for timely and effective implementation, the Army establish a working group of representatives with subject matter expertise from all levels within the Army to oversee the implementation. Officials at Army headquarters who were responsible for the report told us that they did not yet have a formal action plan established.
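As noted in the discussion above, an action plan would carry six elements for each of the 29 recommendations. The sketch below shows, for illustration only, how such a record might look; the structure, field names, and values are hypothetical and are not drawn from any Army or DOD system.

```python
# Purely illustrative record of the six action plan elements discussed
# above; field names and values are hypothetical, not an Army format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RecommendationPlanEntry:
    rec_id: int                  # which of the 29 recommendations
    priority: int                # (1) the Army's implementation priority
    accountable_org: str         # (2) organization accountable for implementation
    target_time_frame: str       # (3) specific time frame for accomplishment
    benefits_justify_cost: bool  # (4) whether benefits support the implementation cost
    funding_required: float      # (5) funding required, in dollars...
    funding_source: str          #     ...and the source of those funds
    success_criteria: List[str] = field(default_factory=list)  # (6) effectiveness criteria

# Hypothetical entry; the values are invented for illustration.
entry = RecommendationPlanEntry(
    rec_id=15,
    priority=1,
    accountable_org="Army Materiel Command",
    target_time_frame="fiscal year 2006",
    benefits_justify_cost=True,
    funding_required=120_000_000.0,
    funding_source="to be determined",
    success_criteria=["all depot-level maintenance requirements integrated"],
)
print(f"Recommendation {entry.rec_id}: priority {entry.priority}, "
      f"due {entry.target_time_frame}, owner {entry.accountable_org}")
```

Tracking even a minimal record like this for each recommendation would let the Army identify which actions have slipped and which organizations are accountable.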
These officials said they were planning to establish a working group in December 2003 to review the recommendations and determine what actions needed to be taken. As previously discussed, the Army has taken steps to improve its management of the proliferation of depot-level maintenance facilities and its 50-50 reporting. The Army’s 2003 report and our analysis indicate that some of the key actions intended to resolve the Army’s long-standing problems in identifying the proliferation of depot-level maintenance facilities and improving its 50-50 reporting have been in process for several years. The magnitude of long-term improvements likely to be realized remains uncertain, given prior delays in instituting solutions and the inconsistent understanding and application of 50-50 reporting criteria. During our review, we observed that several actions related to the recommendations had been under way for a number of years but that completion dates had slipped and funding had become uncertain. For example, the recommendation that the Army continue efforts to establish a fully integrated national maintenance requirements determination process that includes all depot-level maintenance requirements refers to a program known as the National Maintenance Program. The Army initiated the program in July 1999 and planned to implement it by fiscal year 2004. However, full implementation has slipped to fiscal year 2006. Additionally, required funding to complete the program is uncertain. Army Materiel Command officials said that the program’s goal is to centrally coordinate and control depot-level maintenance work by developing standards for items being repaired at qualified repair sources. They said that (1) the program is helping to better identify and manage facilities that perform depot-level maintenance outside the public depots; (2) the number of maintenance facilities in the program has declined from 60 in fiscal year 2000 to 45 in fiscal year 2003, and these are expected to further decline to 25 in fiscal year 2005 as the Army decides which facilities will be qualified to perform depot-level maintenance work; and (3) the command was working with other commands to reduce the number of nondepot maintenance facilities. Full implementation of the National Maintenance Program appears to be a key initiative in addressing the proliferation of depot-level maintenance facilities. At the same time, our analysis indicates that, although the concept of the National Maintenance Program could help improve future annual reporting and eliminate some of the current fragmented and duplicative depot-level maintenance workload assignments, it is too early to assess the program’s full impact. The extent to which the program can resolve all the problems related to the proliferation of nondepot facilities and the inaccurate and incomplete identification of the depot-level maintenance work they perform is unclear. The Army’s 2003 report noted that the Army’s schedule for completing the implementation of the National Maintenance Program has slipped to fiscal year 2006. During our review, we noted that, as of October 2003, the Army had completed standards for only 737 (about 18 percent) of the 4,148 candidates for the program. While about $120 million has already been spent on development, the Army is a long way from having the standards that the program requires.
An Army Materiel Command official said that the Army was not planning to provide any additional funding for the further development of these standards. In a prior report, we observed that, while the program is intended to consolidate and distribute overhaul work for components returned to the supply system, the evolving management framework will continue to allow local maintenance facilities to repair items returned directly to using organizations—maintenance-to-maintenance transactions that could meet the statutory definition of depot-level maintenance. Additionally, the program does not include some other depot-level maintenance work. For example, it does not address the allocation of depot-level maintenance requirements for overhauling, rebuilding, or upgrading major end items, such as tactical wheeled vehicles, that are currently being overhauled in field-level maintenance facilities or by contracts managed by field-level organizations, even though this work meets the statutory definition of depot-level maintenance work. With regard to resolving deficiencies in management information systems, it is uncertain when systems capable of capturing and reporting depot-level maintenance workloads performed in nondepot facilities will be operational. In October 2003, the Army began testing the transition of its database for Specialized Repair Authority and One-Time Repair authorizations into an automated system referred to as the Joint Computer-Aided Acquisition and Logistic Support system. The Army’s 2003 report described this as an interim initiative to capture and report on maintenance work performed under these two types of authorizations. While this initiative will automate the capturing of this work, it will not identify all depot-level maintenance work that may be performed in field-level facilities. For example, it will not capture depot-level maintenance actions performed on equipment that will be returned directly to the user without going through the Army’s supply system. The Army hopes to achieve a single integrated management information system once its evolving Logistics Modernization Program is fully implemented. Testing at the first site, the Tobyhanna Army Depot, is ongoing, but problems have occurred. Earlier estimates put completion about 18 to 24 months away, but some delays are expected. Additionally, the extent to which the program will resolve all the deficiencies in the current systems is uncertain. We believe that the Army’s implementation of our prior recommendations, as well as the recommendations made in the Army’s proliferation report, is essential to providing the Army with more precise information for crucial decisions in the area of the Army’s depot-level maintenance infrastructure. Their timely and effective implementation depends largely on the necessary emphasis from senior Army leadership. If actions are not implemented in a timely manner, the Army is unlikely to have the comprehensive information that it needs to determine the extent of proliferation and effectively manage its excess capabilities and infrastructure—key data needed for complying with the existing reporting statute, identifying excess infrastructure and making appropriate consolidations, and making appropriate decisions for the additional round of base realignments and closures that has been authorized for 2005. The Army has not yet developed a plan for implementing the recommendations in the Army proliferation report.
We believe it is essential that the Army have such a plan to help ensure the timely and effective implementation of the recommendations. Such a plan would include evaluating the priority for implementation and identifying the time frame for implementing the recommendations, the responsible organizations, and the criteria for measuring the desired results. While improvements should be achievable, the complexity and vastness of the Army’s maintenance system and continuing questions about such issues as the definition of depot-level maintenance and changing maintenance strategies could continue to present challenges in fully recording all depot-level maintenance work. To ensure the timely and effective implementation of the recommendations in the Army’s 2003 proliferation report to help the Army improve its management of maintenance operations, including the proliferation of depot-level maintenance facilities, and more precisely capture and report depot-level maintenance data, we recommend that the Secretary of Defense direct the Secretary of the Army to establish a specific plan to manage the implementation of the 29 recommendations identified in the 2003 proliferation report. The plan should include the priority and time frames for implementation, the organizations responsible for implementing the plan, and the criteria for measuring success. The Department of Defense provided written comments (see app. IV) on a draft of this report. In commenting on the draft, the Office of the Deputy Under Secretary of Defense for Logistics and Materiel Readiness concurred with our recommendation that the Army establish a plan to manage the implementation of the 29 recommendations identified in the 2003 depot maintenance proliferation report. The Department noted that the Army is in the process of establishing an integrated product team to develop an action plan to address the 29 recommendations, including reevaluating their validity and modifying them where appropriate. The Department also stated that the Army expects to have an action plan in place no later than March 31, 2004. The Department’s response noted specifically that the recommendation contained in the Army’s report related to the designation of core work needs to be revised. We recognize that some adjustment to the recommendations may be necessary as the implementation plan is developed. Whether some recommendations require modification for implementation is not as significant for the Army as is the need for timely action and follow-through to address the issues identified in the Army’s depot maintenance proliferation report. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretary of the Army; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions regarding this report, please contact me at (202) 512-8412 or [email protected] or Julia Denman, Assistant Director, at (202) 512-4290 or [email protected]. Other major contributors to this report were Nancy Benco, Wayne Gilliam, and Bobby Worrell. To address the mandate of the Senate and House Committees on Armed Services contained in the report of the House Committee on Armed Services on the Floyd D.
Spence National Defense Authorization Act for Fiscal Year 2001, we reviewed the Army’s Fiscal Year 2002 Study of the Proliferation of Depot Maintenance-Type Activities Phase II Report, dated July 31, 2003. We interviewed Army officials and analyzed pertinent information regarding that report at (1) Army Headquarters in the Washington, D.C., area; (2) Headquarters, Army Materiel Command, in Alexandria, Virginia; (3) three subordinate commands—the Army Aviation and Missile Command, Huntsville, Alabama; the Communications-Electronics Command, Fort Monmouth, New Jersey; and the Tank-automotive and Armaments Command, Warren, Michigan; (4) Headquarters, National Guard Bureau, Arlington, Virginia; (5) Headquarters, Army Forces Command, Atlanta, Georgia; and (6) Headquarters, Army Reserve Command, Atlanta, Georgia. Also, we interviewed managers and reviewed pertinent information regarding maintenance facilities located at Fort Campbell, Kentucky; Fort McCoy, Wisconsin; Fort Riley, Kansas; and Fort Rucker, Alabama. We made extensive use of our prior work related to Army depot-level maintenance. To determine the extent to which the Army’s report identified the total amount of depot-level maintenance work performed at nondepot facilities, we examined the requirements of section 2466 of title 10, U.S. Code, and Army regulatory provisions for 50-50 reporting. We analyzed the report’s scope and methodology, findings, and disclosure of the amount it identified and compared these data with our prior work on the Army’s annual 50-50 reporting process. We interviewed the study group manager, Army officials, and maintenance managers about the nature of maintenance work performed by nondepot facilities and about whether it was being reported as required. Because the Army has no central database or readily available data, we did not attempt to determine the Army’s universe of facilities that perform depot-level maintenance. To determine whether the Army can accurately account for its depot-level maintenance workloads and the key issues that preclude accurate reporting, we examined the report’s findings to determine what areas were identified as contributing problems. We used our prior work on the Army’s depot-level maintenance program to correlate, compare, and test the consistency of the identified problems with the ones we had previously reported. We interviewed the study group manager, Army officials, and maintenance personnel about the relevancy of the findings and their application to the Army’s depot-level maintenance operations. To determine whether the corrective actions identified in the Army’s report are likely to address the proliferation issue and enhance the Army’s reporting, we examined the recommendations to determine how effectively they were linked to the identified problems. We also compared the recommendations with those that we had previously made to test for consistency. Finally, we discussed the relevancy of the recommendations with the study group manager, Army officials, and maintenance representatives. We conducted our analysis of the Army’s report from July through October 2003 in accordance with generally accepted government auditing standards. The Army operates a number of maintenance facilities and has an extensive infrastructure for the maintenance of its military weapons systems and support equipment.
For example, as we reported in April 2003, the Army employs about 10,000 personnel at its five public depots to overhaul, repair, and upgrade its ground and air combat systems, subsystems, and assemblies. The Army also has a vast number of other maintenance facilities operated by U.S. government-employed civilians and contractors. For example, we reported in October 1999 that the Army had another 102 maintenance facilities that were potential providers of depot-level maintenance services within the continental United States—28 active Army, 2 Army Reserve, and 72 Army National Guard. In addition, the Army operates maintenance activities that provide maintenance below the depot level. For example, as of August 2003, the Forces Command reported that it had maintenance facilities at 10 installations that provide direct support for vehicle maintenance; the Army Reserve had about 160 maintenance facilities located throughout the United States that perform unit, direct, and general maintenance support; the National Guard had additional maintenance facilities performing unit and direct support maintenance; and Army installations had a number of maintenance facilities that perform various levels of maintenance. We visited 17 of the Army, Army Reserve, and Army National Guard maintenance sites. These 17 sites performed maintenance work valued at more than $500 million during fiscal year 2002; employed more than 4,700 military, civilian, and contractor personnel; and occupied facilities with more than 2 million square feet. Table 2 provides summary capacity and capability information about the sites. The Army’s Fiscal Year 2002 Study of the Proliferation of Depot Maintenance-Type Activities identified 7 issues and made 29 recommendations to enhance the Army’s ability to (1) evaluate the proliferation of nondepot facilities that perform depot-level maintenance and (2) identify and report on its 50-50 data. These recommendations were consistent with our prior recommendations, noted in the referenced GAO products. Issue 1: Title 10 Definition of Depot Maintenance and 50-50 Reporting Policy Guidance Recommendations: Post the new Army Regulation 750-1 as soon as possible to the Army Publications Agency Web site. Posting the regulation will contribute to the education of Army activities on the congressional definition of “depot maintenance” and the 50-50 reporting process. Continue to conduct annual 50-50 workshops and issue clear guidance regarding depot maintenance policies to major commands, program executive officers, and program managers. Use Headquarters, Department of the Army, senior-level maintenance boards to communicate depot maintenance policies. Publish depot maintenance information articles in professional publications. [See also GAO-03-1023; U.S. General Accounting Office, Depot Maintenance: Key Unresolved Issues Affect the Army Depot System’s Viability, GAO-03-682 (Washington, D.C.: July 7, 2003); and U.S. General Accounting Office, Depot Maintenance: Change in Reporting Practices and Requirements Could Enhance Congressional Oversight, GAO-03-16 (Washington, D.C.: Oct. 18, 2002).] Issue 2: Accuracy of Army’s Current 50-50 Reports Recommendations: The Army Deputy Chief of Staff (G-4) should ensure that compliance with the 50-50 rule becomes an Army priority and that command emphasis is applied to correct all reporting problems. Army G-4 should submit an amended 50-50 report for fiscal year 2002.
Army G-4 should include all Army major commands, program executive offices, and separate commands in the 50-50 reporting process. [See also GAO-03-1023; U.S. General Accounting Office, Depot Maintenance: Workload Allocation Reporting Improved, but Lingering Problems Remain, GAO/NSIAD-99-154 (Washington, D.C.: July 13, 1999); and U.S. General Accounting Office, Depot Maintenance: Management Attention Required to Further Improve Workload Allocation Data, GAO-02-95 (Washington, D.C.: Nov. 9, 2001).] Issue 3: Specialized Repair Authority and One-Time Repair Workload Reporting Requirements Recommendations: Army G-4 should provide applicable major commands with immediate and specific guidance to reinforce compliance with current Specialized Repair Authority and One-Time Repair policies. Strong emphasis should be placed on collecting One-Time Repair data for 50-50 reporting purposes. Current policy and procedures regarding the approval and tracking of One-Time Repair should be expanded in Army Regulation 750-1. Major commands should appoint an installation or local Specialized Repair Authority coordinator to ensure that (1) all major command facilities at an installation are provided the most current information on Specialized Repair Authority and One-Time Repair policies and (2) the appropriate Specialized Repair Authority and One-Time Repair production data are submitted properly through the installation or local regional major command Specialized Repair Authority coordinator to the major command headquarters. Similarly, the National Guard Bureau should appoint Specialized Repair Authority coordinators for each state, territory, and the District of Columbia. Army G-4, in coordination with the Army Materiel Command (AMC), should direct the Aviation and Missile Command to terminate the existing Aviation Repair Authority process and immediately comply with all Headquarters, Department of the Army, Specialized Repair Authority policies. [See also U.S. General Accounting Office, Depot Maintenance: Change in Reporting Practices and Requirements Could Enhance Congressional Oversight, GAO-03-16 (Washington, D.C.: Oct. 18, 2002).] Issue 4: Army G-4 Guidance Regarding Depot Maintenance Reporting Procedures Recommendations: Army G-4 should revise the 50-50 standard operating procedures to provide the major commands, program executive officers, and program managers with more explicit guidance regarding those 50-50 reporting requirements that are still causing confusion. Army G-4 should coordinate with major commands, program executive officers, and program managers to ensure that the standard operating procedure revisions are clearly understood and that the 50-50 standard operating procedures address the upgrade and modification programs. Army G-4 should develop a “decision tree” (an easy-to-use reference guide) to better distinguish between those programs that should and should not be reported. Issue 5: Impact of National Maintenance Program on 50-50 Reporting Processes Below Public Depot Level Recommendations: AMC should continue efforts to establish a fully integrated national maintenance requirements determination process that includes all depot maintenance requirements. The Army should complete the implementation of the National Maintenance Program by fiscal year 2006 as currently planned. The Army should conduct further analyses to identify opportunities for consolidating depot-level maintenance activities.
Issue 6: Management Information System Requirements for Improving Current Methods of Capturing and Reporting 50-50 Data Recommendations: The Army should develop an integrated management information system capable of capturing and reporting depot maintenance workloads below the organic depot level. AMC should consider replacing the interim Specialized Repair Authority business system (the combined Joint Computer-Aided Logistics Support System/Logistics Integrated Database/Army Electronic Product Support Systems) with a single integrated Specialized Repair Authority tracking and workload system through the development of the Logistics Modernization Program and as the various major command feeder systems are either replaced or consolidated by way of the development of the Global Combat Support System-Army. Army G-4 should query AMC’s Logistics Integrated Database monthly to ensure that major command activities are using the new Department of the Army Pamphlet 738-750 maintenance codes for inputting workload data into automated systems to account for Specialized Repair Authority and One-Time Repair tasks embedded in maintenance workloads at the installation level. Army G-4 should continue to support AMC’s efforts to develop a Maintenance Contract Database under the National Maintenance Program. Issue 7: DOD and Army Policies Affecting Army’s Ability to Manage the Proliferation of Depot Maintenance Activities Recommendations: New weapons systems should be designated as core or non-core up front in the system’s life cycle at Milestone C (Production and Deployment). The system’s Logistics Support Plan should be revised accordingly on the basis of the core depot assessment and presented at Milestone C for approval. The Army should adopt a new core depot assessment process for weapons systems that have not yet undergone core determination analyses. The Army should continue to revise or replace flawed Department of the Army policies that apply to the Depot Source of Repair (DSOR) decision process. The primary objectives of this effort should be consistent compliance with all applicable legislation, regulations, and policies and assurance that the organic depots are not excluded from the DSOR decision process. The Army should audit and review program executive officer, program manager, and AMC activities to ensure that they are following guidance on core logistics requirements, weapons system support strategies, and the DSOR decision process in accordance with the Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA (ALT)) memorandum dated January 9, 2003, entitled Depot Considerations in Acquisition. The ASA (ALT) should develop an Army acquisition policy that requires program executive officers and program managers to acquire access to system technical data owned by the original equipment manufacturer for use by appropriate organic or contractor maintenance facilities during the performance of system logistical support. The ASA (ALT), Army G-4, and AMC should establish a partnership with an approved memorandum of agreement to integrate acquisition weapons system requirements with traditional end item and secondary item overhaul requirements to (1) assist AMC in maximizing the capabilities of the five organic depots to meet core requirements and (2) seek a renewed commitment from all parties that the depots will not be excluded from the DSOR process without the required analyses being conducted.
Army G-4, ASA (ALT), and AMC should work closely together to develop and implement a plan for documenting baseline data to compare contractor costs with organic support costs. Army G-4, ASA (ALT), AMC, and AMC’s major subordinate commands should work closely with Headquarters, Department of the Army ASA (ALT) staff to market the depots to the program executive officers and program managers at every opportunity. Depot Maintenance: DOD’s 50-50 Reporting Should Be Streamlined. GAO-03-1023. Washington, D.C.: September 15, 2003. Depot Maintenance: Key Unresolved Issues Affect the Army Depot System’s Viability. GAO-03-682. Washington, D.C.: July 7, 2003. Department of Defense: Status of Financial Management Weaknesses and Progress Toward Reform. GAO-03-931T. Washington, D.C.: June 25, 2003. Depot Maintenance: Change in Reporting Practices and Requirements Could Enhance Congressional Oversight. GAO-03-16. Washington, D.C.: October 18, 2002. Depot Maintenance: Management Attention Needed to Further Improve Workload Allocation Data. GAO-02-95. Washington, D.C.: November 9, 2001. Defense Logistics: Actions Needed to Overcome Capability Gaps in the Public Depot System. GAO-02-105. Washington, D.C.: October 12, 2001. Defense Maintenance: Sustaining Readiness Support Capabilities Requires a Comprehensive Plan. GAO-01-533T. Washington, D.C.: March 23, 2001. Depot Maintenance: Key Financial Issues for Consolidations at Pearl Harbor and Elsewhere Are Still Unresolved. GAO-01-19. Washington, D.C.: January 22, 2001. Depot Maintenance: Action Needed to Avoid Exceeding Ceiling on Contract Workloads. GAO/NSIAD-00-193. Washington, D.C.: August 24, 2000. Depot Maintenance: Air Force Waiver to 10 U.S.C. 2466. GAO/NSIAD-00-152R. Washington, D.C.: May 22, 2000. Depot Maintenance: Air Force Faces Challenges in Managing to 50-50 Ceiling. GAO/T-NSIAD-00-112. Washington, D.C.: March 3, 2000. Depot Maintenance: Future Year Estimates of Public and Private Workloads Are Likely to Change. GAO/NSIAD-00-69. Washington, D.C.: March 1, 2000. Depot Maintenance: Army Report Provides Incomplete Assessment of Depot-Type Capabilities. GAO/NSIAD-00-20. Washington, D.C.: October 15, 1999. Depot Maintenance: Status of the Navy’s Pearl Harbor Project. GAO/NSIAD-99-199. Washington, D.C.: September 10, 1999. Depot Maintenance: Workload Allocation Reporting Improved, but Lingering Problems Remain. GAO/NSIAD-99-154. Washington, D.C.: July 13, 1999. Navy Ship Maintenance: Allocation of Ship Maintenance Work in the Norfolk, Virginia, Area. GAO/NSIAD-99-54. Washington, D.C.: February 24, 1999. Defense Depot Maintenance: Public and Private Sector Workload Distribution Reporting Can Be Further Improved. GAO/NSIAD-98-175. Washington, D.C.: July 23, 1998. Defense Depot Maintenance: DOD Shifting More Workload for New Weapon Systems to the Private Sector. GAO/NSIAD-98-8. Washington, D.C.: March 31, 1998. Defense Depot Maintenance: Information on Public and Private Sector Workload Allocations. GAO/NSIAD-98-41. Washington, D.C.: January 20, 1998. Defense Depot Maintenance: Uncertainties and Challenges DOD Faces in Restructuring Its Depot Maintenance Program. GAO/T-NSIAD-97-112. Washington, D.C.: May 1, 1997. Also, GAO/T-NSIAD-97-111. Washington, D.C.: March 18, 1997. Defense Depot Maintenance: DOD’s Policy Report Leaves Future Role of Depot System Uncertain. GAO/NSIAD-96-165. Washington, D.C.: May 21, 1996. Defense Depot Maintenance: More Comprehensive and Consistent Workload Data Needed for Decisionmakers. GAO/NSIAD-96-166.
Washington, D.C.: May 21, 1996. Defense Depot Maintenance: Privatization and the Debate Over the Public-Private Mix. GAO/T-NSIAD-96-148. Washington, D.C.: April 17, 1996. Also, GAO/T-NSIAD-96-146. Washington, D.C.: April 16, 1996. Depot Maintenance: Issues in Allocating Workload Between the Public and Private Sectors. GAO/T-NSIAD-94-161. Washington, D.C.: April 12, 1994.
Each year, the U.S. Army spends about $3 billion on depot-level maintenance and repair work for weapons systems and other equipment. However, because its data gathering and reporting processes have been limited, the Army historically has been unable to fully identify how much depot-level maintenance takes place outside its five public depots. As a result, it has not been able to determine with precision how well it is meeting statutory requirements to limit contracted depot-level maintenance work to 50 percent of the program budget. In the House report on the Fiscal Year 2001 Defense Authorization Act, Congress directed the Army to report on the proliferation of depot-level maintenance work at nondepot facilities and asked GAO to review that report. GAO examined the extent to which (1) the Army's report identifies the amount of depot-level maintenance work done outside public depots; (2) the Army can account for its depot-level maintenance workload, as required by statute; and (3) the corrective actions in the report are likely to address the proliferation issue and enhance the Army's reporting. The Army's proliferation report, issued in September 2003, did not fully identify the extent of depot-level maintenance work performed outside the Army's public depots. The report estimated that the Army underreported its fiscal year 2001 $2.7 billion depot-level maintenance program by $188.6 million but indicated that this was a rough estimate and that further analysis is needed. It attributed this underreporting largely to work performed in two categories--work that met the criteria for depot-level maintenance work but was not reported as such, and work at nondepot field facilities that involved depot-level maintenance tasks. GAO's prior reviews also identified these categories as key contributors to underreporting. While the report noted that the Army has an extensive maintenance infrastructure with redundant capabilities, it did not address the extent of this redundancy. The lack of complete information on the extent of depot-level maintenance workloads limits the Army's ability to fully account for this work in the Department of Defense's (DOD) annual report to Congress on the allocation of public- and private-sector depot-level maintenance spending. The 2003 proliferation report identified key Army limitations, including inconsistencies in applying the congressionally mandated definition of "depot maintenance," weaknesses in its management information systems, and the failure to follow established policies and procedures for authorizing depot-level maintenance work at nondepot facilities. GAO's current analysis and prior work confirmed that these limitations make it difficult for the Army to fully account for its maintenance workload as it moves closer to the 50 percent ceiling for work performed by contractors. GAO's most recent report on the Army's 50-50 reporting for fiscal year 2002 showed that, after adjustments for known underreporting, the percentage of private-sector work increased to 49 percent. If implemented, the 29 recommendations in the 2003 report could enhance the Army's ability to report on its 50-50 data and to evaluate the proliferation of depot-level maintenance work at nondepot facilities. The recommendations, which are consistent with those that GAO has previously made, are focused on key problem areas, such as the need for an improved understanding about the 50-50 rule and for compliance with reporting policies and procedures.
Efforts have been undertaken to address some of the problem areas. However, the Army has not yet developed an action plan that identifies priorities, time frames, roles and responsibilities, evaluation criteria, and resources for managing the implementation of the recommendations. Until the Army does so, it will be difficult to assess the extent to which the Army is likely to meet its desired objectives. While improvements should be achievable, the complexity and vastness of the Army's maintenance system and continuing questions about such issues as the definition of "depot maintenance" and changing maintenance strategies could continue to present challenges in fully recording all maintenance work that should be captured.
The Defense Environmental Restoration Program promotes and coordinates the cleanup of hazardous substances associated with past DOD activities. Funding for cleanup at operational installations and formerly used defense sites has come from the Defense Environmental Restoration Account (DERA) since 1984. Cleanup associated with installations designated for closure or realignment has been funded through the Base Realignment and Closure (BRAC) process since 1991. Under its statutory reporting requirements, DOD reports annually to Congress, providing information on installation cleanup sites, including, for example, background, status, progress made, and costs incurred and remaining to complete cleanup. Since fiscal year 1993, the report has listed when installation cleanup activity schedules are impeded by a lack of funding. The Deputy Under Secretary of Defense, Environmental Security, formulates policy and provides oversight for the Defense Environmental Restoration Program at operational and BRAC installations and formerly used defense sites. In fiscal year 1997, the centralized DERA was partitioned into five environmental restoration accounts: Army, Navy (including Marine Corps), Air Force, formerly used defense sites, and defense-wide. The components plan, program, and budget for the individual installation cleanup projects. The Army, as executive agent for DOD, implements the program at formerly used defense sites through the U.S. Army Corps of Engineers. DOD planning and budget guidance, as well as headquarters and component instructions, govern departmentwide planning, programming, and budgeting for the environmental restoration program. The Army, the Navy, and the Air Force headquarters allocate funds to intermediate commands, which ultimately allocate funds for cleanup at specific installations. Other defense components, such as the Defense Logistics Agency (DLA), allocate funds directly to specific locations for cleanups, and the Army Corps of Engineers executes funding for formerly used defense sites. The impact of overall planning and budget guidance is not necessarily traceable to specific installations or sites. The way components allocate funding is described in appendix II. For fiscal year 1995, Congress appropriated $400 million less than DOD requested for cleanup at operational installations and formerly used defense sites and rescinded another $300 million from the amount that had been appropriated. For fiscal year 1996, Congress appropriated $200 million less than DOD requested. Table 1 shows that DERA funding decreased from $1.638 billion in fiscal year 1993 to $1.314 billion in fiscal year 1997. For fiscal years 1995 and 1996, appropriations for DOD’s environmental cleanup program were less than requested. For fiscal year 1995, Environmental Security either provided written guidelines on how to determine which projects to fund or conferred verbally with component officials on how funds should be used. For fiscal year 1996, no guidance was given because the congressional appropriation specified the distribution of funds among the services.
Environmental Security’s November 1994 guidance to the defense components emphasized cleanup of sites that are the highest priority to stakeholders (those having an interest in cleanup activities, such as the community surrounding the installation) and regulators. Considerations included (1) involving stakeholders in decision-making, (2) taking interim remedial actions (early response actions that are identified and implemented at any time during the study or design phases of cleanup) instead of continuing studies, (3) giving priority to higher relative risk sites, (4) deferring studies that are not essential for safety or compliance with agreements, (5) reviewing expense data, (6) considering innovative technologies and generic remedies, and (7) funding field locations according to their fair share. The guidance was not installation specific, and service officials made the site decisions. Environmental Security officials stated that they discussed guidance with the defense components on how to implement the rescission. Considerations addressed included not deobligating funds for site projects already under way, limiting medium or low relative risk site work, limiting studies while ensuring a proper mix of study and cleanup, and deferring projects scheduled to begin in the later months of the fiscal year. The officials stated that they were unable to issue written guidance because there was only about a month between hearing about the proposed rescission and the actual rescission. DOD’s fiscal year 1996 appropriations act stipulated how the $1.42 billion for environmental restoration was to be distributed to the components. Environmental Security officials stated that, as a consequence, no further guidance was provided to defense components regarding the funding change. DOD’s annual reports to Congress for fiscal years 1995 and 1996 show that an increasing number of installations reported that their cleanup schedules were affected by funding limits. Some installations that received less funding than planned reported schedule delays, while others did not. However, some installations that received more funding than planned also reported unspecified schedule delays. Reported cleanup schedule delays due to funding increased from 6 in fiscal year 1993 to 204 in fiscal year 1995 and 481 in fiscal year 1996. In fiscal year 1997, such reported delays decreased to 135. Installations with the largest budget increases and decreases reported schedule delays with about equal frequency, and not all of the installations with the largest decreases reported schedule delays due to funding. Beginning in fiscal year 1993, DOD’s annual cleanup reports to Congress have identified installations where cleanup schedules were delayed by funding. Other causes for delays identified in the annual reports were technical, contracting, personnel, and regulatory. The reports further specified which of the four phases of cleanup (studies, interim actions, design, or actual cleanup) were affected. The greatest numbers of installations reporting cleanup schedule delays due to funding were in fiscal years 1995 and 1996. Figure 1 shows delays of cleanup schedules due to funding at installations, as reported by DOD, for fiscal years 1993-97. Among the installations that reported cleanup delays caused by funding limitations were facilities that received some of the largest increases in funding as well as facilities that received less funding than planned.
Twelve of the 23 Army facilities with the greatest total decreases between budget requests and funding in fiscal years 1995 and 1996 reported schedule delays in one or both years. However, 16 of the 23 Army facilities with the largest increases in funding also reported schedule delays. None of the 16 facilities that reported schedule delays despite receiving more funding than originally planned identified specific delays in the annual reports. Some facilities—for example, White Sands Missile Range, New Mexico—reported that their ability to undertake certain future actions depended on the availability of funding. Table 2 identifies selected installations that reported schedule impacts caused by funding. The table includes installations receiving net funding of at least $9 million less than planned for fiscal years 1995 and 1996 combined and shows which of the four cleanup phases were reported to be affected. DOD’s annual reports contain narratives of activity progress associated with a specific installation. Sometimes included in this description is a reference to funding effects. Examples of report narratives in fiscal year 1995 are: Adak Naval Air Facility: “Several activities planned in were deferred due to funding cutbacks, including IRAs [interim remedial action] at SWMUs . Removal Actions at two PCB sites; and a basewide Remedial Investigation and Feasibility Study.” Aberdeen Proving Ground: “Several activities were not completed or delayed because of funding cutbacks, including, the J-Field FS [feasibility study], RI characterization activities at Canal Creek, and RI/FS activities at the Westwood and Other Aberdeen Areas.” Twin Cities Army Ammunition Plant: “Closure of the [Grenade Range and Outdoor Firing Range] areas was hindered as a result of funding cutbacks.” However, not all installations that reported schedule delays due to funding provided a narrative reference to the effect in DOD’s annual reports. For example, although funding schedule delays were reported for Robins AFB and Oceana Naval Air Station, as shown in table 2, no description of these delays was provided in the narratives. Overall, of the 204 installations reporting a schedule delay due to funding in fiscal year 1995, 42 described the nature of the effect in the report’s progress narratives. In fiscal year 1996, 190 of 481 installations described the effect in narratives. In discussing these reports, DOD and service officials stated that there is no requirement to provide detailed narratives. In discussing defensewide reports of funding impacts, an Environmental Security official noted that installations may be aware of some changes in funding planned and allocated for projects but not others. The official said his experience indicated that specific funding changes may sometimes be affected by other changes, especially among installations with similar priorities. Although DOD developed information on the effect of receiving less funding than it requested, the actual changes were often different from those envisioned. Environmental Security officials prepared a May 1995 list that identified specific locations and sites that could be potentially affected by the $500 million in budget decreases. That list identified 5 programs and specific sites at 409 potentially affected locations. In discussing a draft of this report, DOD and service officials emphasized that variances should be expected between envisioned and actual impacts in quick-reaction responses such as the May 1995 list.
Officials said this was especially true in this case because actual expenditures by mid-year would already have varied from the plans available at headquarters. The May 1995 list and our visits to locations selected from it indicated that, as DOD components made specific decisions, the potential effects of receiving less than requested during fiscal years 1995 and 1996 did not always occur as initially envisioned by DOD and that the results of funding changes varied widely at the affected locations. For example: Dugway Proving Ground, Utah, was identified on the list to receive about $22 million less than requested for a medium relative risk priority site and a site not yet evaluated. However, Army records showed that the installation has 199 sites identified and actually received an increase of $631,000. An installation official stated that officials there were not aware of a potential reduction during that time period. Dahlgren Naval Surface Warfare Center, Virginia, was identified to receive $1.6 million less than requested, according to the list, but received $9.5 million less, according to Navy data. Command officials overseeing cleanup at the center stated that funding was reduced because its projects were not known to be executable. However, center officials stated that the projects had been delayed and that they knew of no impediments to beginning work on the affected sites if funds had been made available. Center officials also said that they did not provide input for DOD’s 1995 annual report and did not know why a funding impact was not reported. The former Lake Ontario Ordnance Works, New York, was identified on the list to receive about $10.9 million less than requested. However, Army officials responsible for the location said that the site was still in the design phase and that they knew of no plan to spend $10.9 million in fiscal years 1995 or 1996. Regarding your specific interest in the Badger Army Ammunition Plant, Wisconsin, DOD’s May 1995 list identified the plant as potentially receiving about $1.3 million less than requested. DOD’s annual reports for fiscal years 1995 and 1996 indicated that the plant had schedule delays due to funding for projects in the cleanup design and actual cleanup phases. Although plant officials told us that they did not receive the full amounts they had requested in fiscal years 1995 and 1996, they did not know of funding differences attributable to congressional changes in actual versus requested funds. Funding data for the plant varied by reporting source. For example, Environmental Security office data attributed $2.7 million of the President’s budget for fiscal year 1995 to the Badger plant, increasing to $6.5 million after the $300-million rescission. Army Environmental Center data initially attributed $17.2 million to the plant rather than $2.7 million but showed a figure similar to Environmental Security’s after the rescission ($6.5 million). At the times of our visits, Army, state, and contractor officials were working together to optimize results within available funds. For example, plant officials had proposed reducing ground water monitoring wells while increasing actual cleanup, such as for contaminated soil. Also, the Army Industrial Operations Command determined, subsequent to our visits, that the plant is excess to its production mission, requiring some additional demolition of the facilities.
DOD uses its planning, programming, and budgeting process to make funding decisions, and DOD components ultimately make site-specific decisions. When DOD received less funding than requested or rescissions occurred, Environmental Security provided written or oral guidance for DOD components’ actions. Cleanup schedule delays occurred at installations whether the funding received was more or less than planned. Reports of cleanup schedule and other impacts varied according to individual project circumstances and were not clearly linked to installations’ planned and allocated funding levels. We requested comments on a draft of this product from the Secretary of Defense or his designee. An official of the Office of the Deputy Under Secretary of Defense for Environmental Security stated that DOD concurred with our presentation of the issues. Technical comments have been incorporated as appropriate. As arranged with your office, we plan no further distribution of this letter until 30 days from its issue date, unless you publicly announce the letter’s contents earlier. At that time, we will make copies available to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant, Marine Corps; the Directors, Defense Logistics Agency and Defense Special Weapons Agency; and other interested parties. Please contact me at (202) 512-8412 if you have any questions about this report. Major contributors to this report are listed in appendix IV. To describe the Department of Defense’s (DOD) process for allocating funds, we reviewed DOD’s April 1994 management guidance that addressed how DOD handles funding responsibilities for the defense restoration program, and a March 1998 update to this guidance. In addition, we reviewed supplemental program guidance, DOD’s April 1996 defense instruction on the Defense Environmental Restoration Program, and components’ restoration guidance. We interviewed officials at the Environmental Security office, the Defense Logistics Agency (DLA), the Defense Special Weapons Agency, and the military services about the implementation of DOD’s guidance for allocating funds. To identify reported schedule changes due to funding, we compared automated funding data obtained from the defense components showing planned and obligated cleanup funding by installation with automated information from DOD’s annual reports on cleanup schedules affected by funding. We discussed funding changes and effects with environmental and budget officials and compared what was reported for the installations in DOD’s annual reports for fiscal years 1995 and 1996 with command and installation records at the following selected commands and field installations: Army Materiel Command, Alexandria, Virginia; Naval Facilities Engineering Command, Chesapeake Activity, Washington, D.C.; Naval Facilities Engineering Command, Atlantic Division, Norfolk, Virginia; Air Force Aeronautical Systems Center, Dayton, Ohio; DLA Defense Distribution Region East, New Cumberland, Pennsylvania; Dugway Proving Ground, Utah; Badger Army Ammunition Plant, Wisconsin; Dahlgren Naval Surface Warfare Center, Virginia; Yorktown Naval Weapons Station, Virginia; Camp Lejeune Marine Corps Base, North Carolina; Tinker Air Force Base, Oklahoma; Air Force Plant Number 4, Texas; and the former Lake Ontario Ordnance Works, New York. We conducted our review from March 1997 to July 1998 in accordance with generally accepted government auditing standards.
Beginning in fiscal year 1997, with the devolvement of the Defense Environmental Restoration Account (DERA), the responsibility for planning, programming, and budgeting transferred from the Office of the Secretary of Defense to the individual military components. According to March 1998 management guidance for the Defense Environmental Restoration Program, the Office of the Under Secretary of Defense for Environmental Security formulates policy and provides oversight for the environmental restoration program at operational and Base Realignment and Closure installations and formerly used defense sites. The components, the Defense Logistics Agency (DLA), and the Defense Special Weapons Agency execute their own restoration programs. Environmental Security’s April 1994 guidance for the execution of the Defense Environmental Restoration Programs of fiscal years 1994-95 and the development of the program for fiscal year 1996 directed defense components to submit funding requirements to Environmental Security, which would transfer funding to military component appropriation accounts, such as operations and maintenance. Furthermore, the guidance states that the risk to human health and the environment presented by a site should be the main factor in determining priority and should be considered in scheduling site cleanup with regulatory agencies. Although the previously single DERA was devolved into five separate accounts, Environmental Security still sets policy and provides oversight for component execution of DOD’s cleanup program. The Assistant Secretary of the Army for Installations, Logistics, and Environment, through the Deputy Assistant Secretary of the Army for Environment, Safety, and Occupational Health, is responsible for policy on all Army environmental programs. The Assistant Chief of Staff for Installation Management, through the Director of Environmental Programs, oversees the Army’s environmental program. The U.S. Army Environmental Center, as the program manager, develops the budget and workplan and coordinates program activities and requirements with the major Army commands. Before fiscal year 1997, the Army Environmental Center allocated funds to the installations, but that function is now the responsibility of the major commands. Army officials indicated that, before funds are allocated to major commands for program execution, funding is first set aside for priority installations, program management, defensewide programs, and certain sites with either medium or low relative risk. The Assistant Secretary of the Navy for Installations and Environment is responsible for the Navy program and coordinates Navy and Marine Corps policy. The Chief of Naval Operations establishes policy, directs and monitors the program, and coordinates sites with the Marine Corps. The Naval Facilities Engineering Command executes the Navy and the Marine Corps programs, provides technical support, develops and supports resource requests and programs, and manages funds allocated for program execution. The command implements the program through its engineering field divisions and activities, which are responsible for executing the program at the installation level. These field divisions and activities provide information for the Navy, manage and administer cleanup contracts, coordinate and negotiate remediation agreements with regulators, develop and perform site-specific projects in coordination with installations, track project progress, and provide technical and financial oversight.
The Deputy Assistant Secretary of the Air Force for Environment, Safety, and Occupational Health is responsible for interpreting and disseminating environmental guidance and for overseeing the development and dissemination of Air Force restoration policy and program guidance. The Air Force Civil Engineer has overall responsibility for the Air Force program and oversees the related policy and guidance. The Civil Engineer develops Air Force policy and guidance, develops Air Force goals, submits the budget, and monitors its execution. Air Force major commands are responsible for providing guidance to their installations, validating and programming funding requirements, and executing the program. The Civil Engineer allocates funds to Air Force commands, which allocate funds to their installations.

DLA and the Defense Special Weapons Agency both centrally manage funding of their installations. Environmental Security determines how much funding each agency will receive based on the cleanup requirements submitted to support their budget requests. The agencies develop funding plans for executing cleanup. DLA uses the Army Corps of Engineers to implement and oversee cleanup operations at its installations.

The Army serves as the executive agent for formerly used defense sites, and its program is executed by the Army Corps of Engineers. Corps districts implement and oversee projects. Corps officials stated that the Corps consolidates and prioritizes requirements into workplans, which are provided to the Army for approval. Environmental Security programs and budgets funding to the Army, which then provides funds to the Corps for formerly used defense sites. Corps districts allocate funds for site cleanup and oversee the actions. Because of the devolvement of DERA, a separate environmental restoration account exists for the formerly used defense sites program.

Major contributor to this report: Margaret Armen
Pursuant to a congressional request, GAO provided information on: (1) the Department of Defense's (DOD) process for allocating approved environmental cleanup budgets when funds received are less than requested or budget rescissions occur; and (2) reported cleanup schedule delays due to lack of funding. GAO noted that: (1) DOD develops and allocates approved budgets through its departmentwide planning, programming, and budget process; (2) the components used DOD guidance to establish priorities and distribute funds to the various installations, but the impact of that guidance is not necessarily traceable to specific installations or sites; (3) during fiscal years (FY) 1993 to 1997, Congress took three actions that significantly affected funding for DOD cleanup activities; (4) in FY 1995, Congress appropriated $400 million less than DOD requested and then rescinded an additional $300 million of the amount appropriated; (5) Congress appropriated $200 million less than DOD had requested for FY 1996; (6) in each case, DOD components adjusted funding priorities in light of the congressional actions and DOD guidance; (7) while specific guidance varied, both written and verbal guidance encouraged priority for sites of high risk and discouraged cleanup studies that were not essential; (8) data contained in DOD's annual reports to Congress and in DOD components' records do not show a direct relationship between installations receiving less or more funding than planned and those reporting cleanup schedule delays due to funding; (9) for example, during FY 1995 and FY 1996, about half of the Army installations with the largest decreases in funding reported cleanup schedule delays--a frequency similar to Army installations with the largest increases in funding; (10) during this period, GAO also found that actual funding changes under the DOD process often varied from those initially envisioned for such reasons as inherent uncertainty during cleanup planning; and (11) for example, DOD initially identified a potential decrease in funding for two sites at Dugway Proving Ground, Utah, whereas the Army allocated a slight overall funding increase to that installation, which has 205 cleanup sites.
We are unable to give an opinion on the Statement of Financial Position as of September 30, 1996, because IRS could not provide adequate documentation to support the classification of its inventory of unpaid assessments as federal tax receivables and compliance assessments. Because we were unable to determine the appropriateness of IRS’ classifications of its inventory of unpaid assessments, we were unable to determine whether the amounts reported for net federal tax receivables and the related allowance for doubtful accounts as reflected on the Statement of Financial Position as of September 30, 1996, were fairly stated. Also, because of this limitation, which affects over 95 percent of the custodial assets on the Statement of Financial Position and which prevented us from being able to give an opinion, we did not perform testing of other line items on the Statement of Financial Position, such as Frozen Tax Refunds and Credits, Tax Refunds Payable, Advances, and Commitments and Contingencies. As we have reported in the past and as discussed in a later section of this report, IRS lacks an accounts receivable subsidiary ledger or other similar mechanism that routinely tracks receivables individually from period to period. This condition requires that IRS use alternative methods to identify the amounts to be recorded as federal tax receivables on its financial statements. However, these methods thus far have not provided IRS with the capability to report accurate and supportable amounts for federal tax receivables. Further, these methods have not provided the means necessary for IRS to effectively manage and routinely monitor the status of amounts owed by taxpayers. This makes it difficult to determine a reasonable estimate of amounts deemed collectible and could reduce the amounts IRS may ultimately be able to collect on its federal tax receivables. Because IRS could not provide sufficient evidence to support the classification of certain itemized taxes collected and refunded, we could not determine whether the classifications of collection and refund amounts by tax type—for example, payroll versus corporate taxes—as reflected on the Statement of Custodial Activity were reliable. Otherwise, in our opinion, the Statement of Custodial Activity presents fairly, in all material respects, in conformity with a comprehensive basis of accounting other than generally accepted accounting principles as described in note 1, IRS’ custodial activities for taxes collected, refunded, and distributed. We evaluated management’s assertion about the effectiveness of its internal controls designed to safeguard assets against loss from unauthorized acquisition, use, or disposition; assure the execution of transactions in accordance with laws and regulations that have a direct and material effect on the Custodial Financial Statements or are listed in Office of Management and Budget (OMB) audit guidance and could have a material effect on the Custodial Financial Statements; and properly record, process, and summarize transactions to permit the preparation of reliable financial statements and to maintain accountability for assets.
IRS management stated that, except for the material weaknesses in internal controls presented in the agency’s fiscal year 1996 FMFIA report on compliance with the internal control and accounting standards, internal controls provide reasonable assurance that the following would be prevented or detected for amounts material in relation to the financial statements: unauthorized acquisition, use, or disposition of assets; noncompliance with laws and regulations; and misstatements in amounts reported in the financial statements. Management made this assertion based upon criteria established under FMFIA and OMB Circular A-123, Management Accountability and Control. For financial statement reporting, a material weakness is a condition that precludes the entity’s internal control from providing reasonable assurance that losses, noncompliance, or misstatements material in relation to the financial statements will be prevented or detected on a timely basis. The following material weaknesses, which we also found in our prior audits of IRS, were reported in IRS’ FMFIA report for fiscal year 1996, with the exception of the computer security issues discussed below. These deficiencies in internal controls may adversely affect any decision by management that is based, in whole or in part, on information that is inaccurate because of the deficiencies. Our internal control work would not necessarily disclose material weaknesses not reported by IRS. Unaudited financial information reported by IRS may also contain misstatements resulting from these deficiencies. The nature of these weaknesses was such that they affected our ability to (1) render an opinion on IRS’ fiscal year 1996 financial statements taken as a whole and (2) conclude on IRS’ compliance with laws and regulations we tested, as discussed in a later section of this report. Consequently, we believe that the internal controls were not effective in satisfying the objectives discussed above during fiscal year 1996. As discussed above, IRS does not maintain an accounts receivable subsidiary ledger or other similar mechanism that routinely tracks receivables and their related activity on an ongoing basis. Consequently, IRS does not have readily available the information on receivables it needs to prepare its financial statements. To compensate for this, IRS runs computer programs against its masterfiles—the only detailed record of taxpayer information it maintains—to identify taxpayer accounts for which assessments or other debits exceed receipts received or other credits made to taxpayers’ accounts. After these accounts—unpaid assessments—have been identified, IRS runs computer programs that utilize transaction and other codes within the masterfiles to separately classify these accounts as financial receivables or compliance assessments. Those accounts that are classified as financial receivables are then evaluated on a statistical basis by IRS to estimate what amount IRS ultimately believes it will collect on its receivables. The total amount deemed collectible by IRS, based on a projection of its statistical sample, is reported as federal tax receivables on its custodial Statement of Financial Position. The difference between the amount estimated to be collectible and the total amount identified as financial receivables is reported on the custodial Statement of Financial Position as an allowance for doubtful accounts.
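The computation just described can be summarized in a short sketch. The figures below are entirely hypothetical (this report does not disclose IRS’ sampling data); the sketch only illustrates how a sample-based collectibility estimate yields the reported net receivables and the allowance for doubtful accounts.

    # Hypothetical illustration of the estimation process described above.
    # All figures are invented for the example; they are not IRS data.

    # Unpaid assessments classified as financial receivables (gross), in dollars
    gross_receivables = 100_000_000_000

    # Results of a statistical sample of receivable accounts: each account's
    # recorded balance and the portion of that balance deemed collectible
    sampled_balances    = [10_000, 250_000, 5_000, 75_000, 40_000]
    sampled_collectible = [10_000, 50_000, 0, 75_000, 10_000]

    # Project the sample's collectible ratio onto the gross balance
    collectible_ratio = sum(sampled_collectible) / sum(sampled_balances)
    net_receivables = gross_receivables * collectible_ratio

    # Allowance for doubtful accounts = gross receivables minus amount deemed collectible
    allowance_for_doubtful = gross_receivables - net_receivables

    print(f"Collectible ratio:               {collectible_ratio:.1%}")
    print(f"Net federal tax receivables:     ${net_receivables:,.0f}")
    print(f"Allowance for doubtful accounts: ${allowance_for_doubtful:,.0f}")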
In our audit of IRS’ fiscal year 1995 financial statements, we reported that IRS was unsuccessful in deriving reliable receivables information for use in preparing the financial statements. We reported that errors we identified in the transaction and other coding of assessments within the masterfiles, coupled with mistakes IRS made in performing the statistical procedures, resulted in IRS’ sampling results being unreliable for purposes of projecting both the gross and net receivable amounts for financial reporting. For our fiscal year 1996 audit, we again reviewed IRS’ process for extracting and classifying taxpayer assessments into financial receivables and compliance assessments. We also tested samples of assessments classified by IRS as both financial receivables and compliance assessments to determine whether IRS’ classifications were appropriate. To test for proper classification, we attempted to review supporting documents in taxpayer files, such as tax returns, receipt deposits, correspondence between the taxpayer and IRS, and other pertinent information. We found that IRS could not locate sufficient supporting documentation (such as tax returns and installment agreements) for us to determine whether IRS had properly classified its inventory of unpaid assessments as either federal tax receivables or compliance assessments. Thus, we were unable to determine whether IRS had appropriately recognized federal tax receivables on the Statement of Financial Position. IRS officials stated that the missing documents had either been destroyed based on the agency’s record retention policies or simply could not be located. The lack of a detailed listing, or subsidiary ledger, for receivables, coupled with the fact that IRS does not readily maintain supporting documents on outstanding accounts receivable, increases the risk that material amounts may be inappropriately included in or excluded from the financial statements. Additionally, because IRS in many cases did not maintain adequate documentation to support the underlying assessments, its ability to pursue collection from taxpayers on amounts owed could be impaired, resulting in lost tax revenue, including interest and penalties, to the government. In an effort to address some of the concerns noted above, IRS is continuing to review all individual assessments in excess of $10 million identified through its computer programs as financial receivables and compliance assessments to ensure their proper classification. Additionally, IRS is continuing to refine its efforts to more accurately classify its unpaid assessments inventory through various enhancements to the computer programs it uses to classify these assessments. As part of a larger and long-term effort to modernize its systems, IRS is also identifying and refining the business and system requirements necessary to assess the status of its unpaid assessments and manage its receivables. IRS’ efforts are consistent with our recommendations from prior years’ audits that (1) in the long term, IRS ensure that tax system modernization efforts provide a mechanism to enable it to readily identify and routinely track and report on the status of federal tax receivables and (2) in the short term, IRS continue to identify ways to improve the accuracy of receivables reporting through further enhancements to its computer programs and detailed reviews of taxpayer accounts. (See appendix I.)
As we have reported in our prior financial audits, IRS’ custodial financial management system was not designed to readily support the preparation of financial statements. Specifically, IRS’ Revenue Accounting Control System—its general ledger—is unable to sufficiently identify detailed tax revenues collected and related refunds paid to permit the preparation of its Custodial Financial Statements. For fiscal year 1995, we reported that IRS had attempted to extract taxpayer information from its masterfiles to support the amounts it reported as revenues on the fiscal year 1995 Custodial Financial Statements. We reported that, while IRS extracted taxpayer information from its masterfiles, it could not adequately reconcile this information to its general ledger and the Department of the Treasury’s Financial Management Service’s (FMS) records. For fiscal year 1996, IRS again extracted detailed taxpayer information from its masterfiles to derive the reported amounts for revenue collections and refunds by tax type on the Custodial Financial Statements. IRS then performed reconciliations between the information used to derive the financial statements and (1) summary amounts recorded in its general ledger and (2) amounts reported for tax revenues collected and refunds paid by FMS. We found that, for fiscal year 1996, IRS’ overall reconciliation showed that its masterfiles, its general ledger, and the amounts reported by FMS were, in total, materially the same. Based on this, and on our detailed tests of revenue collection and refund transactions, we were able to determine that the total Net Collections of Federal Revenue as reported on the fiscal year 1996 Statement of Custodial Activity was fairly stated in all material respects in relation to the financial statements taken as a whole. However, we were unable to determine whether revenue collection and refund amounts reported by tax type on the financial statements were properly classified. We were unable to make this determination primarily because (1) IRS could not always provide documentation to support certain transactions and (2) its record retention policies and practices resulted in the destruction of other key documents. By not maintaining the documentation necessary to support revenue collection and refund activity, IRS significantly reduces its ability to accurately report such activity by tax type on its financial statements. To address its record retention problems, IRS is performing an in-depth review to determine for what period, and in what form, records will be retained to ensure that it has the information necessary to support tax revenue collections and refunds. IRS relies on computerized information systems to process and account for its revenue and taxpayer data. These systems should include controls to prevent or detect unauthorized access and intentional or inadvertent unauthorized modifications to the data and related computer programs. In our prior audits of IRS’ financial statements, we reported material weaknesses in IRS’ computer security. Also, in April 1997 we reported that IRS continues to have serious weaknesses in the controls used to safeguard IRS computer systems, facilities, and taxpayer data. Our review of controls, done to support our audit of IRS’ fiscal year 1996 financial statements, found that such controls continued to be ineffective. Many issues we previously identified at five IRS sites remained unresolved at the completion of our review of IRS computer security controls in May 1997.
These include serious weaknesses in the areas of (1) physical security, (2) logical security, (3) data communications management, (4) risk analysis, (5) quality assurance, (6) internal audit and security, (7) security awareness, and (8) contingency planning. As a result, we consider computer security to be a material weakness because IRS data or programs could be added, altered, or deleted without timely detection. Further, we identified examples of weaknesses in our current review that allowed for unauthorized access and modification to computer resources, including computer programs and data. The more significant weaknesses include the following:

Computer support personnel were granted excessive access to read or change sensitive system files or resources. This access gave them the ability to change, alter, or delete taxpayer data and associated programs. Access to such data files, which include the basic operating system software, should be limited to the minimum number of computer support personnel needed for maintenance and review. For example, at one facility, 88 computer support personnel had the ability to implement programs not controlled by the security software.

Computer support personnel were granted inappropriate access, including the ability both to obtain access to data or programs and to alter the automated audit trail that identifies who entered or changed data. The inherent risk in these privileges is that data or programs can be added, modified, or deleted and the related audit trail masked or deleted.

Computer support personnel access to system resources was not adequately monitored. Monitoring the access activities of employees, especially those who have the ability to alter sensitive programs and data, can help identify significant problems and deter employees from inappropriate and unauthorized activities. IRS systems record user and system activity in automated audit logs. However, when thousands of transactions are involved, reviews cannot be effective unless managers receive reports that highlight unusual or suspicious activity so that it can be investigated. Proper supervision of employee actions, especially the actions of those having broad access privileges, requires routine assurance concerning the propriety of their activities.

IRS sites had incomplete disaster recovery plans. The absence of a comprehensive, current plan increases the likelihood that IRS would not be able to restore operations on a timely basis in the event of a local disaster and increases the risk that IRS’ computerized information systems would be unavailable.

At one site, IRS allowed improper access to the commands used to authorize and generate taxpayer refund checks. Having access to these commands would allow an individual to process a refund payment without review and approval by a second party. In addition, although there were methods available for reviewing such access, no monitoring or review processes were in place to detect improper refund transactions. This increases the likelihood that a person with such privileges could perform unauthorized refund activities; further, without timely review, the likelihood of identifying such incidents is decreased.

As discussed above, IRS could not provide adequate documentation to support the classification of its inventory of unpaid assessments with respect to federal tax receivables, and of certain itemized taxes with respect to tax collections and tax refunds.
As a result, we were unable to (1) determine whether federal tax receivables as reported were valid and collectible, (2) determine whether tax collections and refunds were properly classified within the appropriate tax class, and (3) test for compliance with laws deemed significant to the financial statements. Accordingly, we are unable to report on IRS’ compliance with laws and regulations. When sufficient evidence to support information reported in the financial statements is not available for audit, we cannot determine whether IRS complied with laws and regulations deemed significant to the financial statements. For example, as discussed earlier, IRS was unable to provide documentation in many cases to support unpaid tax assessments. Similarly, as discussed earlier, IRS was unable to provide documentation to support its reporting of tax collections and refunds by tax type. Consequently, in both of these cases, we were unable to determine whether the transactions recorded in IRS’ accounting records complied with laws and regulations. However, we did note that one issue we have reported in our prior audits continued to exist during fiscal year 1996. Specifically, IRS did not base its certifications of excise tax amounts distributed to specific trust funds on amounts actually collected. As we have reported in prior audits, IRS based its certifications of excise tax distributions to specific trust funds on the assessed amount, or amount owed, as reflected on the tax returns filed by taxpayers. This is because IRS does not require taxpayers to provide the information necessary at the time taxes are collected to certify the distributions on the basis of amounts actually collected. By law, distributions of excise taxes to specific trust funds are to be based on actual collections. IRS has studied various options to enable it to make final certifications of amounts distributed based on actual collections and to develop the underlying information needed to support such distributions. IRS has finalized a methodology for addressing this issue and intends to implement it in fiscal year 1998. We will assess IRS’ implementation of its proposal in future audits. IRS’ Overview and Supplemental Information contain various data, most of which are not directly related to the Custodial Financial Statements. We do not express an overall opinion on this information. Additionally, because we were unable to express an opinion on the financial statements taken as a whole due to IRS’ inability to provide sufficient evidence to support amounts reported in its financial statements and the material weaknesses in internal controls discussed above, we did not pursue further work on this information. In our prior reports, we made 30 recommendations aimed at improving IRS’ custodial accounting operations. In our assessment this year, we determined that, to date, IRS had completed action on eight of these recommendations. IRS believes that it has resolved an additional 13 recommendations and anticipates closing the remaining nine in fiscal year 1998. We will review IRS’ actions to resolve the 13 recommendations IRS believes it has closed as part of our fiscal year 1997 financial statement audit. With respect to six of the 22 recommendations, we provided more specific recommendations in our April 1997 report on IRS systems security.
Progress has been made, and actions are underway by IRS to try to resolve the material weaknesses in internal controls and the financial management problems reported in our audits. Additional corrective actions are still needed, and IRS continues to state its intention to commit the necessary resources and management oversight to resolve these weaknesses. We will continue to advise IRS on how to resolve these long-standing financial management problems. Appendix I provides the status of IRS’ implementation efforts on the remaining outstanding recommendations.

Management is responsible for preparing the annual Custodial Financial Statements in conformity with the basis of accounting described in note 1; establishing, maintaining, and assessing internal control to provide reasonable assurance that the broad control objectives of FMFIA are met; and complying with applicable laws and regulations. We are responsible for obtaining reasonable assurance about whether (1) the Statement of Custodial Activity is reliable (free of material misstatements and presented fairly, in all material respects, in conformity with the basis of accounting described in note 1) and (2) management’s assertion about the effectiveness of internal controls is fairly stated, in all material respects, based upon criteria established under the Federal Managers’ Financial Integrity Act of 1982 and Office of Management and Budget Circular A-123, Management Accountability and Control. In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts in the Statement of Custodial Activity and related disclosures; assessed the accounting principles used and significant estimates made by management in the preparation of the Statement of Custodial Activity; evaluated the overall presentation of the Statement of Custodial Activity; obtained an understanding of the internal control structure related to safeguarding assets, compliance with laws and regulations, and financial reporting, except in the above-noted areas where IRS was unable to provide sufficient evidence to support amounts reported in its financial statements; and tested relevant internal controls over safeguarding, compliance, and financial reporting and evaluated management’s assertion about the effectiveness of internal controls, except in the above-noted areas where IRS was unable to provide sufficient evidence to support amounts reported in its financial statements. We did not evaluate all internal controls relevant to operating objectives as broadly defined by FMFIA, such as those controls relevant to preparing statistical reports and ensuring efficient operations. We limited our internal control testing to those controls necessary to achieve the objectives outlined in our opinion on management’s assertion about the effectiveness of internal controls. We attempted to perform audit procedures on the limited information IRS provided; however, for the reasons stated above, we were unable to perform the necessary audit procedures to opine on IRS’ Custodial Statement of Financial Position or to report on IRS’ compliance with laws and regulations. We did our work in accordance with generally accepted government auditing standards and OMB Bulletin 93-06, Audit Requirements for Federal Financial Statements. In commenting on a draft of this report, IRS stated that its ability to obtain a qualified opinion on its Statement of Custodial Activity was a significant accomplishment.
IRS also reaffirmed its commitment to improving its revenue reporting and to developing a revenue accounting system that will address the shortcomings cited in this report. IRS stated that it generally agreed with the findings and conclusions in this report; however, it questioned our inability to express an opinion on the financial statements taken as a whole because of concerns with internal controls. IRS officials based that view on their interpretation of auditing standards. In referring to these standards, IRS stated that internal control weaknesses do not preclude rendering an opinion on the financial statements, since the assessment of internal controls is performed to determine the extent of reliance that can be placed on internal controls and hence the nature, timing, and extent of substantive testing required. While this statement is conceptually correct, the nature of one of the significant internal control weaknesses discussed in this report—specifically, the lack of supporting documentation—prevented us from substantiating significant line items on IRS’ financial statements. In planning the fiscal year 1996 audit, we had to consider the weak internal control environment at IRS and, in fact, designed our audit procedures based on the assumption that we could not rely on internal controls. This resulted in our having to increase the level of testing necessary to support our opinion. However, the lack of sufficient evidence to support (1) the validity of amounts included in its tax accounts receivable and (2) the classifications of receipts and refunds by tax class precluded us from being able to opine on the financial statements taken as a whole. As discussed in this report, among the basic documents that IRS could not locate, and that therefore were not available to us, were tax returns and other agreements that are typically generated or signed by the taxpayer. As a result, we were unable to verify the amounts reported in the financial statements for taxes receivable and for receipts and refunds by tax class, which are material to the financial statements taken as a whole, and to report on IRS’ compliance with laws and regulations. The existence of an audit trail to substantiate transactions is fundamental to good accounting practices, and appropriate documentation is necessary to permit audit assurance absent other means to validate these transactions. Further, while IRS believes it provided sufficient alternative documentation for the majority of tax accounts receivable cases in which original supporting documentation could not be obtained, we considered the alternatives provided and found them unacceptable. Specifically, we found that the information generated from IRS systems could not be corroborated with sources external to IRS. While acknowledging that material internal control and system weaknesses related to tax accounts receivable existed, IRS disagreed that these weaknesses would affect its ability to effectively manage and routinely monitor the status of amounts owed by taxpayers or its ability to pursue collection. We disagree. As we reported in prior years, improved internal controls and systems would allow IRS to more effectively manage its tax accounts receivable. For example, IRS could better manage its collection efforts if it had readily available detailed subsidiary records of collection activity to augment the data used to establish collection priorities.
Also, not having relevant supporting documentation available, such as filed tax returns and collection files, can impair the collection process when taxpayers dispute amounts owed.

The results of our efforts to audit IRS’ fiscal year 1992, 1993, 1994, and 1995 Principal Financial Statements were presented in our reports entitled Financial Audit: Examination of IRS’ Fiscal Year 1992 Financial Statements (GAO/AIMD-93-2, June 30, 1993), Financial Audit: Examination of IRS’ Fiscal Year 1993 Financial Statements (GAO/AIMD-94-120, June 15, 1994), Financial Audit: Examination of IRS’ Fiscal Year 1994 Financial Statements (GAO/AIMD-95-141, August 4, 1995), and Financial Audit: Examination of IRS’ Fiscal Year 1995 Financial Statements (GAO/AIMD-96-101, July 11, 1996). In these prior reports, we made numerous recommendations to improve IRS’ custodial accounting operations. We determined the status of the recommendations based on our audit work on IRS’ fiscal year 1996 Custodial Financial Statements and on our discussions with IRS officials. Our assessments of IRS’ actions for several recommendations are discussed in the report. However, we have not fully assessed the effectiveness of all of the responses identified in the following table.

Financial Audit: IRS Significantly Overstated Its Accounts Receivable (GAO/AFMD-93-42, May 6, 1993)

Provide the IRS Chief Financial Officer authority to ensure that IRS accounting system development efforts meet its financial reporting needs. At a minimum, the Chief Financial Officer’s approval of related system designs should be required.

Take steps to ensure the accuracy of the balances reported in IRS financial statements. In the long term, this will require modifying IRS systems so that they are capable of (1) identifying which assessments currently recorded in the Master File System represent valid receivables and (2) designating new assessments that should be included in the receivables balance as they are recorded. Until these capabilities are implemented, IRS should rely on statistical sampling to determine what portion of its assessments represent valid receivables.

Clearly designate the Chief Financial Officer as the official responsible for coordinating the development of performance measures related to receivables and for ensuring that IRS financial reports conform with applicable accounting standards.

Modify the IRS methodology for assessing the collectibility of its receivables by
—including only valid accounts receivable in the analysis;
—eliminating, from the gross receivables balance, assessments determined to have no chance of being collected;
—including an analysis of individual taxpayer accounts to assess their ability to pay;
—basing group analyses on categories of assessments with similar collection risk characteristics; and
—considering current and forecast economic conditions, as well as historical collection data, in analyses of groups of assessments.

Once the appropriate data are accumulated, IRS may use modeling to analyze the collectibility of accounts on a group basis, in addition to separately analyzing individual accounts. Such modeling should consider factors that are essential for estimating the level of losses, such as historical loss experience, recent economic events, and current and forecast economic conditions. In the meantime, statistical sampling should be used as the basis for both individual and group analyses.
IRS Information Systems: Weaknesses Increase Risk of Fraud and Impair Reliability of Management Information (GAO/AIMD-93-34, September 22, 1993)

Limit access authorizations for individual employees to only those computer programs and data needed to perform their duties and periodically review these authorizations to ensure that they remain appropriate.

Monitor efforts to develop a computerized capability for reviewing user access activity to ensure that it is effectively implemented.

Establish procedures for reviewing the access activity of unit security representatives.

Use the security features available in IRS’ operating systems software to enhance system and data integrity.

Require that programs developed and modified at IRS headquarters be controlled by a program librarian responsible for (1) protecting such programs from unauthorized changes, including recording the time, date, and programmer for all software changes, and (2) archiving previous versions of programs.

Establish procedures requiring that all computer program modifications be considered for independent quality assurance review.

Formally analyze Martinsburg Computing Center’s computer applications to ensure that critical applications have been properly identified for purposes of disaster recovery.

Monitor service center practices regarding the development, documentation, and modification of locally developed software to ensure that such software use is adequately controlled.

Review the current card key access system in the Philadelphia Service Center to ensure that only users who need access to the facilities protected by the system have access and that authorized users each have only one unique card key.

Establish physical controls in the Philadelphia Service Center to protect computers with access to sensitive data that are not protected by software access controls.

Financial Management: Important IRS Revenue Information Is Unavailable or Unreliable (GAO/AIMD-94-22, December 21, 1993)

Develop a method to determine specific taxes collected by trust fund so that the difference between amounts assessed and amounts collected is readily determinable and excise tax receipts can be distributed as required by law. This could be done by obtaining specific payment detail from the taxpayer, consistent with our April 1993 FTD report. Alternatively, IRS might consider whether allocating payments to specific taxes based on the related taxpayer returns is a preferable method.

Determine the trust fund revenue information needs of other agencies and provide such information, as appropriate. If IRS is precluded by law from providing needed information, IRS should consider proposing legislative changes.

Identify reporting information needs, develop related sources of reliable information, and establish and implement policies and procedures for compiling this information. These procedures should describe any (1) adjustments that may be needed to available information and (2) analyses that must be performed to determine the ultimate disposition and classification of amounts associated with in-process transactions and amounts pending investigation and resolution.

Establish detailed procedures for (1) reviewing manual entries to the general ledger to ensure that they have been entered accurately and (2) subjecting adjusting entries to supervisory review to ensure that they are appropriate and authorized.
Monitor implementation of actions to reduce the errors in calculating and reporting manual interest, and test the effectiveness of these actions.

Give priority to the IRS efforts that will allow for earlier matching of income and withholding information submitted by individuals and third parties.

Financial Audit: Examination of IRS’ Fiscal Year 1993 Financial Statements (GAO/AIMD-94-120, June 15, 1994)

Ensure that system development efforts provide reliable, complete, timely, and comprehensive information with which to evaluate the effectiveness of its enforcement and collection programs.

Establish and implement procedures to analyze the impact of abatements on the effectiveness of assessments from IRS’ various collection programs.

Reconcile detailed revenue transactions for individual taxpayers to the master file and general ledger.

Establish and implement procedures to proactively identify errors that occur during processing of data, and design and implement improved systems and controls to prevent or detect such errors in the future.

Develop and implement systems and standard operating procedures that incorporate controls to ensure that seized asset inventory records are accurately maintained, including
—establishing specific procedures to ensure the prompt and accurate recording of seizures and disposals, including guidance addressing the valuation of seized assets;
—reconciling accounting and inventory records monthly as an interim measure until the successful integration of inventory and accounting systems is completed; and
—implementing mechanisms for ensuring that annual physical inventories at field locations are effectively performed, that discrepancies are properly resolved, and that inventory records are appropriately adjusted.

Determine what information related to seized assets, such as proceeds and liens and other encumbrances, would be most useful to IRS managers for financial management purposes and develop a means for accounting for these data.

The following is GAO’s comment on the Commissioner of IRS’ letter dated November 26, 1997.

1. Discussed in “Agency Comments and Our Evaluation” section.
Pursuant to a legislative requirement, GAO examined the Internal Revenue Service's (IRS) custodial financial statements for fiscal year (FY) 1996. GAO noted that: (1) GAO was unable to give an opinion on the statement of financial position because IRS could not provide adequate documentation to support its balance of federal taxes receivable; (2) the statement of custodial activity was reliable in all material respects, except that sufficient evidence supporting the classification of itemized tax collections and refunds was not available; (3) while GAO found that total collections of federal revenue (net) and total transfers to Treasury, net of refund appropriations, as reported on the statement of custodial activity, are fairly presented in all material respects in relation to the financial statements taken as a whole, the classification of itemized collections and refunds of federal taxes presented on the statement may not be reliable; (4) IRS management asserted that, except for the material weaknesses identified in IRS' FY 1996 Federal Managers' Financial Integrity Act of 1982 report, internal controls were effective in: (a) safeguarding assets; (b) assuring material compliance with laws and regulations; and (c) assuring that there were no material misstatements in amounts reported in the financial statements; (5) GAO concluded that, because of the material weaknesses it identified, the internal controls were not effective in satisfying these objectives during FY 1996; and (6) material weaknesses in internal control and recordkeeping systems also precluded the tests necessary to provide a basis for any report on compliance with pertinent laws and regulations.
RECA established a procedure to make partial restitution to individuals who contracted serious diseases, such as certain types of cancers, presumably resulting from their exposure to radiation from aboveground nuclear tests or from their employment in the uranium industry. In addition to creating eligibility criteria for compensation, RECA created a Trust Fund to pay claims. The Attorney General is responsible for reviewing applications to determine whether applicants qualify for compensation and for establishing procedures for paying claims. To discharge these two responsibilities, the Attorney General has issued implementing regulations. The regulations established RECP within DOJ’s Civil Division and charged it with administering claims adjudication and compensation under the act. To file for compensation, the claimant or eligible surviving beneficiary, either acting on his or her own behalf or represented by counsel, submits the appropriate claim forms along with corroborating documentation to RECP, whose claims examiners and legal staff review and adjudicate the claims. If the claim is approved, a letter is sent notifying the person of the approval and enclosing an “acceptance of payment” form for the claimant to return to RECP. According to program officials, upon receipt of a signed acceptance of payment form, DOJ authorizes the Treasury Department to make payment from the Trust Fund. The RECA Amendments of 2000 require that the Attorney General pay claims within 6 weeks of approval. If the victim is deceased, compensation may be awarded to the victim’s eligible survivors (e.g., the victim’s spouse or children). Appendix III shows RECP’s claims adjudication process, including the procedures for refiling and administratively appealing denied claims. If a RECP claim does not satisfy the eligibility criteria, the claimant is notified of the deficiency in writing and is allowed 60 days in which to provide documentation correcting it. If the claim remains deficient at the expiration of the 60-day period, DOJ issues a final denial decision explaining the reasons for the denial, and a copy is sent to the claimant. Claimants may refile a claim with new information with RECP up to two more times. DOJ’s decision denying the claim may be appealed administratively to a DOJ Appeals Officer, who can affirm or reverse the original decision or remand the claim to RECP for further action. Claimants who are denied may also seek judicial review in a U.S. district court, although under DOJ implementing regulations they must first exhaust their administrative remedies within DOJ. Program officials said that from the program’s inception in 1992 through September 30, 2002, only eight claims denied by RECP have been brought to district court. The RECA Amendments of 2000 broadened the scope of eligibility for benefits coverage, including increasing the geographical areas covered, allowing more individuals to qualify, and establishing a prompt payment period. Figure 1 shows the affected areas under RECA.
Some of the major changes resulting from the amendments include

permitting eligible aboveground uranium mine employees, uranium mill workers, and uranium ore transporters to qualify for compensation;
increasing the geographic areas included for eligibility and increasing the time period considered for radiation exposure for uranium mine employees;
expanding the list of specified diseases that may qualify individuals for compensation to include other types of cancers and also noncancers;
decreasing the level of radiation exposure that is necessary to qualify for compensation for uranium mine employees;
making certain medical documentation requirements less stringent;
eliminating distinctions between smokers and nonsmokers pertaining to diseases such as lung cancer and nonmalignant respiratory diseases;
construing all reasonable doubts about the eligibility of a claimant in favor of the claimant;
allowing previously denied claimants to file up to three more times; and
requiring the Attorney General to ensure that a claim is paid within 6 weeks of approval.

On November 2, 2002, the 21st Century Department of Justice Appropriations Authorization Act was enacted. This law included several provisions that further amended RECA. The amendments affect eligibility criteria and revise claims adjudication procedures. These provisions were enacted near the end of our review, and we did not assess their potential impact on the program. Some of the major changes include

re-insertion of a Downwinder area that was inadvertently eliminated when RECA was amended in July 2000;
requiring that lung cancer, like other compensable cancers, be “primary” (i.e., originate in the specified organ or tissue);
allowing uranium miners to qualify by meeting either the 40 Working Level Months (WLM) exposure standard or the 1-year duration of employment standard (see the sketch following this section); and
striking the requirement that, in cases where the claimant is living, a claimant with lung cancer must submit the medical documentation required for proof of a “non-malignant respiratory disease.”

Appendix II provides a more comprehensive summary of the key provisions of RECA by claimant category. In addition to RECP, other programs are authorized to provide compensation to persons who have presumably become ill as a result of working for the federal government in producing or testing nuclear weapons. For example, the Radiation-Exposed Veterans Compensation Act of 1988 provides, in general, monthly compensation for specific diseases to veterans who were present at certain atomic bomb exercises, served at Hiroshima and Nagasaki during specific periods of the post-World War II occupation of Japan, or were prisoners of war in Japan. In addition, Title XXXVI of the Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001 establishes the “Energy Employees Occupational Illness Compensation Program” to compensate covered employees or their survivors who contracted certain illnesses resulting from exposure to certain ultra-hazardous materials during employment in Department of Energy facilities that processed or produced radioactive materials used in the production of atomic weapons. Certain uranium employees who are eligible for compensation under RECA may also be eligible for additional compensation and medical benefits under title XXXVI.
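As a rough illustration of the revised miner exposure test noted above, the following sketch (in Python) expresses the either/or structure of the two standards. It is illustrative only: RECA and DOJ’s implementing regulations define the actual criteria, and the disease, documentation, and time-period requirements are omitted.

    # Sketch of the revised uranium miner exposure test; illustrative only.
    # RECA and DOJ's regulations control; disease, documentation, and
    # time-period requirements are omitted here.

    WLM_THRESHOLD = 40          # Working Level Months of radiation exposure
    EMPLOYMENT_THRESHOLD = 1.0  # years of covered uranium mine employment

    def miner_meets_exposure_test(wlm: float, years_employed: float) -> bool:
        # Under the 2002 amendments, a miner qualifies by meeting either
        # standard, not necessarily both.
        return wlm >= WLM_THRESHOLD or years_employed >= EMPLOYMENT_THRESHOLD

    # Example: 25 WLM alone would not qualify, but 3 years of covered
    # employment does under the amended either/or test.
    print(miner_meets_exposure_test(wlm=25, years_employed=3))  # True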
Specifically, uranium miners, uranium mill workers, and uranium ore transporters approved under Section 5 of RECA are eligible to receive, under title XXXVI, an additional $50,000 lump-sum payment plus medical benefits. The enactment of the RECA Amendments of 2000 was followed by a significant increase in the number of claims. Although RECP received and processed record numbers of claims in fiscal years 2001 and 2002, claims are taking longer to process. In addition, the percentage of claims that are adjudicated within 12 months has dropped, and the number of pending claims has grown sharply. From its inception in April 1992 through the end of fiscal year 2002, RECP received 14,987 claims for compensation. The total number of RECA claims filed increased 92 percent, from 7,819 at the end of fiscal year 2000 to 14,987 by the end of fiscal year 2002. In fiscal year 2001, the year following the enactment of the RECA 2000 Amendments, RECP received over 3,800 claims—more claims than were filed in the prior 6 fiscal years combined. Over 3,300 claims were filed in fiscal year 2002. At the end of fiscal year 2002, 2,654 claims were pending adjudication. In fiscal year 2003, about 3,200 new filings are anticipated. Figure 2 shows the number of claims filed each fiscal year. When RECP reviews a claim, the review process ends in one of two possible outcomes: approval or denial of the claim. If approved, the claim is forwarded to Treasury for payment. If denied, applicants may refile their claims or pursue other avenues of appeal. Of the total 14,987 claims filed, RECP reached a disposition on 12,333. The remaining 2,654, or about 18 percent of claims, were pending as of September 30, 2002. Of the claims that were adjudicated, 7,915, or about 64 percent, were approved and 4,418, or about 36 percent, were denied (these percentages follow directly from the reported counts, as the sketch at the end of this section illustrates). Excluding pending claims, RECP approved about 56 percent of the uranium mine employee claims, about 75 percent of the Downwinder claims, about 34 percent of the onsite participant claims, about 82 percent of the uranium mill claims, and about 81 percent of the ore transporter claims. Table 1 shows the number of claims approved, denied, and pending as of September 30, 2002. Through the end of fiscal year 2002, RECP approved about $530.5 million in compensation to claimants. RECP approved $230.5 million to eligible individuals based on uranium mine employee applications (or about 43 percent of the total); $247.2 million based on Downwinder applications (or about 47 percent of the total); $33.4 million based on onsite participant applications (or about 6 percent of the total); $15.6 million based on uranium miller applications (or about 3 percent of the total); and $3.8 million based on ore transporter applications (or about 1 percent of the total). The RECA legislation requires that applications be processed within 1 year. However, the law permits applicants additional time to submit more documentation to support their claims. About 89 percent of the RECA applications were processed within 12 months over the period of fiscal years 1992 through 2000. By the end of fiscal year 2002, the percentage of claims processed within 12 months had fallen to 79 percent. Table 2 shows the processing times in months for applicants over the course of RECP. We could not readily determine to what extent the granting of additional time accounted for the 2,559 applications that were not processed within 1 year.
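The disposition figures above can be checked with a few lines of arithmetic. The following sketch (in Python) uses only the counts reported as of September 30, 2002:

    # Claim counts as of September 30, 2002, taken from this report.
    total_filed = 14_987
    approved = 7_915
    denied = 4_418

    adjudicated = approved + denied        # 12,333 claims with a disposition
    pending = total_filed - adjudicated    # 2,654 claims still pending

    print(f"Pending share of all claims: {pending / total_filed:.0%}")   # ~18%
    print(f"Approval rate (adjudicated): {approved / adjudicated:.0%}")  # ~64%
    print(f"Denial rate (adjudicated):   {denied / adjudicated:.0%}")    # ~36%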
As shown in table 3, the average number of days to process a claim has increased in each category since our previous review. According to data provided by DOJ officials, for fiscal years 1992 through 2002, the overall average processing time from the date an application is filed until its disposition was 327 days for uranium mine employee claims, up from 269 days when we last reported. The average processing time for Downwinder claims was 244 days, up from 190 days when we last reported, and the average processing time for onsite participant claims was 263 days, up from 245 days when we last reported. Uranium mill employee claims and ore transporter claims are new categories since we last reported; however, each of these claimant categories took, on average, more than a year to process: 459 days and 392 days, respectively. Table 3 shows the average number of days to process a claim for fiscal years 1992 through 2002 and the increase in processing time by claimant category since we last reported. RECP officials attributed the increase in average processing time to the differing characteristics associated with each claim and the different factors involved in reviewing and applying the RECA legislation, as amended, for the five claims categories. RECP officials told us that since the inception of the program, its policy has been to assist claimants in any way that it can. For example, rather than denying a claim for lack of documentation, program officials said that they allow claimants additional time to provide corroborating documentation. In many cases, claimants in the uranium industry were employed as millers, miners, and ore transporters over the course of their careers. RECP officials said that if a claimant filed a uranium miner claim but could not provide sufficient documentation to satisfy RECA’s uranium miner requirements, RECP would work with the claimant to obtain additional documentation in order to satisfy the uranium miller or ore transporter requirements where appropriate. RECP officials cited other reasons for delays in processing claims, including RECP’s need, in certain cases, to gather medical records to address RECA’s statutory requirements for certain compensable diseases. RECP said that in these instances, staff would conduct additional research on behalf of the claimant or allow the claimant more time to provide the proof necessary to meet the eligibility criteria. In addition to the increase in the volume of claims, program officials said that adjudicating the newly added claimant categories (uranium millers and ore transporters) presented challenges in determining what types of employment records existed and which records should be required, which added processing time in some instances. Similarly, RECP had to determine what medical evidence would be sufficient to establish proof of the new compensable diseases and illnesses added to RECA. Since the amendments of 2000, RECA claims have been coming in more rapidly, and the processing of these claims is taking longer. As a result, the number of pending claims has grown sharply, from 653 at the end of fiscal year 2000 to 2,654 by the end of fiscal year 2002, about a 300-percent increase. In fiscal year 2003, RECP program officials estimate that 3,185 new claims will be filed. It is likely that the number of pending claims will grow further.
According to DOJ budget justification documents for fiscal year 2003, because the 2000 amendments eased eligibility requirements, many of the claims submitted in 2002 were refilings by previously denied claimants. According to program officials, the resolution of refiled claims is more straightforward, so these claims were processed first to speed payments to deserving claimants. However, program officials anticipate that the pace of claims processing will be slower in fiscal year 2003 than in fiscal year 2002, because the adjudication of the remaining claims in process will be more time-consuming and difficult. RECA program funding is provided from two sources. The RECA Trust Fund receives appropriated funds from which compensation is paid to eligible claimants. Funding for DOJ to administer the program is provided in a separate appropriation account for radiation exposure compensation administrative expenses. Table 4 shows the RECA Trust Fund activity from fiscal years 1992 through 2002, including the amounts appropriated each year and the balance at the end of each fiscal year. Money remaining in the Trust Fund at the end of any given fiscal year is generally carried forward to the next fiscal year. The RECA Trust Fund received over $200 million in the first 2 years of the program. Between fiscal years 1994 and 1996, the program was funded entirely by funds carried over from prior year appropriations. Beginning in fiscal year 1997, Congress resumed making annual appropriations to the RECA Trust Fund, with the exception of fiscal year 1999, when no funds were appropriated. For fiscal year 2000, $11.6 million was available in the Trust Fund: $8.4 million carried forward from the prior year and a fiscal year 2000 appropriation of $3.2 million. For fiscal year 2001, $10.8 million was appropriated and $431,000 was carried over from fiscal year 2000. Later in fiscal year 2001, the RECA program received a supplemental appropriation for "such sums as may be necessary" to pay claims only through the end of that fiscal year, which resulted in payments of $107.9 million for fiscal year 2001. The National Defense Authorization Act for fiscal year 2002 provided funding for the RECA Trust Fund to cover a 10-year period (fiscal years 2002 through 2011), up to a specified maximum amount per fiscal year. Whereas in past years Congress appropriated money each fiscal year, this act provided specified amounts for fiscal years 2002 through 2011, obviating the need for new congressional action in each of those fiscal years unless the Congress determined that additional funding was necessary. Table 5 shows the Trust Fund appropriations established in law. According to estimates by CBO and RECA program officials, beginning in fiscal year 2003, higher funding levels will be necessary or millions of dollars in claims may be delayed. As shown in table 6, CBO estimates that there will be a shortfall of $101 million in the Trust Fund through fiscal year 2007, of which about $44 million will occur in fiscal year 2003; overall, CBO estimates a net shortage of $78 million through 2011. Table 7 shows the RECA program estimate, which is similar to, but slightly higher than, CBO's; overall, RECP estimates a shortage of $107 million through 2011. Both organizations agree that most of the funding shortfall will occur over the next 3 years.
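Taken together, CBO's two figures imply modest projected surpluses late in the period, and the gap between the two organizations' estimates is small; a brief check (report figures, illustrative names):

    # Reconciling the CBO and RECP shortfall estimates (millions of dollars).
    cbo_through_2007, cbo_net_through_2011 = 101, 78
    print(cbo_through_2007 - cbo_net_through_2011)       # 23: implied net surplus, fiscal years 2008-2011
    recp_net_through_2011 = 107
    print(recp_net_through_2011 - cbo_net_through_2011)  # 29: amount by which RECP's estimate exceeds CBO's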
Figure 3 shows the gap between the amount of funding currently appropriated to the Trust Fund and CBO's estimate through fiscal year 2011. RECP officials' estimates through fiscal year 2011 are similar to, but slightly higher than, CBO's. According to program officials, recent trends indicate that projected claims will total about $762 million for fiscal years 2002 through 2011. This would exceed the current total of annual Trust Fund appropriations by about $107 million and CBO's overall estimate by $29 million. DOJ's estimate agrees with CBO's in that most of the funding shortfall, about $72 million, will occur over the next 3 years. According to RECP officials, a shortfall of funding available in the Trust Fund in any given year can result in claims going unpaid until funds become available the following year. For example, RECA officials said that in fiscal year 2002, funding was exhausted 3 weeks before the close of the fiscal year; based on the projected shortfalls, funding is likely to be exhausted before the close of each of fiscal years 2003 through 2005. Table 7 shows RECP's estimate of unfunded requirements for radiation exposure compensation compared with the current Trust Fund appropriations as established in law. RECP officials told us that in addition to the significant increase in the number of claims submitted, RECP received an unprecedented number of telephone and written inquiries for forms and information, a development that has further stretched the program's operational resources. According to a budget official, this has put upward pressure on the overall costs of administering the program. In an effort to keep up with the demand, program officials began adding staff in fiscal year 2000. Table 8 shows that RECP's full-time equivalent (FTE) staff levels and spending on program administration increased in fiscal years 2001 and 2002, commensurate with the resurgence of claims. Since fiscal year 1993, funding for DOJ administration of the program has been provided in a separate appropriation account for Radiation Exposure Compensation administrative expenses. The administrative expense appropriation for the program was $1.996 million for each of fiscal years 2001 and 2002. There is an outstanding issue with respect to the program's administrative expenses for fiscal years 2001 and 2002: spending may have exceeded the appropriations for those years. The Antideficiency Act provides that an officer or employee of the U.S. government may not make or authorize an expenditure or obligation exceeding an amount available in an appropriation or fund, or enter into a contract or other obligation for payment of money before an appropriation is made. It is our understanding, on the basis of information provided to us during our review, that total administrative expenses were $2.1 million for fiscal year 2001 and $3 million for fiscal year 2002, while the appropriation for Radiation Exposure Compensation administrative expenses was $1.996 million for each of those fiscal years. Regarding fiscal year 2001, it is our understanding that, following an increase in the number of RECA claims filed, a $1 million task order was issued around July 2001 to hire contract staff during fiscal years 2001 and 2002. These expenses were paid with funds from DOJ's Legal Activities, Salaries and Expenses, General Legal Activities account. The additional staff reportedly assisted in processing claims.
According to DOJ, an investigation has been initiated to ascertain whether Antideficiency Act violations occurred with respect to the Radiation Exposure Compensation administrative expenses account. Whenever an agency discovers evidence of a possible overobligation or overexpenditure, it must investigate that evidence. If the investigation shows that the appropriation, in fact, was overobligated or overexpended, the Antideficiency Act requires reporting the overobligation or overexpenditure to the President and the Congress. OMB guidance on budget execution, including requirements contained in the Antideficiency Act, is included in OMB Circular A-11, Part 4, which requires, among other things, that agencies include in such reports the primary reason for the violation, a statement of any circumstances the agency believes to be extenuating, a statement of the adequacy of the agency's funds control system, and a statement of whether any additional action need be taken to prevent recurrence of the same type of violation. We will monitor DOJ's investigation of possible Antideficiency Act violations in fiscal years 2001 and 2002 relating to the Radiation Exposure Compensation administrative expenses account and take appropriate actions, if necessary, at its conclusion. Fiscal year 2003 appropriations contained several changes to the program's administrative expenses appropriation. First, the Consolidated Appropriations Resolution, 2003, appropriated funds for the program's administrative expenses in DOJ's Legal Activities, Salaries and Expenses, General Legal Activities account rather than in a separate appropriation. Second, the language of the administrative expenses appropriation was changed from a specified amount to a specified minimum amount. Specifically, whereas fiscal year 2002 appropriations provided for "necessary administrative expenses in accordance with the Radiation Exposure Compensation Act, $1,996,000," the fiscal year 2003 appropriation provides, in part, that "not less than $1,996,000 shall be available for necessary administrative expenses in accordance with the Radiation Exposure Compensation Act." In accompanying conference report language, the conferees said that they "expect the Civil Division to absorb any additional requirements for processing RECA claims from other resources available to the Civil Division." We provided a draft of this report to the Attorney General for review and comment. The Justice Department advised us that it had no formal comments. The Civil Division and the Justice Management Division reviewed the report for accuracy and provided technical comments, which we incorporated where appropriate. Funding available to pay claims under RECA may be inadequate to meet projected needs. Since the end of fiscal year 2000, the number of unadjudicated claims has grown about 300 percent, from 653 to 2,654, and nearly 3,200 new claims are anticipated during fiscal year 2003. Both CBO and DOJ estimate that money in the Trust Fund will be insufficient to pay all the claims that are projected to be approved over the 2003-2011 period. For fiscal years 2001 and 2002, RECP spent more for administrative expenses than was appropriated. For fiscal year 2003, the Congress provided for DOJ's Civil Division to absorb any additional administrative expense requirements above the minimum amount appropriated.
However, the availability of additional funds, if needed, for administrative expenses is contingent on the Civil Division's ability to absorb any additional costs. We recommend that the Attorney General consult with the congressional committees of jurisdiction to develop a strategy to address the gap between current funding levels and the amount of funding needed to pay claims projected to be approved over the 2003-2011 period. Copies of this report are being sent to the Attorney General; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact William Crocker or me at (202) 512-8777 or [email protected]. R. Rochelle Burns, Geoffrey R. Hamilton, and Leo M. Barbour made key contributions to this report. To determine the outcomes of the claims adjudication process, including the number of approved and denied claims, the timeliness of the claims adjudication process, and the amount of money awarded, we interviewed Radiation Exposure Compensation Program (RECP) officials and obtained RECA-related case information from the Department of Justice's (DOJ) Civil Division's case histories database for fiscal years 1992 through 2002. The Civil Division's Office of Planning, Budget and Evaluation (OPB&E) provided financial information. We discussed the basis for any major fluctuations with RECP officials. We did not independently verify the accuracy of the RECA data extracted from the database. To determine the cost of administering RECP, we obtained data from OPB&E by object class for the end of fiscal years 1992 through 2002. The costs provided include items such as personnel compensation and benefits, travel and transportation of persons, and printing and reproduction costs. To determine full-time equivalent (FTE) staffing levels, the office provided us with FTE staff levels for RECP at the end of fiscal years 1992 through 2002. To determine the nature of expenditures from the Trust Fund, we evaluated annual Trust Fund activity from fiscal years 1992 through 2002 provided by OPB&E. During our initial review of RECP in 2001, we verified that payments made were consistent with data contained in DOJ's Civil Division's case histories database; we did not revalidate the information from the database during this review. To validate the estimates of future Trust Fund requirements, we met with Congressional Budget Office (CBO) officials and examined their source data, methodology, assumptions, calculations, and results. On the basis of our examination, we found that CBO's estimates were sound and reasonable. RECA program officials said that they are confident in the data necessary to support improved estimates for the next 3 years (fiscal years 2003 through 2005); beyond that period, however, their estimates are best educated guesses made by extending the slope of the funding curve out another 5 or more years. We focused on DOJ's administration of RECA from its inception in fiscal year 1992 through the end of fiscal year 2002. We conducted our review from August 2002 through February 2003, in accordance with generally accepted government auditing standards. Appendix II: Eligibility Criteria by Claimant Category. Uranium miners: Location: Colorado, New Mexico, Arizona, Wyoming, South Dakota, Washington, Utah, Idaho, North Dakota, Oregon, and Texas. Examples of diseases covered: lung cancer and nonmalignant respiratory disease. Other: victims must have been exposed to at least 40 working level months of radiation or have been employed in a mine for 1 full year; aboveground miners are included; additional states may apply for inclusion as a covered state.
Downwinders: Time periods: a period of at least 2 years from January 21, 1951, through October 31, 1958, or the period between June 30 and July 31, 1962. Location: certain Utah, Nevada, and Arizona counties downwind from the Nevada test site. Examples of diseases covered: certain types of leukemia, lung cancer, multiple myeloma, lymphomas, and primary cancer of the thyroid, male or female breast, esophagus, stomach, pharynx, small intestine, pancreas, bile ducts, gall bladder, salivary gland, urinary bladder, brain, colon, ovary, or liver. Other: for those exposed prior to age 21 who subsequently contract any medically recognized form of acute or chronic leukemia (other than chronic lymphocytic leukemia), a period of only 1 year, from January 21, 1951, to October 31, 1958, is required. Onsite participants: Time periods: onsite participation in atmospheric nuclear tests from July 16, 1945, through December 31, 1962. Location: onsite testing areas, including the Nevada, Pacific, Trinity, and South Atlantic test sites. Examples of diseases covered: certain types of leukemia, lung cancer, lymphomas, multiple myeloma, and primary cancer (certain types) of the thyroid, male or female breast, esophagus, stomach, pharynx, small intestine, pancreas, bile ducts, gall bladder, salivary gland, urinary bladder, brain, colon, ovary, or liver. Other: the payment to the victim may be offset by payments received by the victim from the Department of Veterans Affairs based on the same radiation-related illness. Uranium millers: Time periods: any time from January 1, 1942, through December 31, 1971. Location: Colorado, New Mexico, Arizona, Wyoming, South Dakota, Washington, Utah, Idaho, North Dakota, Oregon, and Texas. Examples of diseases covered: lung cancer, nonmalignant respiratory diseases, renal cancer, and other chronic renal disease, including nephritis and kidney tubal tissue injury. Other: victims must have worked for at least 1 year during the relevant time period. Ore transporters: Time periods: any time from January 1, 1942, through December 31, 1971. Location: Colorado, New Mexico, Arizona, Wyoming, South Dakota, Washington, Utah, Idaho, North Dakota, Oregon, and Texas. Examples of diseases covered: lung cancer, nonmalignant respiratory diseases, renal cancer, and other chronic renal disease, including nephritis and kidney tubal tissue injury. Other: victims must have worked for at least 1 year during the relevant time period; also includes victims' survivors. Appendix III: RECP's Claims Adjudication Process. The RECP attorney may request additional supporting information before making a recommendation (for approval or denial) to the Assistant Director. As of July 10, 2000, based on the 2000 amendments, an applicant can file a claim for consideration up to three times. Applicants whose claims have been denied are permitted to refile their claims if (1) they provide information to correct the deficiency that was the basis for the last denial under the original RECA legislation or (2) they believe that they are now eligible as a result of the 1999 regulatory changes and/or the 2000 amendments. The Appeals Officer may (1) reverse the denial (award compensation to the claimant), (2) affirm the denial (deny compensation to the claimant), or (3) remand the case to RECP; a remand returns the case for further adjudication rather than producing a final determination, as the other two options do.
On October 15, 1990, the Radiation Exposure Compensation Act (RECA) was enacted, providing for payments to individuals who contracted certain cancers and other serious diseases presumed to be the result of their exposure to radiation released during aboveground nuclear weapons tests or as a result of their employment associated with the uranium mining industry during the Cold War era. The RECA Amendments of 2000 required that GAO report to the Congress on the Department of Justice's administration of RECA not later than 18 months after the enactment of the amendments and every 18 months thereafter. GAO originally reported on the status of the program in September 2001. The objectives of this report are to update information on claims processing, payments from the Trust Fund, and administrative expenses. Since the enactment of the RECA Amendments of 2000, which expanded eligibility for benefits, the RECA program has experienced a significant increase in the number of claims filed. Claims also are taking longer to process, and the number of pending claims has grown sharply. Since we last reported in September 2001, claims have increased from 7,819 to 14,987. Pending claims have increased about 300 percent, from 653 to 2,654. About 3,200 new claims are anticipated in fiscal year 2003. In addition, the average time to process claims has increased for each category of claimant. Given these circumstances, current funding for the RECA program to pay claims may be inadequate to meet projected needs. In fiscal year 2002, RECA was appropriated funds to cover a 10-year period (fiscal years 2002 through 2011, up to a specified amount per year) totaling $655 million. The Congressional Budget Office (CBO) and the Department of Justice (DOJ) estimate that funding levels appropriated to the Trust Fund are insufficient to meet the projected claims. As a result, claims may be delayed, particularly through 2007. Since 1993, funding for DOJ administration of the program has been provided in a separate appropriation account for Radiation Exposure Compensation administrative expenses. There has been upward pressure on the program's administrative costs in recent years. For fiscal years 2001 and 2002, the RECA program may have exceeded its budget authority for administrative expenses. According to a program budget official, the RECA program spent about $100,000 in fiscal year 2001 and about $1 million in fiscal year 2002 on administrative expenses over the $1.996 million appropriated to the Radiation Exposure Compensation administrative expenses account in each of those fiscal years.
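The overrun amounts in this summary follow directly from the expense and appropriation figures reported in the body of the report; a minimal check (figures are the report's, names illustrative):

    # Administrative spending versus the appropriation (millions of dollars).
    appropriation = 1.996
    print(round(2.1 - appropriation, 1))   # 0.1: about $100,000 over, fiscal year 2001
    print(round(3.0 - appropriation, 1))   # 1.0: about $1 million over, fiscal year 2002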
On March 19, 2003, the United States launched military operations in Iraq. As of the end of February 2005, an estimated 827,277 servicemembers had been deployed in support of OIF. Deployed servicemembers, such as those in OIF, are potentially subject to occupational and environmental hazards that can include exposure to harmful levels of environmental contaminants such as industrial toxic chemicals, chemical and biological warfare agents, and radiological and nuclear contaminants. Harmful levels include high-level exposures that result in immediate health effects. Health hazards may also include low-level exposures that could result in delayed or long-term health effects. Occupational and environmental health hazards may include contamination from the past use of a site, from battle damage, from stored stockpiles, from military use of hazardous materials, or from other sources. As a result of numerous investigations that found the data on deployment occupational and environmental exposures inadequate to identify the potential causes of unexplained illnesses among veterans who served in the 1991 Persian Gulf War, the federal government has increased efforts to identify potential occupational and environmental hazards during deployments. In 1997, a Presidential Review Directive called for a report by the National Science and Technology Council to establish an interagency plan to improve the federal response to the health needs of veterans and their families related to the adverse effects of deployment. The Council published a report that set a goal for the federal government to develop the capability to collect and assess data associated with anticipated exposure during deployments. Additionally, the report called for maintaining the capability to identify and link exposure and health data by Social Security number and unit identification code. Also in 1997, Public Law 105-85 included a provision recommending that DOD ensure the deployment of specialized units to theaters of operations to detect and monitor chemical, biological, and similar hazards. The Presidential Review Directive and the public law led to a number of DOD instructions, directives, and memoranda, which have guided the collection and reporting of deployment OEHS data. See table 1 for a list of selected DOD policies for collecting and reporting deployment OEHS data. DHSD makes recommendations to the Office of the Assistant Secretary of Defense for Health Affairs for DOD-wide policies on OEHS data collection and reporting during deployments. DHSD is assisted by the Joint Environmental Surveillance Working Group, established in 1997, which serves as a coordinating body to develop and make recommendations for DOD-wide OEHS policy. The working group includes representatives from the Army, Navy, and Air Force health surveillance centers, the Joint Staff, other DOD entities, and VA. Each service has a health surveillance center (CHPPM, the Navy Environmental Health Center, and the Air Force Institute for Operational Health) that provides training, technical guidance and assistance, analytical support, and support for preventive medicine units in theater in order to carry out deployment OEHS activities in accordance with DOD policy. In addition, these consulting centers have developed and adapted military exposure guidelines for deployment using existing national standards for human health exposure limits and technical monitoring procedures (e.g., standards of the U.S.
Environmental Protection Agency and the National Institute for Occupational Safety and Health) and have worked with other agencies to develop new guidelines when none existed. (See fig. 1.) DOD policies and military service guidelines require that the preventive medicine units of each military service be responsible for collecting and reporting deployment OEHS data. Deployment OEHS data are generally categorized into three types of reports: baseline, routine, or incident-driven. Baseline reports generally include site surveys and assessments of occupational and environmental hazards prior to deployment of servicemembers and initial environmental health site assessments once servicemembers are deployed. Routine reports record the results of regular monitoring of air, water, and soil, and of monitoring for known or possible hazards identified in the baseline assessment. Incident-driven reports document exposure or outbreak investigations. There are no DOD-wide requirements on the specific number or type of OEHS reports that must be created for each deployment location, because the reports generated for a location reflect the occupational and environmental circumstances unique to that location. CHPPM officials said that reports generally reflect deployment OEHS activities that are limited to established sites such as base camps or forward operating bases; an exception is an investigation during an incident outside these locations. Constraints to conducting OEHS outside of bases include risks to servicemembers encountered while in combat and limits on the portability of OEHS equipment. In addition, DHSD officials said that preventive medicine units might not be aware of every potential health hazard and therefore might be unable to conduct appropriate OEHS activities. According to DOD policy, various entities must submit their completed OEHS reports to CHPPM during a deployment. The deployed military services have preventive medicine units that submit OEHS reports to their command surgeons, who review all reports and ensure that they are sent to a centralized archive maintained by CHPPM. Alternatively, preventive medicine units can be authorized to submit OEHS reports directly to CHPPM for archiving. (See fig. 2.) According to DOD policy, baseline and routine reports should be submitted within 30 days of report completion. Initial incident-driven reports should be submitted within 7 days of an incident or outbreak, and interim and final reports for an incident within 7 days of report completion. In addition, the preventive medicine units are required to provide quarterly lists of all completed deployment OEHS reports to the command surgeons. The command surgeons review these lists, merge them, and send CHPPM a quarterly consolidated list of all the deployment OEHS reports it should have received. To assess the completeness of its centralized OEHS archive, CHPPM develops a quarterly summary report that identifies the number of baseline, routine, and incident-driven reports that have been submitted for all bases in a command. Additionally, this report summarizes the status of OEHS report submissions by comparing the reports CHPPM received with the quarterly consolidated lists from the command surgeons that identify each of the OEHS reports that have been completed. For OIF, CHPPM is required to provide a quarterly summary report to the commander of U.S. Central Command on the deployed military services' compliance with deployment OEHS reporting requirements.
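Conceptually, CHPPM's quarterly status summary is a reconciliation of two lists: the reports actually received in the archive and the command surgeons' consolidated lists of completed reports. The sketch below illustrates that comparison in Python; it is our illustration of the logic, not CHPPM's actual system, and all identifiers are hypothetical.

    # Illustrative reconciliation of archived OEHS reports against a command
    # surgeons' consolidated list of completed reports (hypothetical data).
    def summarize_submissions(archived_ids, consolidated_ids):
        archived, completed = set(archived_ids), set(consolidated_ids)
        return {
            "completed": len(completed),
            "received": len(archived & completed),
            "outstanding": sorted(completed - archived),  # completed but never submitted
        }

    status = summarize_submissions(
        archived_ids=["baseline-001", "routine-017"],
        consolidated_ids=["baseline-001", "routine-017", "incident-004"],
    )
    print(status["outstanding"])   # ['incident-004']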
During deployments, military commanders can use deployment OEHS reports completed and maintained by preventive medicine units to identify occupational and environmental health hazards and to help guide their risk management decision making. Commanders use an operational risk management process to estimate health risks based on both the severity of the risks to servicemembers and the likelihood of encountering specific hazards. The operational risk management process, which varies slightly across the services, includes risk assessment, including hazard identification, to describe and measure the potential hazards at a location; risk control and mitigation activities intended to reduce potential exposures; and risk communication efforts to make servicemembers aware of possible exposures, any risks to health that the exposures may pose, the countermeasures to be employed to mitigate exposure or disease, and any necessary medical measures or follow-up required during or after the deployment. Commanders balance the risk to servicemembers of encountering occupational and environmental health hazards while deployed, even following mitigation efforts, against the need to accomplish specific mission requirements. Along with health encounter and servicemember location data, archived deployment OEHS reports are needed by researchers to conduct epidemiologic studies on the long-term health issues of deployed servicemembers. These data are needed, for example, by VA, which in 2002 expanded the scope of its health research to include research on the potential long-term health effects of hazardous military deployments on servicemembers. In a letter to the Secretary of Defense in 2003, VA said it was important for DOD to collect adequate health and exposure data from deployed servicemembers to ensure VA's ability to provide veterans' health care and disability compensation. VA noted in the letter that much of the controversy over the health problems of veterans who fought in the 1991 Persian Gulf War could have been avoided had more extensive surveillance data been collected. VA asked in the letter that it be allowed access to any unclassified data collected during deployments on the possible exposure of servicemembers to environmental hazards of all kinds. The deployed military services generally have collected and reported OEHS data for OIF, as required by DOD policy. However, the deployed military services have not used all of the same OEHS data collection standards and practices, because each service has its own authority to implement broad DOD policies. To increase data collection uniformity, the Joint Environmental Surveillance Working Group has made some progress in devising cross-service standards and practices for some OEHS activities. In addition, the deployed military services have not submitted all of the OEHS reports they have completed for OIF to CHPPM's centralized archive, as required by DOD policy. However, CHPPM officials said that they could not measure the magnitude of noncompliance because they have not received all of the required quarterly consolidated lists of completed OEHS reports. To improve OEHS reporting compliance, DOD officials said they were revising an existing policy to add more specific OEHS requirements. OEHS data collection standards and practices have varied among the military services because each service has its own authority to implement broad DOD policies and the services have taken somewhat different approaches.
For example, although one water monitoring standard has been adopted by all military services, the services have different standards for both air and soil monitoring. As a result, for similar OEHS events, preventive medicine units may collect and report different types of data. Each military service's OEHS practices for implementing data collection standards also have differed, owing to the varying levels of training and expertise among the services' preventive medicine units. For example, CHPPM officials said that Air Force and Navy preventive medicine units had more specialized personnel with a narrower focus on specific OEHS activities than Army preventive medicine units, which included more generalist personnel who conducted a broader range of OEHS activities. Air Force preventive medicine units generally have included a flight surgeon, a public health officer, and bioenvironmental engineers. Navy preventive medicine units generally have included a preventive medicine physician, an industrial hygienist, a microbiologist, and an entomologist. In contrast, Army preventive medicine unit personnel generally have consisted of environmental science officers and technicians. DOD officials also said other issues could contribute to differences in data collected during OIF. DHSD officials said that variation in OEHS data collection practices could occur as a result of resource limitations during a deployment; for example, some preventive medicine units may not be fully staffed at some bases. A Navy official also said that OEHS data collection can vary as different commanders set guidelines for implementing OEHS activities in the deployment theater. To increase the uniformity of OEHS standards and practices for deployments, the military services have made some progress, particularly in the last 2 years, through their collaboration as members of the Joint Environmental Surveillance Working Group. For example, the working group has developed a uniform standard, which has been adopted by all the military services, for conducting environmental health site assessments, a type of baseline OEHS report. These assessments have been used in OIF to evaluate potential environmental exposures that could affect the health of deployed servicemembers and to determine the types of routine OEHS monitoring that should be conducted. Also, within the working group, three subgroups (laboratory, field water, and equipment) have been formed to foster the exchange of information among the military services in developing uniform joint OEHS standards and practices for deployments. For example, DHSD officials said the equipment subgroup has been working collaboratively to determine the best OEHS instruments to use for a particular type of location in a deployment. Another effort by the working group included devising a joint standard for the amount of OEHS data needed to sufficiently determine the severity of potential health hazards at a site; however, DOD officials estimated in late 2004 that it would take 2 years or more for this standard to be completed and approved. According to CHPPM officials, the deployed military services have not submitted to CHPPM for archiving all of the OEHS reports that their preventive medicine units completed during OIF. Since January 2004, CHPPM has compiled four summary reports that included data on the number of OEHS reports submitted to CHPPM's archive for OIF.
However, these summary reports have not provided information on the actual magnitude of noncompliance with report submission requirements, because CHPPM has not received all of the consolidated lists of completed OEHS reports that should be submitted quarterly. These consolidated lists were intended to provide a key inventory of all OEHS reports that had been completed during OIF. Because there are no requirements on the specific number or type of OEHS reports that must be created for each base, the quarterly consolidated lists are CHPPM's only means of assessing compliance with OEHS report submission requirements. Our analysis of the data supporting the four summary reports found that, overall, 239 of the 277 bases had at least one OEHS report submitted to CHPPM's centralized archive through December 2004: 139 bases had at least one baseline report and 211 had at least one routine report, meaning that 38 bases had neither type of report in the archive. DOD officials suggested several obstacles that may have hindered OEHS reporting compliance during OIF. For example, CHPPM officials said there are other, higher priority operational demands that commanders must address during a deployment, so OEHS report submission may be a lower priority. In addition, CHPPM officials said that some of the deployed military services' preventive medicine units might not understand the types of OEHS reports to be submitted or might view them as an additional paperwork burden. CHPPM and other DOD officials added that some preventive medicine units might have limited access to communication equipment to send reports to CHPPM for archiving. CHPPM officials also said that while they had the sole archiving responsibility, CHPPM did not have the authority to enforce OEHS reporting compliance for OIF; this authority rests with the Joint Staff and the commander in charge of the deployment. DOD has several efforts under way to improve OEHS reporting compliance. CHPPM officials said they have increased communication with deployed preventive medicine units and have facilitated coordination among each service's preventive medicine units prior to deployment. CHPPM has also conducted additional OEHS training for some preventive medicine units prior to deployment, including both refresher courses and information about potential hazards specific to the locations where the units were being deployed. In addition, DHSD officials said they were revising an existing policy (DOD Instruction 6490.3; see table 1) to add more specific OEHS requirements. However, at the time of our review, a draft of the revision had not been released, and specific details about the revisions were therefore not available. DOD has made progress using OEHS reports to address immediate health risks during OIF, but limitations remain in employing these reports to address both immediate and long-term health issues. During OIF, OEHS reports have been used as part of operational risk management activities intended to assess, mitigate, and communicate to servicemembers any potential hazards at a location. While there have been no systematic efforts by DOD or the military services to establish a system to monitor the implementation of OEHS risk management activities, DHSD officials said relatively low rates of disease and nonbattle injury in OIF were considered an indication of OEHS effectiveness.
In addition, DOD’s centralized archive of OEHS reports for OIF is limited in its ability to provide information on the potential long-term health effects related to occupational and environmental exposures for several reasons, including limited access to most OEHS reports because of security classification, incomplete data on servicemembers’ deployment locations, and the lack of a comprehensive federal research plan incorporating the use of archived OEHS reports. To identify and reduce the risk of immediate health hazards in OIF, all of the military services have used preventive medicine units’ OEHS data and reports in an operational risk management process. A DOD official said that while DOD had begun to implement risk management to address occupational and environmental hazards in other recent deployments, OIF was the first major deployment to apply this process throughout the deployed military services’ day-to-day activities, beginning at the start of the operation. The operational risk management process includes risk assessments of deployment locations, risk mitigation activities to limit potential exposures, and risk communication to servicemembers and commanders about potential hazards. Risk Assessments. Preventive medicine units from each of the services have generally used OEHS information and reports to develop risk assessments that characterized known or potential hazards when new bases were opened in OIF. CHPPM’s formal risk assessments have also been summarized or updated to include the findings of baseline and routine OEHS monitoring conducted while bases are occupied by servicemembers, CHPPM officials said. During deployments, commanders have used risk assessments to balance the identified risk of occupational and environmental health hazards, and other operational risks, with mission requirements. Alternatively, some preventive medicine units have addressed hazards identified through risk assessments without initially involving a commander. A Navy official said that, for example, if a preventive medicine unit found elevated bacteria levels when monitoring a drinking water purification system, the unit would likely order that the system be shut down and corrected and then notify the commander of the action in a summary report of OEHS activities. Generally, OEHS risk assessments for OIF have involved analysis of the results of air, water, or soil monitoring. CHPPM officials said that most risk assessments that they have received characterized locations in OIF as having a low risk of posing health hazards to servicemembers. Risk Control and Mitigation. Using risk assessment findings, preventive medicine units have recommended risk control and mitigation activities to commanders that were intended to reduce potential exposures at specific locations. For OIF, risk control and mitigation recommendations at bases have included such actions as modifying work schedules, requiring individuals to wear protective equipment, and increasing sampling to assess any changes and improve confidence in the accuracy of the risk estimate. Risk Communication. Risk assessment findings have also been used in risk communication efforts, such as providing access to information on a Web site or conducting health briefings to make servicemembers aware of occupational and environmental health risks during a deployment and the recommended efforts to control or mitigate those risks, including the need for medical follow-up. 
Many of the risk assessments for OIF we reviewed recommended that health risks be communicated to servicemembers. The experience at Port Shuaiba, Kuwait, provides an illustration of the risk management process. Officials determined that Port Shuaiba, which had a moderate risk rating in numerous OEHS risk assessments, had the highest assessed risk for potential environmental exposures identified in OIF. The site is a deepwater port used for bringing in heavy equipment in support of OIF, and a large number of servicemembers have been permanently or temporarily stationed at this site. CHPPM officials said reported concerns about air quality problems, such as sulfur dioxide emissions and windblown dust and sand particles, and the concentration of a large number of industrial facilities at Port Shuaiba led to this risk characterization as a result of multiple OEHS risk assessments conducted before and during OIF. Risk mitigation recommendations that have been implemented at Port Shuaiba include increasing air monitoring to continuous, 24-hour sampling; implementing the use of standard protective equipment, such as goggles and face kerchiefs; and using dust suppression measures, such as laying gravel over the entire location to reduce dust. CHPPM officials said they were uncertain whether some other risk mitigation recommendations for Port Shuaiba had been implemented, such as requiring servicemembers to stay inside buildings or tents as much as possible when air pollution levels are high or increasing the number of servicemembers available for operations to reduce the duration of shifts. On the basis of recommendations from the risk assessments, military officials have been attempting to transfer the activities at Port Shuaiba to a nearby port that does not have industrial facilities, but servicemembers have continued to live and work at the site, though in greatly reduced numbers, CHPPM officials said. CHPPM officials said they have recommended extensive risk communication activities at Port Shuaiba, including providing information to servicemembers in town hall meetings and through posters and handouts in dining facilities. In addition, CHPPM officials said they have worked with commanders to allow CHPPM to provide briefings about the identified and potential health hazards as soon as new military units arrive at Port Shuaiba. While risk management activities have become more widespread in OIF compared with previous deployments, DOD officials have not conducted systematic monitoring of deployed military services’ efforts to conduct OEHS risk management activities. As of March 2005, neither DOD nor the military services had established a system to examine whether required risk assessments had been conducted, or to record and track resulting recommendations for risk mitigation or risk communication activities. In the absence of a systematic monitoring process, CHPPM officials said they conducted ad hoc reviews of implementation of risk management recommendations for sites where continued, widespread OEHS monitoring has occurred, such as at Port Shuaiba and other locations with elevated risks. DHSD officials said they have initiated planning for a comprehensive quality assurance program for deployment health that would address OEHS risk management, but the program was still under development. DHSD and military service officials said that developing a monitoring system for risk management activities would face several challenges. 
In response to recommendations for risk mitigation and risk communication activities, commanders may have issued written orders and guidance that were not always stored in a centralized, permanent database that could be used to track risk management activities. Additionally, DHSD officials told us that risk management decisions have sometimes been recorded in commanders’ personal journals or diaries, rather than issued as orders that could be stored in a centralized, permanent database. In lieu of a monitoring system, DHSD officials said the rates of disease and nonbattle injury in OIF are considered by DOD as a general measure or indicator of OEHS effectiveness. As of January 2005, OIF had a 4 percent total disease and nonbattle injury rate—in other words, an average of 4 percent of servicemembers deployed in support of OIF had been seen by medical units for an injury or illness in any given week. This rate is the lowest DOD has ever documented for a major deployment, according to DHSD officials. For example, the total disease and nonbattle injury rate for the 1991 Gulf War was about 6.5 percent, and the total rate for Operation Enduring Freedom in Central Asia has been about 5 percent. However, while this indicator provides general information on servicemembers’ health status, it is not directly linked to specific OEHS activities and therefore is not a clear measure of their effectiveness. Access to archived OEHS reports by VA, medical professionals, and interested researchers has been limited by the security classification of most OEHS reports. Typically, OEHS reports are classified if the specific location where monitoring activities occur is identified. VA officials said they would like to have access to OEHS reports in order to ensure appropriate postwar health care and disability compensation for veterans, and to assist in future research studies. However, VA officials said that they did not expect access to OEHS reports to improve until OIF has ended because of security concerns. Although access to OEHS reports has been restricted, VA officials said they have tried to anticipate likely occupational and environmental health concerns for OIF based on experience from the 1991 Persian Gulf War and on CHPPM’s research on the medical and environmental health conditions that exist or might develop in the region. Using this information, VA has developed study guides for physicians on such topics as health effects from radiation and traumatic brain injury and also has written letters for OIF veterans about these issues. DOD has begun reviewing classification policies for OEHS reports, as required by the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005. A DHSD official said that DOD’s newly created Joint Medical Readiness Oversight Committee is expected to review ways to reduce or limit the classification of data, including data that are potentially useful for monitoring and assessing the health of servicemembers who have been exposed to occupational or environmental hazards during deployments. Linking OEHS reports from the archive to individual servicemembers will be difficult because DOD’s centralized tracking database for recording servicemembers’ deployment locations currently does not contain complete or comparable data. In May 1997, we reported that the ability to track the movement of individual servicemembers within the theater is important for accurately identifying exposures of servicemembers to health hazards. 
However, the centralized database maintained by the Defense Manpower Data Center (DMDC) has continued to experience problems in obtaining complete, comparable data from the services on the location of servicemembers during deployments, as required by DOD policies. DMDC officials said the military services had not reported location data for all servicemembers for OIF. As of October 2004, the Army, Air Force, and Marine Corps each had submitted location data for approximately 80 percent of their deployed servicemembers, and the Navy had submitted location data for about 60 percent of its deployed servicemembers. Additionally, the specificity of location data has varied by service. For example, the Marine Corps has provided servicemembers' locations only by country, whereas each of the other military services has provided more detailed location information for some of their servicemembers, such as base camp names or grid coordinates. Furthermore, the military services did not begin providing detailed location data until OIF had been under way for several months. DHSD officials said they have been revising an existing policy to provide additional requirements for the location data that the military services collect, such as a daily location record with grid coordinates or latitude and longitude coordinates for all servicemembers. Though the revised policy has not been published, as of May 2005 the Army and the Marine Corps had implemented a new joint location database in support of OIF that addresses these revisions. During OIF, some efforts have been made to include information about specific incidents of potential and actual exposure to occupational or environmental health hazards in the medical records of servicemembers who may be affected. According to DOD officials, after preventive medicine units have investigated incidents involving potential exposure, they generally have developed narrative summaries of the events and the results of any medical procedures for inclusion in affected servicemembers' medical records. Additionally, rosters were generally developed of servicemembers directly affected and of servicemembers who did not have any acute symptoms but were in the vicinity of the incident. For example, in investigating an incident involving a chemical agent used in an improvised explosive device, CHPPM officials said that two soldiers who were directly involved were treated at a medical clinic, and their treatment and the exposure were recorded in their medical records. Although 31 servicemembers who were providing security in the area were asymptomatic, doctors were documenting this potential exposure in their medical records. In addition, the military services have taken some steps to include summaries of potential exposures to occupational and environmental health hazards in the medical records of servicemembers deployed to specific locations. The Air Force has created summaries of these hazards at deployed air bases and has required that these be placed in the medical records of all Air Force servicemembers stationed at these bases. (See app. II for an example.) However, Air Force officials said no follow-up activities have been conducted specifically to determine whether all Air Force servicemembers have had the summaries placed in their medical records. In addition, the Army and Navy jointly created a summary of potential exposure for the medical records of servicemembers stationed at Port Shuaiba.
Since December 2004, port officials have made efforts to make the summary available to servicemembers stationed at Port Shuaiba so that these servicemembers can include the summary in their medical records. However, there has been no effort to retroactively include the summary in the medical records of servicemembers stationed at the port before that time. According to DOD and VA officials, no federal research plan that includes the use of archived OEHS reports has been developed to evaluate the long-term health of servicemembers deployed in support of OIF, including the effects of potential exposure to occupational or environmental hazards. In February 1998 we noted that the federal government lacked a proactive strategy to conduct research into Gulf War veterans' health problems and suggested that delays in planning complicated researchers' tasks by limiting opportunities to collect critical data. However, the Deployment Health Working Group, a federal interagency body responsible for coordinating research on all hazardous deployments, recently began discussions on the first steps needed to develop a research plan for OIF. At its January 2005 meeting, the working group tasked its research subcommittee to develop a complete list of research projects currently under way that may be related to OIF. VA officials noted that because OIF is ongoing, the working group would have to determine how to address a study population that changes as the number of servicemembers deployed in support of OIF changes. Although no coordinated federal research plan has been developed, some separate federal research studies are under way that may follow the health of OIF servicemembers. For example, in 2000 VA and DOD collaborated to develop the Millennium Cohort study, a 21-year longitudinal study evaluating the health of both deployed and nondeployed military personnel throughout their military careers and after leaving military service. According to the principal investigator, the Millennium Cohort study was designed to examine the health effects of specific deployments if enough servicemembers in a deployment enrolled in the study. However, the principal investigator said that as of February 2005 researchers had not identified how many servicemembers deployed in support of OIF had enrolled in the study. Additionally, a VA researcher has received funding to study mortality rates among OIF servicemembers. According to the researcher, if occupational and environmental data are available, the study will include the evaluation of mortality outcomes in relation to potential exposures of OIF servicemembers. Since the 1991 Persian Gulf War, DOD has made progress in improving occupational and environmental health data collection through its development of a militarywide health surveillance framework for use during deployments. However, these efforts still could be strengthened. OEHS data that the deployed military services have collected during OIF may not always be comparable because of variations among the services' data collection standards and practices. As a result of these variations, the data for servicemembers from one military service may be more extensive and comprehensive than the data for servicemembers from another service. Additionally, the deployed military services' uncertain compliance with OEHS report submission requirements casts doubt on the completeness of CHPPM's OEHS archive.
These data shortcomings, in conjunction with the incomplete data in DOD's centralized tracking database of servicemembers' deployment locations, limit CHPPM's ability to respond to requests for OEHS information about the possible exposure to occupational and environmental health hazards of those who are serving or have served in OIF. Other limitations may also impede the comprehensiveness of the archived OEHS reports, including the inability to collect OEHS data outside of base camps and a lack of knowledge of all potential health hazards. Nonetheless, these limitations do not outweigh the need to collect data on known or expected hazards in order to make every effort to address potential health issues. DHSD officials have said they are revising an existing policy on OEHS data collection and reporting to add more specific OEHS requirements. However, unless the military services take measures to direct those responsible for OEHS activities to proactively implement the new requirements, the services' efforts to collect and report OEHS data may not improve. DOD's risk management efforts during OIF represent a positive step in helping to mitigate potential environmental and occupational risks of deployment. But the effects of such efforts are unknown without systematic monitoring of the deployed military services' implementation activities. Rates of disease and nonbattle injury have been used as an overall surrogate outcome measure for risk management in OIF, but DOD and the military services currently are unable to ascertain how and to what extent risk management efforts have contributed to the relatively low disease and nonbattle injury rate for OIF. Although OEHS reports alone are not sufficient to identify the causes of potential long-term health effects in deployed servicemembers, they are an integral component of research to evaluate the long-term health of deployed servicemembers. However, efforts by a joint DOD and VA working group to develop a federal research plan for OIF that would include examining the effects of potential exposure to occupational and environmental health hazards have only just begun, despite the similarities in deployment location to the 1991 Persian Gulf War. Unless DOD addresses OEHS data collection and reporting weaknesses and develops a federal research plan for OIF with VA, the departments ultimately may face the same criticisms they faced following the first Gulf War over their inability to adequately address the long-term health issues of servicemembers. We are making recommendations aimed at improving the collection and reporting of OEHS data during deployments and improving OEHS risk management. To improve the collection and reporting of OEHS data during deployments and the linking of OEHS reports to servicemembers, we recommend that the Secretary of Defense ensure that cross-service guidance is created to implement DOD's policy, once that policy has been revised, addressing improvements in conducting OEHS activities and in reporting the locations of servicemembers during deployment. To improve the use of OEHS reports to address the immediate health risks of servicemembers during deployments, we recommend that the Secretary of Defense ensure that the military services jointly establish and implement procedures to evaluate the effectiveness of risk management efforts.
To better anticipate and understand the potential long-term health effects of deployment in support of OIF, we recommend that the Secretary of Defense and the Secretary of Veterans Affairs work together to develop a federal research plan to follow the health of these servicemembers that would include the use of archived OEHS reports.

We requested comments on a draft of this report from DOD and VA. Both agencies provided written comments that are reprinted in appendixes III and IV. DOD also provided technical comments that we incorporated where appropriate.

In commenting on this draft, DOD did not concur with our recommendation that the military services jointly develop implementation guidance for DOD's policy on OEHS during deployments, once that policy has been revised. However, DOD stated that officials are planning steps that will meet the intent of our recommendation to improve the collection and reporting of OEHS data during deployments. DHSD officials stated that cross-service implementation guidance for the revised policy on deployment OEHS would be developed by the Joint Staff instead of by the individual military services, as we originally recommended. We believe that the development of cross-service implementation guidance is a critical element needed to improve OEHS data collection and reporting during deployments, regardless of the entity responsible for developing this guidance. Therefore, we modified the wording of our recommendation to clarify our intent that joint guidance be developed.

DOD partially concurred with our recommendation that the military services jointly establish and implement procedures to evaluate the effectiveness of risk management efforts. DOD stated that OEHS reports would be of no value for "immediate" health risks, except for incident-driven reports, and assumed that we were referring to health risks that may occur once servicemembers return from a deployment. However, our findings describe the OEHS operational risk management process that is specifically conducted during a deployment, including risk assessment, risk mitigation, and risk communication activities that are used to identify and reduce the risk of immediate health hazards. Additionally, DOD stated that it has procedures in place to evaluate OEHS risk management through a jointly established and implemented lessons learned process. Because the lessons learned process was not raised by agency officials during our review, we did not determine whether it would systematically monitor or evaluate the effectiveness of OEHS risk management activities. However, in further discussions, DHSD officials told us that they were not aware of any lessons learned reports related to OEHS risk management for OIF.

DOD also partially concurred with our recommendation that DOD and VA work together to develop a federal research plan to follow the health of servicemembers deployed in support of OIF that would include the use of archived OEHS reports. Although DOD stated that it agrees with the importance of following the health of its servicemembers, its response focused on initiatives for the electronic exchange of clinical health information with VA. In further discussions, DHSD officials explained that analysis of this clinical information could lead to the development of research hypotheses and, ultimately, research questions that would guide federal health research.
Although DOD officials stated that they have not yet linked any occupational or environmental exposures to specific adverse health effects, there is no certainty that long-term health effects related to these types of exposures will not appear in veterans of OIF. Federal research has not clearly identified the causes of unexplained illnesses reported by servicemembers who served in the 1991 Persian Gulf War, and OIF servicemembers are serving in the same region for longer periods of time.

Separately, VA concurred with our recommendation to work jointly with DOD to develop a federal research plan to follow the health of OIF servicemembers. VA confirmed that the Deployment Health Working Group, which includes DOD officials, had initiated steps in January 2005 toward developing a comprehensive joint federal surveillance plan to evaluate the long-term health of servicemembers returning from both OIF and Operation Enduring Freedom (OEF). More important, however, the difference between VA's and DOD's responses to this recommendation illustrates a disconnect in the agencies' understanding of whether and how such a federal research plan should be established. Continued collaboration between the agencies on a mutually agreeable process for proactively creating a federal research plan would therefore better position both agencies to anticipate and understand the potential long-term health effects related to OIF deployment, rather than leaving them to react to whatever health problems may surface.

In its response, VA also contends that we overstate problems related to its ability to access DOD's classified occupational and environmental health data. VA notes that it has staff with the necessary security clearances to examine classified OEHS reports and maintains that there is therefore no barrier to access. However, during our review VA officials expressed concerns that they did not have OEHS data and that access to the data was difficult. Even if VA staff have security clearances that enable them to examine OEHS data, any materials that arise from the use of classified documents, such as research papers or other publications, would likely be restricted. These results would therefore have limited use, as they could not be broadly shared with other researchers and the general public. Nonetheless, VA maintains that development of a systematic method to tabulate and organize the exposure data is needed, as is a complete roster of OIF and OEF veterans, pre- and post-deployment health screening data, and a complete roster of the most seriously injured veterans. We agree that a systematic method to organize and share OEHS data is important. This issue could be addressed within the efforts to develop a federal research plan.

As arranged with your office, unless you release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of Defense and the Secretary of Veterans Affairs. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7119. Bonnie Anderson, Karen Doran, John Oh, Danielle Organek, and Roseanne Price also made key contributions to this report.
To describe how the military services have implemented the Department of Defense's (DOD) policies for collecting and reporting occupational and environmental health surveillance (OEHS) data for Operation Iraqi Freedom (OIF), we reviewed pertinent DOD policies and military services' guidance that delineated the requirements for OEHS data collection and reporting. We interviewed officials at the Deployment Health Support Directorate (DHSD) and the Joint Staff to obtain a broad overview of DOD's OEHS activities in OIF. We also interviewed officials at each of the military services' health centers—the U.S. Army Center for Health Promotion and Preventive Medicine (CHPPM), the Navy Environmental Health Center, and the Air Force Institute for Operational Health—to obtain information about each service's OEHS data collection standards and practices, training of preventive medicine units for OIF, obstacles that could hinder OEHS data collection and reporting, and efforts to improve reporting compliance. Additionally, we interviewed members of the Joint Environmental Surveillance Working Group to discuss the purpose and structure of the working group and efforts related to increasing the uniformity of OEHS standards and practices for deployments.

To determine whether the military services were submitting OEHS reports to CHPPM's centralized archive, we obtained and reviewed CHPPM's quarterly summary reports. These reports provided the total number of bases that had submitted at least one report in each of the categories of baseline, routine, or incident-driven reports for the U.S. Central Command's (CENTCOM) area of responsibility, details about consolidated lists of reports, and information about other OEHS reporting compliance issues. The summary reports did not show report submission by individual bases or, other than for the first summary report, separately identify OIF bases from all others in the CENTCOM area of responsibility. For each of the summary reports, CHPPM provided us with supporting documents that included lists of the bases specific to OIF and, for each base, whether it had submitted baseline, routine, or incident-driven reports. We attempted to include only unique OIF bases in our analysis; however, CHPPM officials told us that a few duplicate OIF bases may be included in our analysis because of frequent base openings, closures, and name changes. We used these supporting documents to identify the number and percentage of bases with and without baseline or routine reports during the reporting periods. Incident-driven reports reflect OEHS investigations of unexpected incidents and would not be submitted to CHPPM's archive according to any identified pattern; therefore, we did not review the services' submission of incident-driven reports. Because OEHS reports generally are classified, we did not report on the specifics contained in these reports.

We determined that the data from CHPPM's OEHS archive were sufficiently reliable for the purposes of this report by (1) confirming that the data included the elements that we requested and were consistent with provided documentation and (2) conducting detailed fact-finding interviews with CHPPM officials to understand how the data were obtained and to determine the limitations of the data. To characterize the OEHS reports for OIF submitted to CHPPM, we discussed the numbers of reports submitted and characterized the categories of reports using percentages.
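The base-level counts described above are straightforward to compute once the supporting documents are transcribed into a structured form. The sketch below is illustrative only, not GAO's actual procedure; the CSV layout and column names (base_name, baseline, routine) are hypothetical, since CHPPM's supporting documents were lists of bases rather than a database.

```python
import csv
from collections import Counter

def tally_bases(csv_path):
    """Count unique OIF bases with and without baseline or routine
    OEHS reports, mirroring the percentage analysis described above.
    Assumes one row per base with yes/no flags (hypothetical layout)."""
    seen = set()
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            base = row["base_name"].strip().upper()
            if base in seen:  # guard against duplicate base entries
                continue
            seen.add(base)
            counts["with baseline" if row["baseline"] == "yes" else "without baseline"] += 1
            counts["with routine" if row["routine"] == "yes" else "without routine"] += 1
    total = len(seen)
    for category, n in sorted(counts.items()):
        print(f"{category}: {n} of {total} bases ({100 * n / total:.1f}%)")
```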
While the OEHS reports were contained in a computerized archive, there was no formal database from which the information in the reports could be extracted into data fields. Instead, the archived reports were Microsoft Word documents, Microsoft Excel spreadsheets, Adobe Acrobat files, scanned images, or e-mail text, organized by either military base or type of report. Therefore, there was no specific database with data fields that could be examined through a data reliability test.

To identify the efforts to use OEHS reports to address the more immediate health issues of servicemembers deployed in support of OIF, we reviewed DOD policies and documents describing the operational risk management process. Additionally, we reviewed 28 risk assessment reports and the risk mitigation efforts and risk communication activities that resulted from these assessments. We also reviewed and summarized risk management activities for Port Shuaiba, Kuwait. We interviewed officials from CHPPM responsible for OEHS risk management activities at Port Shuaiba and discussed quality assurance efforts related to these activities. We also interviewed officials from DHSD about additional OEHS-related quality assurance programs.

To identify the efforts under way to use OEHS reports to address the long-term health issues of servicemembers deployed in support of OIF, we interviewed Department of Veterans Affairs (VA) and DOD officials to examine VA's access to and use of OEHS reports, and we reviewed laws relating to classification of documents. Additionally, we reviewed relevant VA documents to determine the ways in which VA can use OEHS reports and to determine its efforts to anticipate OEHS issues.

To determine the difficulties in linking OEHS reports to the individual records of servicemembers, we interviewed officials and military representatives at DOD's Defense Manpower Data Center (DMDC) regarding the status of the Contingency Tracking System, a centralized tracking database used to identify deployed servicemembers and track their movements within the theater of operations. To help identify problems with this system, we asked DMDC to provide information about the amount of location data submitted by each military service to this database. To assess the reliability of the data submitted by each military service, we (1) interviewed DMDC officials about limitations of the system and (2) confirmed that the data included the elements we requested and were consistent with provided documentation. We tested the data electronically to ensure that the numbers were accurately calculated. Given our research questions and discussions with DMDC officials regarding the centralized system, we determined that these data were reliable for our purposes.

We interviewed CHPPM officials to examine efforts to include information from investigations of potential exposures to occupational and environmental health hazards in servicemembers' medical records, and we reviewed summary documents related to potential occupational and environmental exposures. We also interviewed Army, Air Force, and Navy officials to discuss these summary documents and to determine the efforts in place to ensure that these documents were placed in the medical records. We also examined other documents, including DOD policies, federal laws, and interagency coordinating council meeting minutes relating to OEHS.
We interviewed DOD and VA officials to determine whether a federal research plan using OEHS reports had been developed to evaluate the long-term health of servicemembers deployed in support of OIF. We also reviewed documents, including the meeting minutes of an interagency group and documents relating to a current collaborative study between DOD and VA. We performed our work from September 2004 through June 2005 in accordance with generally accepted government auditing standards.

ENVIRONMENTAL/OCCUPATIONAL HEALTH WORKPLACE EXPOSURE DATA (recorded on Standard Form 600, Chronological Record of Medical Care)

This assessment covers individuals deployed to Baghdad Air Base (BDAB), Iraq, for the time period 15 DEC 03 to 30 APR 2004.

Purpose: To comply with the deployment health surveillance requirements of Presidential Review Directive 5 and JCSM 0006-02, Updated Procedures for Deployment Health Surveillance and Readiness. CENTAF/SG officially sanctions use of this form and recommends it be maintained in the individual's permanent medical record with the DD Form 2796, Post Deployment Health Assessment, covering the same time period.

Camps Sather and Griffin, the primary AF locations on Baghdad International Airport (BIAP), were part of the Iraqi Military Training portion of BIAP. However, this specific area was not heavily used. The small Iraqi terminal on site was for military guests and distinguished visitors. Base housing and training was on the other side of the main road outside Camp Sather. While there is farming around BIAP, we are not aware of any specific farming activities within Camp Sather; however, there is evidence of flooded fields in/around Camp Griffin. We are also not aware of any major spills within the BIAP AF cantonment. BDAB refers to both Camps Sather and Griffin.

(continued) . . . medical record, individual reported no adverse contact (i.e., bites). Feral cats and dogs have also been noted in the area. Rats and mice have been a nuisance; one rat bite was reported in the summer of 2003.

6. Waste Sites/Waste Disposal: Hazardous waste storage on BDAB is limited to used and off-spec POL products and small spill cleanup residue. Currently, proper handling, storage, and disposal of industrial waste generated on base (mainly oil, fuel, and hydraulic fluid) are strictly enforced. Airborne exposure to base personnel from stored waste is assessed as minimal to nonexistent. No obvious signs of significant past spills or tank leakage were noted when coalition forces occupied BIAP, although POL personnel did drain and remove several extant tanks. Trash and garbage are containerized and routinely collected by contractors. Latrines are pumped out by trucks and waste is disposed of off-BIAP.

7. Nuclear, Biological or Chemical (NBC) Weapon Exposure: There has been no evidence of any use, storage, release, or exposure of NBC agents to personnel at this site.

8. Agricultural Emissions: Surrounding land is moderately agricultural. Many farms are within 1-2 miles of the perimeter fence, with numerous potentially flooded fields for rice cultivation. Aerial photos prior to May 2003 revealed that much of BIAP, including parts of the AF cantonment, were rice cultivation areas. While we haven't witnessed any significant application, herbicide/pesticide use probably routinely occurs just outside the base. However, airborne exposure to base personnel is assessed as minimal to nonexistent.

9. Depleted Uranium (DU): DU is a component of some aircraft present and/or transient on/through BDAB. There is no evidence of DU munitions having been expended at BIAP. Therefore, there is no potential airborne exposure to DU. Exposure is classified as far below permissible exposure levels.

10. Hazardous Materials: There are only a few permanent structures on BDAB. Both lead-based paint and potential asbestos-containing material have been tentatively identified in various locations on BIAP; however, personnel are not performing activities that involve routine exposure, thereby minimizing health risk. There were multiple sites where Iraqi hazardous materials caches were located; however, personnel exposures were minimized/eliminated by removing or limiting access to the materials.

Occupational Exposure Data and Risk Assessment:

1. Noise: Aircraft, aircraft ground equipment, generators, and other equipment produce hazardous noise. Workers routinely exposed to hazardous noise are those working on or near the flight line and/or in selected industrial shops. These workers have comparable noise exposure at home station and are on the hearing conservation program. For all individuals, appropriate hearing protection is provided for protection against hazardous noise. Additionally, the whole of Camp Sather is within 300 yards of an extremely active flightline.

2. Heat Stress: Daily temperature range: Mar-Oct, from 75°F to 125°F; Nov-Feb, from 55°F to 95°F. Personnel are continually educated on heat stress dangers, water intake, and work/rest cycles. Unless separately documented, individual had no heat-related injury.

3. Airborne Exposure to Chemical Hazards: Unless specified in a duty-specific supplement, individual exposure to chemical inhalation is considered similar to duties performed at home station. On-base industrial activities include routine aircraft, equipment, and installation maintenance. Generally, the majority of the chemicals used on BDAB are oils, greases, lubricants, hydraulic fluids, and fuel. Little to no corrosion control activity is performed and no solvent tanks exist on site. No industrial activity is performed that generates, or has been expected to generate, airborne exposures above permissible exposure levels or medical action levels.

4. Chemical Contact and Eye Protection: Unless specified in a job-specific supplement, individual exposure to chemical contact is considered similar to duties performed at home station. Workers are provided appropriate protective equipment (i.e., nitrile/rubber gloves, goggles, safety glasses, and face shields) when and where needed.

5. Radiation: Ionizing radiation is emitted from medical/dental x-ray and OSI operations, and from low-level radioactive materials present in equipment such as chemical agent monitors and alarms. No worker has been identified as exceeding 10% of the 5 REM/year OSHA permissible exposure level. Radio frequency (RF) radiation is emitted from multiple radar systems and communication equipment. Systems are marked with warning signs and communication workers receive appropriate training. Unless otherwise documented, no worker has been identified as exceeding RF-radiation permissible exposure limits. Significant UV radiation from the sun is expected on exposed unprotected skin. BDAB personnel have been advised to minimize sun exposure through the use of sunscreen and wearing of sleeves down. Additionally, BDAB is a high-light-level environment. Many cases of photosensitivity dermatitis were observed; some were no doubt exacerbated by the use of doxycycline for malaria prophylaxis. Unless otherwise stated in medical record, individual reported no radiation/light-related injuries.

6. Ergonomics: Individual exposure to ergonomic stress from job-related duty is substantially similar to duties performed at home station, with a potential moderate increase in lifting involved with unique deployment requirements such as erection of tents and shelters. Unless otherwise stated in medical record, individual reported no ergonomic stress-related injuries.

7. Bloodborne Pathogens: Individual exposure to bloodborne pathogens from job-related duty is considered similar to duties performed at home station. Applicable workers are provided appropriate protective equipment and have been placed on the bloodborne pathogen program. Unless otherwise stated elsewhere in the medical record, individual reported no significant unprotected exposures.
Following the 1991 Persian Gulf War, research and investigations into the causes of servicemembers' unexplained illnesses were hampered by inadequate occupational and environmental exposure data. In 1997, the Department of Defense (DOD) developed a militarywide health surveillance framework that includes occupational and environmental health surveillance (OEHS)--the regular collection and reporting of occupational and environmental health hazard data by the military services. GAO is reporting on (1) how the deployed military services have implemented DOD's policies for collecting and reporting OEHS data for Operation Iraqi Freedom (OIF) and (2) the efforts under way to use OEHS reports to address both immediate and long-term health issues of servicemembers deployed in support of OIF.

Although OEHS data generally have been collected and reported for OIF, as required by DOD policy, the deployed military services have used different data collection methods and have not submitted all of the OEHS reports that have been completed. Data collection methods for air and soil surveillance have varied across the services, for example, although the services have been using the same monitoring standard for water surveillance. Variations in data collection have been compounded by different levels of training and expertise among service personnel responsible for OEHS. For some OEHS activities, a cross-service working group has been developing standards and practices to increase uniformity of data collection among the services. In addition, while the deployed military services have been conducting OEHS activities, they have not submitted all of the OEHS reports completed during OIF, which DOD officials attribute to various obstacles, such as limited access to communication equipment to transmit reports for archiving. Moreover, DOD officials did not have the required consolidated lists of all OEHS reports completed during each quarter in OIF and therefore could not identify the reports they had not received to determine the extent of noncompliance. To improve OEHS reporting compliance, DOD officials said they were revising an existing policy to add new, more specific OEHS requirements.

DOD has made progress in using OEHS reports to address immediate health risks during OIF, but limitations remain in employing these reports to address both immediate and long-term health issues. OIF was the first major deployment in which OEHS reports have been used consistently as part of operational risk management activities intended to identify and address immediate health risks and to make servicemembers aware of the health risks of potential exposures. While these efforts may help reduce health risks, DOD has no systematic efforts to evaluate their implementation in OIF. In addition, DOD's centralized archive of OEHS reports for OIF has several limitations for addressing potential long-term health effects related to occupational and environmental exposures. First, access to the centralized archive has been limited due to the security classification of most OEHS reports. Second, it will be difficult to link most OEHS reports to individual servicemembers' records because not all data on servicemembers' deployment locations have been submitted to DOD's centralized tracking database. For example, none of the military services submitted location data for the first several months of OIF.
To address problems with linking OEHS reports to individual servicemembers, the deployed military services have made efforts to include OEHS monitoring summaries in the medical records of some servicemembers, either for specific incidents of potential exposure or for specific locations within OIF. Third, according to DOD and VA officials, no federal research plan has been developed to evaluate the long-term health of servicemembers deployed in support of OIF, including the effects of potential exposures to occupational or environmental hazards.
In 1990, the Congress enacted the Global Change Research Act. This act, among other things, required the administration to (1) prepare and at least every 3 years revise and submit to the Congress a national global change research plan, including an estimate of federal funding for global change research activities to be conducted under the plan; (2) in each annual budget submission to the Congress, identify the items in each agency's budget that are elements of the United States Global Change Research Program (USGCRP), an interagency long-term climate change science research program; and (3) report annually on climate change "expenditures required" for the USGCRP. In response to the requirements of the 1990 act, the administration reported annually from 1990 through 2004 on funding for climate change science. From 1990 through 2001, the reports presented detailed science funding data for the USGCRP.

Federal climate change science programs were reorganized in 2001 and 2002. In 2001, the Climate Change Research Initiative (CCRI) was created to coordinate short-term climate change research focused on reducing scientific uncertainty, and in 2002, CCSP was created to coordinate and integrate USGCRP and CCRI activities. CCSP is a collaborative interagency program designed to improve the governmentwide management of climate science and research.

With respect to federal research, OMB, in annual reports and testimony before the Congress, reported climate change funding for 1993 through 2004 using four categories:

Technology, which includes the research, development, and deployment of technologies and processes to reduce greenhouse gas emissions or increase energy efficiency. Funding for this category focuses on programs for energy conservation, renewable energy, and related efforts.

Science, which includes research and monitoring to better understand climate change, such as measuring changes in forest cover and land use.

International assistance, which helps developing countries address climate change by, for example, providing funds for energy efficiency programs.

Tax expenditures related to climate change, which are federal income tax provisions that grant preferential tax treatment to encourage emission reductions by, for example, providing tax incentives to promote the use of renewable energy.

Over the same time period, the administration also has reported annually on funding specifically for climate change science. CCSP is currently responsible for preparing these climate change science reports, which duplicate to some extent data provided by OMB in the science category.

In 1992, the United States ratified the United Nations Framework Convention on Climate Change, which has as its objective the stabilization of greenhouse gas concentrations in the earth's atmosphere but does not impose specific goals or timetables for limiting emissions. In response, federal agencies developed a plan for reducing greenhouse gas emissions, primarily through voluntary efforts by companies, state and local governments, and other organizations. Since that time, federal agencies have sponsored voluntary programs that encourage private and public sector entities to curb their greenhouse gas emissions by providing technical assistance, education, research, and information sharing. The administration has promoted such voluntary programs, along with other measures, as an alternative to mandatory emissions reductions.
In February 2002, the president announced a Global Climate Change Initiative to reduce the rate of increase in greenhouse gas emissions in the United States. Specifically, he established the goal of reducing the emissions intensity of the United States by 18 percent between 2002 and 2012. Emissions intensity is a ratio calculated by dividing emissions in a given year by economic output for that year. In support of this goal, the president announced two new voluntary programs aimed at securing private sector agreements to voluntarily reduce greenhouse gas emissions or emissions intensity.

Climate Leaders, an Environmental Protection Agency (EPA)-sponsored government-industry partnership established in February 2002, works with firms to develop long-term climate change strategies. According to EPA officials, as of November 2005, 74 firms were participating in the program. Climate VISION (Voluntary Innovative Sector Initiatives: Opportunities Now), introduced in February 2003 and coordinated by the Department of Energy (DOE) in cooperation with EPA and other federal agencies, works with trade groups to develop strategies to reduce their members' greenhouse gas emissions intensity. Most industries participating in the program are represented by a single trade group. As of November 2005, 14 industry sectors and the Business Roundtable—an association of chief executive officers representing diverse sectors of the economy—were participating in the program. According to DOE, the trade groups participating in Climate VISION typically have high energy requirements.

OMB reports indicated that federal funding for climate change increased from $2.35 billion in 1993 to $5.09 billion in 2004, or from $3.28 billion to $5.09 billion after adjusting for inflation, and that funding increased in three of the four categories between 1993 and 2004. However, changes in reporting methods limit the comparability of funding data over time, making it unclear whether total funding actually increased as reported. OMB reports also indicated that 12 of the 14 federal agencies receiving funding for climate change programs in 2004 received more funding in that year than they had in 1993, but again, unexplained modifications in the reports' contents limit the comparability of agencies' funding data, making it difficult to determine whether funding increased as OMB reported.
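Two calculations recur in this discussion: the emissions-intensity ratio underlying the 18 percent goal and the comparison of nominal with inflation-adjusted funding. The Python sketch below is illustrative only; the dollar figures are the reported totals, the intensity inputs are placeholders, and the goal check simply restates the 18 percent target as a ratio.

```python
def emissions_intensity(emissions, output):
    """Emissions intensity: emissions in a given year divided by
    economic output for that year."""
    return emissions / output

def meets_intensity_goal(i_start, i_end, cut=0.18):
    """The 2002 initiative targets an 18 percent cut in intensity by 2012,
    i.e., ending intensity no more than 82 percent of starting intensity."""
    return i_end <= (1 - cut) * i_start

# Nominal vs. real growth in reported climate change funding, 1993-2004
# (dollar figures in billions, as reported; 1993 value restated in 2004 dollars).
nominal_1993, nominal_2004, real_1993 = 2.35, 5.09, 3.28
print(f"nominal growth: {nominal_2004 / nominal_1993 - 1:.0%}")  # ~117 percent
print(f"real growth:    {nominal_2004 / real_1993 - 1:.0%}")     # ~55 percent
```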
We found that federal funding for climate change, as reported by OMB, increased from $2.35 billion in 1993 to $5.09 billion in 2004 (117 percent), or from $3.28 billion to $5.09 billion (55 percent) after adjusting for inflation, and reported funding increased for three of the four categories between 1993 and 2004. However, changes in reporting methods limit the comparability of funding data over time, and therefore it was unclear whether total funding actually increased as OMB reported. We were unable to compare changes in the fourth category, climate-related tax expenditures, because OMB reported estimates for proposed but not existing tax expenditures from 1993 to 2004. Specifically, for 1993 through 2004, we found the following:

Technology funding, as reported by OMB, increased from $845 million to $2.87 billion (240 percent), or from $1.18 billion to $2.87 billion (143 percent) in inflation-adjusted dollars. The share of total climate change funding devoted to technology increased from 36 percent to 56 percent. However, we identified several ways that technology funding presented in OMB's more recent reports may not be comparable to previously reported technology funding. For example, OMB added accounts to the technology category that were not reported before or were presented in different categories and did not explain whether these accounts reflected the creation of new programs or a decision to count existing programs for the first time. OMB also expanded the definitions of some accounts to include more activities without clarifying how the definitions were changed. Furthermore, OMB reports include a wide range of federal climate-related programs and activities, some of which (such as scientific research on global environmental change) are explicitly climate change programs, whereas others (such as technology initiatives promoting emissions reduction or encouraging energy conservation) are not solely for climate change purposes.

Science funding increased from $1.31 billion to $1.98 billion (51 percent), according to both OMB and CCSP, or from $1.82 billion to $1.98 billion (9 percent) in inflation-adjusted dollars. However, science's share of total climate change funding decreased from 56 percent to 39 percent. OMB and CCSP generally presented consistent climate change science funding totals from 1993 through 2004. CCSP reports also presented more detailed data, but these data were difficult to compare over the entire period because CCSP periodically introduced new categorization methods without explaining how the new methods related to the ones they replaced. Specifically, over the period CCSP used seven different methods to present detailed science funding data, making it impossible to develop consistent funding trends for the entire timeframe.

International assistance funding reported by OMB increased from $201 million to $252 million (25 percent), but decreased from $280 million to $252 million (10 percent) in inflation-adjusted dollars. Moreover, its share of total climate change funding decreased from 9 percent to 5 percent. International assistance funding reported by OMB was generally comparable over time, although several new accounts were added without explanation.

Tax expenditures were not fully reported by OMB for any year, even though climate-related tax expenditures amounted to hundreds of millions of dollars in forgone federal revenue in fiscal year 2004. Although not required to do so, OMB reported proposed climate-related tax expenditures. However, OMB did not report revenue loss estimates for existing climate change-related tax expenditures. Whereas OMB reported no funding for existing climate change-related tax expenditures in 2004, the federal budget for that year listed four tax expenditures related to climate change, including estimated revenue losses of $330 million for incentives to develop certain renewable energy sources.

Table 1 shows federal climate change funding by category between 1993 and 2004. Table 2 shows funding data for the seven largest technology accounts, which accounted for 92 percent of technology funding in 2004.

OMB and CCSP officials told us that time constraints and other factors contributed to changes in report structure and content over time. For example, OMB officials said that the short timeline for completing the report required by the Congress (within 45 days of submitting the upcoming fiscal year's budget for the three most recent reports) limited OMB's ability to analyze data submitted by agencies.
OMB and CCSP officials also noted that each report was prepared in response to a one-time requirement and that they were not directed to use the same report format over time or to explain differences in methodology from one report to another. The director of CCSP told us that changes to climate change science reports, such as the creation and deletion of different categorization methods, were made because CCSP was moving toward a goals-oriented budget, and categorization methods changed as the program evolved. The director also said that future reports will explicitly present budget data as they were reported in prior reports, to retain continuity even if new methods are introduced.

Regarding tax expenditures, OMB officials said that they consistently included in the reports those proposed tax expenditures where a key purpose was specifically to reduce greenhouse gas emissions. They also stated that they had not included existing tax expenditures that may reduce greenhouse gas emissions but that were enacted for other purposes, and that the Congress had not provided any guidance to suggest that additional tax expenditure data should be included in the annual reports.

OMB reported that 12 of the 14 agencies receiving funding for climate change programs in 2004 received more funding in that year than they had in 1993. However, it is unclear whether funding changed as OMB reported because of, among other things, unexplained changes in what was defined as climate change funding. Reported funding for the Department of Energy (DOE), the agency with the most reported climate-related funding in 2004, increased from $963 million to $2.52 billion (162 percent), or from $1.34 billion to $2.52 billion (88 percent) after adjusting for inflation. DOE and NASA accounted for 81 percent of the reported increase in funding from 1993 through 2004. However, because agency funding totals are composed of individual accounts, changes in the reports' contents, such as the unexplained addition of accounts to the technology category, limit the comparability of agencies' funding data over time, making it difficult to determine whether these are real or definitional increases.

OMB stated that it consistently reported funding data for the 3 years presented in each of its reports and that there had been no requirement to use a consistent format from one report to the next or to explain differences in methodology from one report to another. We recommended that OMB and CCSP use the same format for presenting data from year to year, explain changes in report content or format when they are introduced, and provide and maintain a crosswalk comparing new and old report structures when changes in report format are introduced. We also recommended that OMB include data on existing climate-related tax expenditures in future reports. OMB agreed with the recommendations relating to report content and format and said it was studying the other recommendations. CCSP agreed with all of our recommendations. Both agencies appear to have taken actions in response to our recommendations, but we have not comprehensively reviewed the extent to which they may have done so.

EPA and DOE expect participants in their respective programs to complete a number of actions within certain timeframes. However, participants' progress toward completing those actions was mixed, and neither agency had a written policy for dealing with this situation. EPA estimated that the first fifty Climate Leaders participants accounted for at least 8 percent of U.S.
emissions on average for the years 2000 through 2003, and DOE estimated that Climate VISION participants account for over 40 percent of U.S. greenhouse gas emissions; both agencies believe these to be conservative estimates. While EPA and DOE are participating in an interagency process to estimate the impact of their programs on emissions, we found that accurately attributing specific emissions reductions to either program would be difficult.

EPA and DOE expect participants in their voluntary emissions reduction programs to complete a number of actions; however, participants' progress toward completing those actions, as well as the agencies' efforts to track accomplishments, varied. For example, within about 1 year of joining the program, EPA expects firms to enter into discussions with the agency to establish an emissions reduction goal and to complete these negotiations, generally within another year. As of November 2005, 38 of the 74 firms had established goals. Most of the other 36 firms, including 13 that joined in 2002, were still working to establish goals; the rest had joined the program too recently to have done so. EPA officials told us that they were developing a system for tracking firms' progress in accomplishing the key steps associated with participating in the program, but were still in the process of obtaining and validating data from participants. While EPA officials told us that they would be willing to remove participants from the program if they were not progressing as expected, they had not specified the conditions under which they would do so.

DOE asks that trade groups participating in its Climate VISION program develop a work plan for measuring and reporting emissions information within about 1 year after joining the program and report their emissions levels. As of November 2005, 11 of the 15 participating trade groups had completed their work plans and 5 groups had reported on emissions. As of November 2005, DOE officials said that the agency did not have a system for tracking how long each group takes to complete its work plan and report emissions data. Furthermore, while DOE officials said that the agency would remove groups from the program if they did not seem to be taking sufficient action, DOE had not yet established specific deadlines for reporting emissions.

Because DOE did not have a system for tracking how long participants take to complete key program steps—and neither DOE nor EPA had established written policies for taking action against participants not progressing as expected—it will be difficult for them to ensure that all participants are meeting program expectations. We recommended that DOE develop a system for tracking participants' progress in completing key steps associated with its Climate VISION Program, and that both EPA and DOE develop written policies establishing the actions the agencies will take if participants are not completing program steps on time. DOE and EPA appear to have taken steps to implement our recommendation regarding a written policy, but we have not conducted a comprehensive review to determine the extent to which the recommendations have been implemented.

The specific types of emission reduction goals being established by Climate Leaders firms and Climate VISION groups varied.
Of the 38 firms participating in Climate Leaders that had established emission reduction goals as of November 2005, 19 had committed to reduce their total greenhouse gas emissions, 18 had committed to reduce their emissions intensity (emissions per unit of output), and 1 firm had committed to reduce both its total emissions and its emissions intensity. Furthermore, firms' goals differed in their geographic scope and the time period they covered. For example, Cinergy Corporation pledged to reduce its total U.S. domestic greenhouse gas emissions by 5 percent from 2000 to 2010, while Pfizer, Inc., pledged to reduce its worldwide emissions by 35 percent per dollar of revenue from 2000 to 2007. Table 3 presents information on the 38 firms' goals.

In contrast to EPA's program, 14 of the 15 trade groups participating in DOE's Climate VISION established an emissions-related goal in collaboration with DOE or another federal agency upon joining the program. (The remaining group, the Business Roundtable, did not establish a quantitative emissions goal because of the diversity of its membership.) According to a DOE official, participants need not establish new goals as a condition of joining the program. Nine of the 14 groups had set goals to improve their emissions intensity, 2 groups had established a goal of reducing emissions of specific greenhouse gases, 2 groups had set goals to improve energy efficiency, and 1 group had established a goal of both reducing its total emissions and improving its energy efficiency. For example, the American Forest & Paper Association pledged to reduce emissions intensity by 12 percent between 2002 and 2012, while the American Iron and Steel Institute agreed to a 10 percent sectorwide increase in energy efficiency by 2012. Some of these groups stated that their goals would be difficult to achieve, however, without reciprocal federal actions, such as tax incentives or regulatory relief. Table 4 presents information on Climate VISION industry groups' goals.

EPA and DOE both estimated the share of total U.S. greenhouse gas emissions attributable to participants in their respective programs and were working to develop an estimate of the programs' impacts. EPA estimated that Climate Leaders participants accounted for at least 8 percent of U.S. emissions. According to EPA, this was a conservative estimate, because it was based solely on emissions from the program's first 50 participants. DOE estimated that Climate VISION participants accounted for over 40 percent of U.S. greenhouse gas emissions and noted that this was a conservative estimate. Both agencies were participating in an interagency process to estimate the effect of their programs on reducing emissions, which was expected to be completed in 2006.

However, preparing accurate estimates of these programs' impacts will be difficult. First, there is considerable overlap between these two programs and other voluntary programs. For example, 60 of the 74 Climate Leaders participants also participated in one or more other EPA programs, and 3 of the 14 Climate VISION participants with quantitative goals also participated in EPA voluntary programs. Such overlap makes it difficult to determine the effects that are attributable to a given program. Second, it will be difficult to determine how much of a firm's or trade group's emissions reductions can be attributed to its participation in the program because the level of a participant's emissions in the absence of the program is unknown. For example, higher energy prices or changes in business operations could lead to emissions reductions, making it difficult to distinguish reductions attributable to participation in the program versus other causes.
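The distinction drawn above between total-emissions goals (such as Cinergy's) and intensity goals (such as Pfizer's) matters because an intensity goal can be met even while absolute emissions rise, provided output grows faster than emissions. A brief illustration with invented figures:

```python
# Hypothetical firm used only to illustrate intensity vs. total goals;
# all figures are invented.
e_base, y_base = 100.0, 1_000.0    # base-year emissions (tons) and revenue
e_later, y_later = 110.0, 1_600.0  # later year: emissions +10%, revenue +60%

total_change = e_later / e_base - 1
intensity_change = (e_later / y_later) / (e_base / y_base) - 1

print(f"total emissions change:     {total_change:+.0%}")      # +10%
print(f"emissions intensity change: {intensity_change:+.0%}")  # -31%
```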
In conclusion, we found that the lack of consistency and clarity in OMB's and CCSP's reports made it difficult to identify trends in federal climate change funding. A better understanding of these expenditures is needed before it is possible to assess CCSP's and other federal agencies' progress toward their climate change goals. We therefore made a total of seven recommendations to OMB and three to CCSP to clarify how they present climate change funding information. OMB agreed with most of our recommendations, and CCSP agreed with all of them. Both agencies appear to have taken steps to implement our recommendations, but we have not comprehensively reviewed the extent to which they have done so.

We also found that opportunities remain to improve the progress of both voluntary programs, since some industry participants in both programs appeared not to be progressing at the rate expected by the agencies. In addition, it will be difficult for the agencies to estimate the emissions reductions attributable to their programs, both because organizations often participate in more than one voluntary program and because it is hard to determine how much of a participant's emissions reductions resulted directly from the program rather than from other factors, such as higher energy prices, which generally lead to lower emissions. Therefore, we recommended that DOE develop a system for tracking participants' progress in completing key steps associated with the program, and that both EPA and DOE develop written policies that establish the actions the agencies will take if participants are not completing program steps on time. EPA did not comment on our recommendation; DOE stated that it agreed with our recommendation regarding a tracking system and would consider our recommendation regarding establishing a written policy. We have not fully reviewed the extent to which the recommendations have been implemented.

Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have.

For further information regarding this testimony, please contact me at (202) 512-3841 or [email protected]. John Healey, Anne K. Johnson, and Vincent P. Price made key contributions to this testimony. John Delicath, Karen Keegan, and Charles Egan also made important contributions.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Office of Management and Budget (OMB) reports on federal funding for climate change research and for the development of technologies to reduce greenhouse gas emissions, among other things. The Climate Change Science Program (CCSP), which coordinates many agencies' activities, also reports on science funding. The Environmental Protection Agency's (EPA's) Climate Leaders and the Department of Energy's (DOE's) Climate VISION programs aim to reduce such emissions through voluntary industry efforts. This testimony is based on GAO's August 2005 report Climate Change: Federal Reports on Climate Change Funding Should Be Clearer and More Complete (GAO-05-461) and its April 2006 report Climate Change: EPA and DOE Should Do More to Encourage Progress Under Two Voluntary Programs (GAO-06-97), which addressed (1) reported changes in federal climate change funding and (2) the status and progress of two federal voluntary climate programs.

Federal funding for climate change, as reported by OMB, increased from $2.35 billion in 1993 to $5.09 billion in 2004 (117 percent), or from $3.28 billion to $5.09 billion (55 percent) after adjusting for inflation. OMB reports show that, during this period, funding increased for technology, science, and--before adjusting for inflation--international assistance. CCSP, which reports only science funding, generally presented totals that were consistent with OMB's, but provided more detail. However, changes in reporting methods used by both OMB and CCSP limit the comparability of funding data over time, and therefore it was unclear whether total funding actually increased as reported. Furthermore, we were unable to compare changes in the fourth category (climate-related tax expenditures), because from 1993 to 2004 OMB reported estimates for proposed but not existing tax expenditures. With regard to individual agencies' funding, OMB reported that 12 of the 14 agencies receiving funding for climate change programs in 2004 received more funding in that year than they had in 1993, but it is unclear whether funding changed as OMB reported because of unexplained changes in what was defined as climate change funding. Reported funding for DOE, the agency with the most reported climate-related funding in 2004, increased from $963 million to $2.52 billion (162 percent), or from $1.34 billion to $2.52 billion (88 percent) after adjusting for inflation. DOE and the National Aeronautics and Space Administration accounted for 81 percent of the reported increase in funding from 1993 through 2004. However, because agency funding totals are composed of individual accounts, changes in the reports' contents, such as the unexplained addition of accounts to the technology category, limit the comparability of agencies' funding data over time, making it difficult to determine whether these are real or definitional increases.

EPA and DOE expected participants in their voluntary climate programs to complete several program steps within general time frames, but participants' progress in completing those steps within the time frames was mixed. Furthermore, DOE did not have a system for tracking groups' progress in completing program steps, and neither DOE nor EPA had a written policy specifying the consequences for participants not proceeding as expected. In addition, EPA and DOE had both estimated the share of total U.S. greenhouse gas emissions attributable to participants in their respective programs and were working through an interagency process to quantify emissions reductions attributable to their programs.
However, determining reductions attributable to each program will be challenging because of the overlap between these programs and other voluntary programs and because it is difficult to determine how much of a participant's emissions reductions can be attributed to its participation in the program, since the participant's emissions in the absence of the program cannot be known.
The concept of “universal service” has traditionally meant providing residential telephone subscribers with nationwide access to basic telephone services at reasonable rates. Universal service programs traditionally targeted support to low-income customers and customers in rural and other areas where the costs of providing basic telephone service were high. The Telecommunications Act of 1996 broadened the scope of universal service to include, among other things, support for schools and libraries. The act instructed FCC to establish a universal service support mechanism to ensure that eligible schools and libraries have affordable access to and use of certain telecommunications services for educational purposes. In addition, Congress authorized FCC to “establish competitively neutral rules to enhance, to the extent technically feasible and economically reasonable, access to advanced telecommunications and information services for all public and nonprofit elementary and secondary school classrooms . . . and libraries. . . .” Based on this direction, and following the recommendations of the Federal-State Joint Board on Universal Service, FCC established the schools and libraries universal service mechanism that is commonly referred to as the E-rate program. The program is funded through statutorily mandated payments by companies that provide interstate telecommunications services. Many of these companies, in turn, pass their contribution costs on to their subscribers through a line item on subscribers’ phone bills. FCC capped funding for the E-rate program at $2.25 billion per year, although funding requests by schools and libraries can greatly exceed the cap. For example, schools and libraries requested more than $4.2 billion in E-rate funding for the 2004 funding year. In 1998, FCC appointed USAC as the program’s permanent administrator, although FCC retains responsibility for overseeing the program’s operations and ensuring compliance with the commission’s rules. In response to congressional conference committee direction, FCC has specified that USAC “may not make policy, interpret unclear provisions of the statute or rules, or interpret the intent of Congress.” USAC is responsible for carrying out the program’s day-to-day operations, such as maintaining a Web site that contains program information and application procedures; answering inquiries from schools and libraries; processing and reviewing applications; making funding commitment decisions and issuing funding commitment letters; and collecting, managing, investing, and disbursing E-rate funds. FCC permits—and in fact relies on—USAC to establish administrative procedures that program participants are required to follow as they work through the application and funding process. The FCC IG has noted that program participants generally consider USAC the primary source for guidance on the rules governing the E-rate program. See appendix III for a more detailed explanation of the structure of USAC. Under the E-rate program, eligible schools, libraries, and consortia that include eligible schools and libraries may receive discounts for eligible services. Eligible schools and libraries may apply annually to receive E-rate support. The program places schools and libraries into various discount categories, based on indicators of need, so that the school or library pays a percentage of the cost for the service and the E-rate program funds the remainder. E-rate discounts range from 20 percent to 90 percent. 
Schools and libraries in areas with higher percentages of students eligible for free or reduced-price lunches through the National School Lunch Program (or a federally approved alternative mechanism) qualify for higher discounts on eligible services. Schools and libraries located in rural areas also receive greater discounts in most cases, as shown in table 1.

FCC has defined four classes of services that are eligible for E-rate support: telecommunications services, such as local, long-distance, and international telephone service as well as high-speed data links (e.g., T-1 lines); Internet access services, such as broadband Internet access and e-mail; internal connections, such as telecommunications wiring, routers, switches, and network servers that are necessary to transport information to individual classrooms; and basic maintenance on internal connections. The list of specific eligible services within each class is updated annually and posted on USAC’s Web site.

FCC’s rules provide that requests for telecommunications services and Internet access for all discount categories shall receive first priority for the available funding (Priority One services). The remaining funds are allocated to requests for support for internal connections and basic maintenance (Priority Two services), beginning with the most economically disadvantaged schools and libraries, as determined by the discount matrix. Because of this prioritization, not all requests for internal connections necessarily receive funding.

Prior to applying for discounted services, an applicant must conduct a technology assessment and develop a technology plan to ensure that any services it purchases will be used effectively. The applicant submits a form to USAC setting forth its technological needs. Once the school or library has complied with the commission’s competitive bidding requirements and entered into agreements with service providers for eligible services, it must file a second form with USAC that details the types and costs of the services being contracted for, the vendors providing the services, and the amount of discount being requested. USAC reviews the forms and issues funding commitment decision letters (USAC could reduce the amount requested if the school or library has included ineligible services in its application or has calculated its discount category incorrectly). Generally, it is the service provider that seeks reimbursement from USAC for the discounted portion of the service.
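Because requests regularly exceed the $2.25 billion annual cap, the prioritization rules described above determine which requests receive funding. The sketch below illustrates that logic with hypothetical request data; it is a simplification, not a model of USAC’s actual application processing:

```python
# Minimal sketch of the funding prioritization described above, using
# hypothetical request data. Real processing involves application review,
# eligibility checks, and commitment letters, none of which is modeled here.

CAP = 2.25e9  # annual program cap in dollars

# (priority, discount, amount_requested) -- all hypothetical
requests = [
    (1, 0.40, 9e8),   # telecom/Internet access (Priority One)
    (1, 0.90, 6e8),
    (2, 0.90, 7e8),   # internal connections (Priority Two)
    (2, 0.70, 5e8),
    (2, 0.40, 4e8),
]

def allocate(requests, cap):
    """Fund all Priority One requests first; then fund Priority Two
    requests from the highest discount band (the most economically
    disadvantaged applicants) downward until the cap is exhausted."""
    funded, remaining = [], cap
    for prio, disc, amt in sorted(requests, key=lambda r: (r[0], -r[1])):
        grant = min(amt, remaining)
        if grant > 0:
            funded.append((prio, disc, grant))
            remaining -= grant
    return funded, remaining

funded, left = allocate(requests, CAP)
for prio, disc, amt in funded:
    print(f"Priority {prio}, {disc:.0%} band: ${amt/1e6:,.0f}M funded")
print(f"Cap remaining: ${left/1e6:,.0f}M")
# With these figures, all Priority One requests are funded, the 90 percent
# Priority Two band is funded in full, the 70 percent band only partially,
# and the 40 percent band not at all.
```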
FCC established an unusual structure for the E-rate program but has never conducted a comprehensive assessment of which federal requirements, policies, and practices apply to the program, to USAC, or to the Universal Service Fund itself. FCC recently began to address a few of these issues, concluding that as a permanent indefinite appropriation, the Universal Service Fund is subject to the Antideficiency Act and its issuance of commitment letters constitutes obligations for purposes of the act. We agree with FCC’s determinations on these issues, as explained in detail in appendix II. However, FCC’s conclusions concerning the status of the Universal Service Fund raise further issues relating to the collection, deposit, obligation, and disbursement of those funds—issues that FCC needs to explore and resolve comprehensively rather than in an ad hoc fashion as problems arise.

The Telecommunications Act of 1996 neither specified how FCC was to administer universal service to schools and libraries nor prescribed the structure and legal parameters of the universal service mechanisms to be created. The Telecommunications Act required FCC to consider the recommendations of the Federal-State Joint Board on Universal Service and then to develop specific, predictable, and equitable support mechanisms. Using the broad language of the act, FCC crafted an ambitious program for schools and libraries—roughly analogous to a grant program—and gave the program a $2.25 billion annual funding cap. To carry out the day-to-day activities of the E-rate program, FCC relied on a structure it had used for other universal service programs in the past—a not-for-profit corporation established at FCC’s direction that would operate under FCC oversight.

However, the structure of the E-rate program is unusual in several respects compared with other federal programs: FCC appointed USAC as the permanent administrator of the Universal Service Fund, and FCC’s Chairman has final approval over USAC’s Board of Directors. USAC is responsible for administering the program under FCC orders, rules, and directives. However, USAC is not part of FCC or any other government entity; it is not a government corporation established by Congress; and no contract or memorandum of understanding exists between FCC and USAC for the administration of the E-rate program. Thus, USAC operates and disburses funds under less explicit federal ties than many other federal programs.

Questions as to whether the monies in the Universal Service Fund should be treated as federal funds have troubled the program from the start. Even though the fund has been listed in the budget of the United States and, since fiscal year 2004, has been subject to an annual apportionment from OMB, the monies are maintained outside of Treasury accounts by USAC and some of the monies have been invested. The United States Treasury implements the statutory controls and restrictions involving the proper collection and deposit of appropriated funds, including the financial accounting and reporting of all receipts and disbursements, the security of appropriated funds, and agencies’ responsibilities for those funds. As explained below, appropriated funds are subject, unless specifically exempted by law, to a variety of statutory controls and restrictions. These controls and restrictions, among other things, limit the purposes for which federal funds can be used and provide a scheme of accountability for federal monies. Key requirements are in Title 31 of the United States Code and the applicable Treasury regulations, which govern fiscal activities relating to the management, collection, and distribution of public money.

Since the inception of the E-rate program, FCC has struggled with identifying the nature of the Universal Service Fund and the managerial, fiscal, and accountability requirements that apply to the fund. FCC’s Office of Inspector General first looked at the Universal Service Fund in 1999 as part of its audit of the commission’s fiscal year 1999 financial statement because FCC had determined that the Universal Service Fund was a component of FCC for financial reporting purposes. During that audit, the FCC IG questioned commission staff regarding the nature of the fund and, specifically, whether it was subject to the statutory and regulatory requirements for federal funds.
In the next year’s audit, the FCC IG noted that the commission could not ensure that Universal Service Fund activities were in compliance with all laws and regulations because the issue of which laws and regulations were applicable to the fund was still unresolved at the end of the audit. FCC officials told us that the commission has substantially resolved the IG’s concerns through recent orders, including FCC’s 2003 order that USAC begin preparing Universal Service Fund financial statements consistent with generally accepted accounting principles for federal agencies (GovGAAP) and keep the fund in accordance with the United States Government Standard General Ledger. While it is true that these steps and other FCC determinations discussed below should provide greater protections for universal service funding, FCC has addressed only a few of the issues that need to be resolved. In fact, staff from the FCC IG’s office told us that they do not believe the commission’s GovGAAP order adequately addressed their concerns because the order did not comprehensively detail which fiscal requirements apply to the Universal Service Fund and which do not.

FCC has, however, made some determinations concerning the status of the Universal Service Fund and the fiscal controls that apply. FCC’s determinations, and our analysis, in brief, are discussed below. (See app. II for our more thorough legal analysis of fiscal law issues involving the Universal Service Fund.)

Status of funds as appropriated funds. In assessing the financial statement reporting requirements for FCC components in 2000, FCC concluded that the Universal Service Fund constitutes a permanent indefinite appropriation (i.e., funding appropriated or authorized by law to be collected and available for specified purposes without further congressional action). We agree with FCC’s conclusion. Typically, Congress will use language of appropriation, such as that found in annual appropriations acts, to identify a fund or account as an appropriation and to authorize an agency to enter into obligations and make disbursements out of available funds. Congress, however, appropriates funds in a variety of ways other than in regular appropriations acts. Thus, a statute that contains a specific direction to pay and a designation of funds to be used constitutes an appropriation. In these statutes, Congress (1) authorizes the collection of fees and their deposit into a particular fund, and (2) makes the fund available for expenditure for a specified purpose without further action by Congress. This authority to obligate or expend collections without further congressional action constitutes a continuing appropriation or a permanent appropriation of the collections. Because the Universal Service Fund’s current authority stems from a statutorily authorized collection of fees from telecommunications carriers and the expenditure of those fees for a specified purpose (that is, the various types of universal service), it meets both elements of the definition of a permanent appropriation.

Decision regarding the Antideficiency Act. As noted above, in October 2003, FCC ordered USAC to prepare financial statements for the Universal Service Fund, as a component of FCC, consistent with GovGAAP, which FCC and USAC had not previously applied to the fund.
In February 2004, staff from USAC realized during contractor-provided training on GovGAAP procedures that the commitment letters sent to beneficiaries (notifying them whether or not their funding is approved and in what amount) might be viewed as “obligations” of appropriated funds. If so, and if FCC also found the Antideficiency Act—which does not allow an agency or program to make obligations in excess of available budgetary resources—to be applicable to the E-rate program, then USAC would need to dramatically increase the program’s cash-on-hand and lessen the program’s investments to provide budgetary authority sufficient to satisfy the Antideficiency Act. As a result, USAC suspended funding commitments in August 2004 while waiting for a commission decision on how to proceed.

At the end of September 2004—facing the end of the fiscal year—FCC decided that commitment letters were obligations, that the Antideficiency Act did apply to the program, and that USAC would need to immediately liquidate some of its investments to come into compliance with the Antideficiency Act. According to USAC officials, the liquidations cost the fund approximately $4.6 million in immediate losses and could potentially result in millions in foregone annual interest income. FCC was slow to recognize and address the issue of the applicability of the Antideficiency Act, resulting in the abrupt decision to suspend funding commitment decision letters and liquidate investments. In response to these events, in December 2004, Congress passed a bill granting the Universal Service Fund a one-year exemption from the Antideficiency Act.

Nevertheless, FCC’s conclusion on this issue was correct: Absent a statutory exemption, the Universal Service Fund is subject to the Antideficiency Act, and its funding commitment decision letters constitute obligations for purposes of the act. The Antideficiency Act applies to an “officer or employee of the United States Government . . . mak[ing] or authorizing an expenditure or obligation . . . from an appropriation or fund.” 31 U.S.C. § 1341(a). As discussed above, the Universal Service Fund is an “appropriation or fund.” Even though USAC—a private entity whose employees are not federal officers or employees—is the administrator of the program and the entity that obligates and disburses money from the fund, application of the act is not negated. This is because, as recognized by FCC, it, and not USAC, is the entity that is legally responsible for the management and oversight of the E-rate program and because FCC’s employees are federal officers and employees of the United States subject to the Antideficiency Act. Thus, the Universal Service Fund will again be subject to the Antideficiency Act when the one-year statutory exemption expires, unless action is taken to extend or make permanent the exemption.

An important issue that arises from the application of the Antideficiency Act to the Universal Service Fund is what actions constitute obligations chargeable against the fund. Under the Antideficiency Act, an agency may not incur an obligation in excess of the amount available to it in an appropriation or fund. Thus, proper recording of obligations with respect to the timing and amount of such obligations permits compliance with the Antideficiency Act by ensuring that agencies have adequate budget authority to cover all of their obligations. Our decisions have defined an “obligation” as a commitment creating a legal liability of the government, including a “legal duty . . . which could mature into a liability by virtue of actions on the part of the other party beyond the control of the United States. . . .” With respect to the Universal Service Fund, the funding commitment decision letter provides the school or library with the authority to obtain services from a provider with the commitment that the school or library will receive a discount and the service provider will be paid for the discounted portion with E-rate funding. Although the school or library could decide not to seek the services or the discount, so long as the funding commitment decision letter remains valid and outstanding, USAC and FCC no longer control the Universal Service Fund’s liability; it is dependent on the actions taken by the school or library. Consequently, we agree with FCC that a recordable obligation is incurred at the time of issuance of the funding commitment decision letter indicating approval of the applicant’s discount.
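A minimal sketch of that recording discipline follows, with hypothetical identifiers and figures; it illustrates the Antideficiency Act constraint, not FCC’s or USAC’s actual accounting systems:

```python
# Illustrative obligation ledger. Identifiers and figures are hypothetical.
# The point: record each funding commitment decision letter as an obligation
# when it is issued, and never let total obligations exceed available
# budgetary resources (the Antideficiency Act constraint).

class ObligationLedger:
    def __init__(self, budgetary_resources: float):
        self.budgetary_resources = budgetary_resources
        self.obligations: dict[str, float] = {}

    @property
    def obligated(self) -> float:
        return sum(self.obligations.values())

    def record_commitment_letter(self, letter_id: str, amount: float) -> None:
        """Record an obligation at issuance of a funding commitment letter."""
        if self.obligated + amount > self.budgetary_resources:
            # Issuing the letter anyway would exceed available resources.
            raise RuntimeError(
                f"cannot obligate ${amount:,.0f}: only "
                f"${self.budgetary_resources - self.obligated:,.0f} remains"
            )
        self.obligations[letter_id] = amount

ledger = ObligationLedger(budgetary_resources=2.25e9)
ledger.record_commitment_letter("FRN-0001", 1.5e9)      # recorded
try:
    ledger.record_commitment_letter("FRN-0002", 1.0e9)  # would exceed resources
except RuntimeError as err:
    print(err)  # -> cannot obligate $1,000,000,000: only $750,000,000 remains
```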
While we agree with FCC’s determinations that the Universal Service Fund is a permanent appropriation subject to the Antideficiency Act and that its funding commitment decision letters constitute recordable obligations of the Universal Service Fund, there are several significant fiscal law issues that remain unresolved. We believe that where FCC has determined that fiscal controls and policies do not apply, the commission should reconsider these determinations in light of the status of universal service monies as federal funds. For example, in view of its determination that the fund constitutes an appropriation, FCC needs to reconsider the applicability of the Miscellaneous Receipts Statute, 31 U.S.C. § 3302, which requires that money received for the use of the United States be deposited in the Treasury unless otherwise authorized by law. FCC also needs to assess the applicability of other fiscal control and accountability statutes (e.g., the Single Audit Act and the Cash Management Improvement Act).

Another major issue that remains to be resolved involves the extent to which FCC has delegated some functions for the E-rate program to USAC. For example, are the disbursement policies and practices for the E-rate program consistent with statutory and regulatory requirements for the disbursement of public funds? Are some of the functions carried out by USAC, even though they have been characterized as administrative or ministerial, arguably inherently governmental activities that must be performed by government personnel? Resolving these issues in a comprehensive fashion, rather than continuing to rely on reactive, case-by-case determinations, is key to ensuring that FCC establishes the proper foundation of government accountability standards and safeguards for the E-rate program and the Universal Service Fund.

Although $13 billion in E-rate funding has been committed to beneficiaries during the past 7 years, FCC did not develop useful performance goals and measures to assess the specific impact of these funds on schools’ and libraries’ Internet access and to improve the management of the program, despite a recommendation by us in 1998 to do so. At the time of our current review, FCC staff was considering, but had not yet finalized, new E-rate goals and measures in response to concerns OMB raised in its 2003 assessment of the program. One of the management tasks facing FCC is to establish strategic goals for the E-rate program, as well as annual goals linked to them.
The Telecommunications Act of 1996 did not include specific goals for supporting schools and libraries, but instead used general language directing FCC to establish competitively neutral rules for enhancing access to advanced telecommunications and information services for all public and nonprofit private elementary and secondary school classrooms and libraries. As the agency accountable for the E-rate program, FCC is responsible under the Government Performance and Results Act of 1993 (Results Act) for establishing the program’s long-term strategic goals and annual goals, measuring its own performance in meeting these goals, and reporting publicly on how well it is doing.

In testimony before the Senate Committee on Commerce, Science, and Transportation in July 1998, we stated that the E-rate program was beginning its first funding year without clear and specific goals and measures. FCC simply noted in its performance plan for fiscal year 1999 that it would “work to improve the connections of classrooms, libraries, and rural health care facilities to the Internet by the end of 1999.” This type of general statement, with no specific goals and measures for agency accountability, is not in accord with the Results Act. We recommended in our testimony that FCC develop specific E-rate goals and measures before the end of fiscal year 1998, in time to gauge the effect of the program’s first year of operations. As we stated at that time, performance measurement is critical to determining a program’s progress in meeting its intended outcomes. Without clearly articulated goals and reliable performance data, Congress, FCC, and USAC would have a difficult time assessing the effectiveness of the program and determining whether operational changes were needed. Although FCC responded that our recommendation was “reasonable,” we noted in our subsequent March 1999 report on the program that FCC had not acted on our recommendation and again stressed the importance of implementing it.

FCC began including specific E-rate goals and measures in its fiscal year 2000 budget estimate submission to Congress and continued to set annual E-rate goals for fiscal years 2001 and 2002. No annual goals for fiscal years 2003 or 2004 were included in FCC’s performance reports, however.

The goals and measures that FCC set for fiscal years 2000 through 2002 were not useful in assessing the impact of E-rate program funding. The goals focused on achieving certain percentage levels of Internet connectivity during a given fiscal year for schools, public school instructional classrooms, and libraries. For example, FCC set a fiscal year 2001 goal of having 90 percent of public school instructional classrooms connected to the Internet. FCC measured its performance in meeting these goals using nationwide survey data from the Department of Education’s National Center for Education Statistics (NCES) on the percentages of public schools and public school instructional classrooms that are connected to the Internet. The percentages are based on a nationally representative sample of approximately 1,000 public schools that are surveyed about Internet access and Internet-related topics. A fundamental problem with using these NCES percentages is that a nationally representative sample covers both public schools that received E-rate funding for internal connections and those that did not.
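The sketch below makes the attribution problem concrete with invented data: the pooled rate rises substantially, but only a comparison of funded and unfunded schools (here, a simple difference-in-differences, which itself rests on strong assumptions) speaks to the program’s contribution.

```python
# Simulated illustration of the attribution problem; all numbers are
# invented. A pooled connectivity rate (the NCES-style statistic) mixes
# schools that received E-rate internal-connections funding with schools
# that did not, so it cannot, by itself, attribute gains to the program.

funded   = {"n": 400, "connected_1998": 0.50, "connected_2002": 0.92}
unfunded = {"n": 600, "connected_1998": 0.52, "connected_2002": 0.90}

def pooled_rate(year_key: str) -> float:
    groups = (funded, unfunded)
    total = sum(g["n"] for g in groups)
    return sum(g["n"] * g[year_key] for g in groups) / total

print(f"Pooled 1998 rate: {pooled_rate('connected_1998'):.0%}")  # -> 51%
print(f"Pooled 2002 rate: {pooled_rate('connected_2002'):.0%}")  # -> 91%

# A first step toward attribution is to compare gains across the two
# groups (a difference-in-differences); the pooled rate hides this.
gain_funded = funded["connected_2002"] - funded["connected_1998"]
gain_unfunded = unfunded["connected_2002"] - unfunded["connected_1998"]
print(f"Gain, funded schools:   {gain_funded:.0%}")              # -> 42%
print(f"Gain, unfunded schools: {gain_unfunded:.0%}")            # -> 38%
print(f"Difference-in-differences: {gain_funded - gain_unfunded:.0%}")
```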
The pooled percentages, therefore, do not directly measure the impact of E-rate funds, as opposed to other sources of funding, on increases in the percentage of schools connected to the Internet. This is a significant problem because the applicants’ requests for E-rate funds for internal connections have exceeded the amounts available for that purpose by billions of dollars. As a result, while E-rate funds for internal connections have been provided on a priority basis to applicants eligible for very high discounts (generally 70 percent to 80 percent or higher), funding has typically not been available to meet the internal connections requests of the other applicants. Only in the second funding year (1999) were funds sufficient to cover eligible internal connections requests for applicants in all of the discount bands. The applicants who were denied E-rate support for internal connections have had to rely on other funding sources for their internal connections needs, such as state and local government.

Even with these E-rate funding limitations, there has been significant growth in Internet access for public schools since the program issued its first funding commitments in late 1998. At the time, according to NCES data, 89 percent of all public schools and 51 percent of public school instructional classrooms already had Internet access. By 2002, 99 percent of public schools and 92 percent of public school instructional classrooms had Internet access. Yet although billions of dollars in E-rate funds have been committed since 1998, adequate program data were not developed to answer a fundamental performance question: How much of the increase since 1998 in public schools’ Internet access has been a result of the E-rate program, as opposed to other sources of federal, state, local, and private funding?

Another problem is that FCC did not consistently set annual goals for the two other major groups of E-rate beneficiaries—libraries and private schools. For example, FCC’s budget submission to Congress in February 2000 included a fiscal year 2001 goal of having 90 percent of libraries connected to the Internet. But this goal was dropped from FCC’s subsequent performance reports and budget estimate submissions, and no other library connectivity goal was set. As for private schools, no specific Internet connectivity goal was set for them until early 2002, when FCC included a fiscal year 2003 goal of having 85 percent of private school instructional classrooms connected to the Internet in both its fiscal year 2003 budget estimate to Congress (dated February 2002) and its 2001 annual performance report (dated March 2002). But these were the only instances where this goal appeared. It was dropped from FCC’s subsequent budget estimate submissions and annual performance reports. In addition to these goal-setting shortcomings, no performance measurement data for either libraries’ or private schools’ Internet connectivity levels have been included in any of FCC’s annual budget estimate submissions or performance reports.

The failure to measure the program’s impact on public and private schools and libraries over the past 7 years undercuts one of the fundamental purposes of the Results Act: to have federal agencies adopt a fact-based, businesslike framework for program management and accountability. The problem is not just a lack of data for accurately characterizing program results in terms of increasing Internet access.
Other basic questions about the E-rate program also become more difficult to address, such as the program’s efficiency and cost-effectiveness in supporting the telecommunications needs of schools and libraries. Performance goals and measures are used not only to assess a program’s impact, but also to develop strategies for resolving mission-critical management problems. Under the Results Act, managers should use performance data to identify performance gaps and determine where to target their resources to improve overall mission accomplishment. However, management-oriented goals have not been a feature of FCC’s performance plans, despite long-standing concerns about the program’s effectiveness in key areas.

For example, E-rate applicants’ technology needs are posted on USAC’s Web site to allow service providers an opportunity to bid on them. FCC has maintained that absent competitive bidding, the prices charged by service providers could be needlessly high, unnecessarily depleting the program’s funds and limiting its ability to support other applicants. In the commission’s fiscal year 2000 budget estimate submission, FCC included a goal for ensuring that the program’s competitive bidding process led to bids by two or more service providers for the majority of applicants. However, this goal was dropped from FCC’s subsequent budget submissions and annual performance reports. No other goal was developed in its place to assess how well the competitive bidding process is working.

In another example, FCC found that the E-rate participation rates for urban low-income school districts and rural school districts fell below the average participation rate for all eligible schools. When we were preparing our December 2000 report on the E-rate program, FCC officials told us they had finalized a new performance plan for the E-rate program that included tactical goals targeted at increasing participation by both of these groups, as well as rural libraries and libraries serving small areas. During our current review, when we asked FCC officials about the plan, we were told that it had not been implemented and that none of the FCC staff currently working on E-rate was familiar with the plan.

Another ongoing program management issue is that a significant amount of the funds committed annually goes unused by the applicants that requested them. This is troubling because, as noted earlier, the demand for funding is high and there is typically not enough money each year to meet all funding requests for internal connections. In December 2000, we recommended that FCC ascertain and address the difficulties that applicants may be having in this regard. FCC responded that it would undertake an analysis, with USAC, of the factors leading to funds being committed to applicants but not used; and USAC responded that it would develop and pursue options for narrowing the gap between commitments and disbursements, and discuss the options with FCC. Here again was an opportunity to develop a performance goal and measure to address this program management problem, but none was developed. Similarly, no performance goals and measures have been included in FCC’s performance reports related to the management responsibility of identifying and mitigating fraud, waste, and abuse of program funds. OMB also has raised concerns about FCC’s lack of E-rate performance goals and measures.
In its 2003 assessment of the E-rate program, OMB, using its Program Assessment Rating Tool (PART), noted that FCC discontinued specific E-rate program measures after fiscal year 2002. OMB’s overall PART rating for the E-rate program was “results not demonstrated.” This does not necessarily mean that the program is ineffective, but rather that its effectiveness is unknown. OMB observed that the program lacked long-term, outcome-oriented performance goals and efficiency measures against which to measure the program’s success in promoting connectivity and to improve and refine the program going forward. Because of this, OMB stated that it is not clear what the end goal of the E-rate program is or how to measure its effectiveness other than incremental increases in the number of classrooms and libraries connected to the Internet. While recognizing that E-rate funding is generally going to the intended beneficiaries of the program, OMB concluded that there was no way to tell whether the program has resulted in cost-effective deployment and use of advanced telecommunications services for schools and libraries. OMB also noted that there was little oversight to ensure that the program beneficiaries were using the funding appropriately and effectively. Among other things, OMB’s report recommended that for fiscal year 2005, FCC should develop a long-term outcome goal for the program, and consider reinstituting a connectivity measure and developing an efficiency measure.

FCC officials told us they have been working with OMB to respond to the concerns raised in its PART assessment and that several FCC staff have recently received training in the development of performance measures. At the time of our review, FCC was considering goals that involve classroom connectivity and program efficiency. As we discussed earlier, any meaningful goals on connectivity would need to have associated measurement data that could isolate the impact of E-rate funding on changes in connectivity in order to assess the program’s impact. It should be noted that with 99 percent of public schools and 92 percent of public school instructional classrooms connected to the Internet in 2002 (according to the most current NCES report on public school connectivity at the time of our review), applicants are moving past achieving initial connectivity to maintaining and upgrading existing connections over the long term. As a result, simple measures of Internet connectivity will be much less useful indicators of the program’s performance than in past years.

As for the program’s efficiency in providing support for telecommunications services, FCC staff told us they are considering a measure that would calculate and track the E-rate disbursements for each school (or school system) divided by the number of students, further broken down by the eligible services categories. An efficiency measure would be valuable, as there has been a long-standing concern about some applicants requesting funding for technology that greatly exceeds their needs (sometimes referred to as “goldplating”). While “E-rate dollars-per-student” ratios might be interesting data to assess in this regard, a performance measure needs to have a goal associated with it in order to be a meaningful tool for performance management.
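A minimal sketch of the calculation FCC staff described, using invented district names and disbursement data:

```python
# Hypothetical illustration of the efficiency measure FCC staff described:
# E-rate disbursements per student, broken down by eligible-service
# category. All names and figures are invented.

disbursements = {
    # school system: (students, {category: dollars disbursed})
    "District A": (12_000, {"telecom": 240_000, "internet": 60_000,
                            "internal connections": 900_000}),
    "District B": (3_000,  {"telecom": 90_000,  "internet": 30_000}),
}

for school, (students, by_category) in disbursements.items():
    total = sum(by_category.values())
    print(f"{school}: ${total / students:,.0f} per student overall")
    for category, dollars in by_category.items():
        print(f"  {category}: ${dollars / students:,.2f} per student")
```

As the report notes, such ratios become a performance management tool only when paired with an explicit goal or benchmark against which unusually high per-student spending can be judged.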
Currently, the program rules do not expressly establish a clear test for cost-effectiveness that could be used as a measurable goal, although in late 2003, FCC asked for comment on whether it would be beneficial or administratively feasible to develop such a test. At the time we concluded our review, FCC planned to finalize performance measures for the E-rate program and seek OMB approval in fiscal year 2005.

As noted above, OMB’s PART assessment recommended that FCC develop a long-term outcome goal for the program. “Outcomes” are the results or benefits of the products or services provided by the program. A basic policy issue associated with the E-rate program involves assessing the extent to which the billions of dollars of support for telecommunications services are providing the sought-after return on investment: improvement in the quality of education. As we noted in our 2000 report on the program, the complex issue of measuring educational outcomes lies outside FCC’s expertise and comes under the purview of the Department of Education. FCC officials told us they have made initial contact with staff at the Department of Education to discuss the development of a long-term E-rate outcome measure. According to FCC’s current timetable, the collection and analysis of data for outcome measures would start with funding year 2006.

FCC testified before Congress in June 2004 that it relies on three chief components in overseeing the E-rate program: rulemaking proceedings, beneficiary audits, and fact-specific adjudicatory decisions (i.e., appeals decisions). We found weaknesses with FCC’s implementation of each of these mechanisms, limiting the effectiveness of FCC’s oversight of the program and the enforcement of program procedures to guard against waste, fraud, and abuse of E-rate funding.

As part of its oversight of the E-rate program, FCC is responsible for establishing new rules and policies for the program and making changes to existing rules, as well as for providing the detailed guidance that USAC requires to effectively administer the program. FCC carries out this responsibility through its rulemaking process. FCC’s E-rate rulemakings, however, have often been broadly worded and lacking in specificity. Thus, USAC has needed to craft the more detailed administrative procedures necessary to implement the rules. However, in crafting administrative procedures, USAC is strictly prohibited under FCC rules from making policy, interpreting unclear provisions of the statute or rules, or interpreting the intent of Congress. We were told by FCC and USAC officials that USAC does not put procedures in place without some level of FCC approval. We were told that this approval is sometimes informal, such as e-mail exchanges or telephone conversations between FCC and USAC staff. This approval can come in more formal ways as well, such as when the commission expressly endorses USAC operating procedures in commission orders or codifies USAC procedures into FCC’s rules. However, two problems have arisen with USAC administrative procedures. First, although USAC is prohibited from making policy, some USAC procedures arguably rise to the level of policy decisions. Second, even though USAC procedures are issued with some degree of FCC approval, enforcement problems could arise when audits uncover violations of USAC procedures by beneficiaries or service providers.
The FCC IG has expressed concern over situations where USAC administrative procedures have not been formally codified because commission staff have stated that, in such situations, there is generally no legal basis to recover funds from applicants that failed to comply with the USAC administrative procedures.

Throughout the history of the program, USAC has found it necessary to create additional procedures to effectively and efficiently process more than 40,000 applications annually. However, these procedures sometimes deal with more than just ministerial details. For example, procedures that affect funding decisions arguably rise to the level of policy decisions. In June 2004, USAC was able to identify at least a dozen administrative procedures that, if violated by the applicant, would lead to complete or partial denial of the funding request even though there was no precisely corresponding FCC rule. The FCC IG stated in May 2004 in his Semiannual Report to Congress that he believes the distinction between FCC rules and USAC administrative procedures represents a weakness in program design, fails to give program participants a clear understanding of the rules and the consequences associated with rule violations, and complicates the design and implementation of effective program oversight.

The critical nature of USAC’s administrative procedures is further illustrated by FCC’s repeated codification of them throughout the history of the program. For example, in 1999, USAC implemented a procedure known as “the 30-percent policy.” This procedure sought to avoid blanket denials of funding requests because of minor errors in the eligibility of the services requested, while at the same time prompting applicants to prepare their applications carefully and make a conscientious effort to exclude ineligible items. If more than 30 percent of the services for which discounts were requested were ineligible, USAC denied the funding request rather than undertake the administratively burdensome task of correcting the request and recalculating the amount based only on the eligible services requested. In April 2003, in the commission’s Second Report and Order in its E-rate docket, FCC codified USAC’s 30-percent policy, stating that the commission found the procedure “improves program operation and is important in reducing the administrative costs of the program.”

In fact, the procedures put in place by USAC generally appear to be sensible and represent thoughtful administration of the E-rate program. Nonetheless, USAC is prohibited from making program rules. FCC’s codification of USAC procedures—after those procedures have been put in place and applied to program participants—raises concerns about whether these procedures are more than ministerial and are, in fact, policy changes that should be coming from FCC in the first place. Moreover, in its August 2004 order (in a section dealing with the resolution of audit findings), the commission directs USAC to annually “identify any USAC administrative procedures that should be codified in our rules to facilitate program oversight.” This process calls into question which entity is really establishing the rules of the E-rate program and raises concerns about the depth of involvement by FCC staff with the management of the program.

The other problem with USAC administrative procedures is the question of enforcement of those procedures through recovery of funds for procedural violations.
FCC has generally held that funds can be recovered from a beneficiary or service provider only if an FCC rule was violated. In its August 2004 order, after several years of E-rate audits by USAC and the FCC IG, the commission attempted to clarify the rules of the program with respect to the recovery of funds. In the order, the commission describes nine overall categories of statutory violations or FCC rule violations that would result in fund recovery being sought, in whole or in part, from beneficiaries or service providers. With respect to violations of USAC operating procedures, FCC said in its August 2004 order that it intends to evaluate whether there are USAC procedures that should be codified into the commission’s rules and whether violation of any of these codified procedures should also be a basis for recovery of funding. The commission noted that recovery of funds may not be appropriate for violations of procedural rules codified to enhance operations. Nevertheless, the commission stated that applicants will be required to comply with procedural rules and that applications that do not comply will be rejected. The commission noted, however, that if the codified procedural rule violation “is inadvertently overlooked during the application phase and funds are disbursed, the commission will not require that they be recovered, except to the extent that such rules are essential to the financial integrity of the program, as designated by the agency, or that circumstances suggest the possibility of waste, fraud, or abuse, which will be evaluated on a case-by-case basis.”

Thus, even under the August 2004 FCC order, the commission did not clearly address the treatment of beneficiaries who violate a USAC administrative procedure that has not been codified. This creates a potentially unfair situation when the procedure is one that can lead to denial of an application. That is, if violation of the procedure is caught in the application process, funding will be denied. However, if the violation slips by in the application process, funding is granted, and the violation is later caught during a beneficiary audit, no recovery of funding can be attempted since there was no actual rule violation by the beneficiary. Also, as noted earlier, the FCC order leaves to USAC the initial determination of which procedures should be codified rather than having FCC make that determination. Lastly, FCC did not establish a time frame for its review of USAC procedures.
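The asymmetry can be summarized in a simple decision sketch; the outcomes below paraphrase the August 2004 order and commission staff statements, and this is an illustration rather than a complete statement of FCC policy:

```python
# Simplified sketch of the enforcement gap described above. The logic
# paraphrases the August 2004 order and staff statements; it is an
# illustration, not the order's text or a complete statement of policy.

def application_decision(violates_fcc_rule: bool,
                         violates_uncodified_usac_procedure: bool) -> str:
    """Outcome if a violation is caught during application review."""
    if violates_fcc_rule or violates_uncodified_usac_procedure:
        return "deny funding request"
    return "fund"

def post_disbursement_decision(violates_fcc_rule: bool,
                               violates_uncodified_usac_procedure: bool) -> str:
    """Outcome if the same violation surfaces later, in a beneficiary audit."""
    if violates_fcc_rule:
        return "seek recovery of funds"
    if violates_uncodified_usac_procedure:
        # The asymmetry: no codified rule was broken, so there is
        # generally no legal basis to recover the disbursed funds.
        return "no recovery available"
    return "no action"

# Same violation, different outcomes depending on when it is caught:
print(application_decision(False, True))        # -> deny funding request
print(post_disbursement_decision(False, True))  # -> no recovery available
```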
FCC’s use of beneficiary audits as an oversight mechanism has also had weaknesses, although FCC and USAC are now working to address some of these weaknesses. In December 2000, we recommended that USAC establish a quality assurance function responsible for ensuring that its funding decisions adhere to FCC’s program eligibility rules. In response to our recommendation, USAC increased both its in-house audit staff and the number of beneficiary audits conducted by outside accounting firms. Since 2000, there have been 122 beneficiary audits conducted by outside firms, 57 by USAC staff, and 14 by the FCC IG (2 of which were performed under agreement with the Inspector General of the Department of the Interior). Beneficiary audits are the most robust mechanism available to the commission in the oversight of the E-rate program, yet FCC generally has been slow to respond to audit findings and has not made full use of the audit findings as a means to understand and resolve problems within the program.

First, audit findings can indicate that a beneficiary or service provider has violated existing E-rate program rules. In these cases, USAC or FCC can seek recovery of E-rate funds, if justified. In the FCC IG’s May 2004 Semiannual Report, however, the IG observes that audit findings are not being addressed in a timely manner and that, as a result, timely action is not being taken to recover inappropriately disbursed funds. The IG notes that in some cases the delay is caused by USAC and, in other cases, the delay is caused because USAC is not receiving timely guidance from the commission (USAC must seek guidance from the commission when an audit finding is not a clear violation of an FCC rule or when policy questions are raised). Regardless, the recovery of inappropriately disbursed funds is important to the integrity of the program and needs to occur in a timely fashion.

Second, under GAO’s Standards for Internal Control in the Federal Government, agencies are responsible for promptly reviewing and evaluating findings from audits, including taking action to correct a deficiency or taking advantage of the opportunity for improvement. Thus, if an audit shows a problem but no actual rule violation, FCC should be examining why the problem arose and determining if a rule change is needed to address the problem (or perhaps simply addressing the problem through a clarification to applicant instructions or forms). FCC has been slow, however, to use audit findings to make programmatic changes. For example, table 2 below shows audit findings from the 1998 program year that were only recently resolved by FCC’s August 2004 rulemaking.

As table 2 illustrates, audit findings related to the lack of record retention by beneficiaries were a problem. Given that the E-rate program operates similarly in some ways to a grant program, FCC should have had in place a record retention policy at the start of the program as a basic accountability measure, since record retention is fundamental to an audit trail. In fact, early in the program, FCC did create rules on beneficiary and service provider document retention, but the rules contained a potentially enormous loophole. Under FCC’s rules, program participants were required only to maintain “the kind of procurement records that they maintain for other purchases.” Thus, if a school or library had no record retention policy for other purchases, it did not need to retain records related to E-rate purchases. FCC proposed a more comprehensive record retention policy in December 2003 and released it for comment. In August 2004—7 years into the existence of the E-rate program—FCC adopted record retention rules that call for beneficiaries and service providers to retain E-rate program-related records for at least 5 years.

In its August 2004 order, the commission concluded that a standardized, uniform process for resolving audit findings was necessary, and directed USAC to submit to FCC a proposal for resolving audit findings. FCC also instructed USAC to specify deadlines in its proposal “to ensure audit findings are resolved in a timely manner.” USAC submitted its Proposed Audit Resolution Plan to FCC on October 28, 2004. The plan memorializes much of the current audit process and provides deadlines for the various stages of the audit process. FCC released the proposed audit plan for public comment in December 2004.
In addition to the Proposed Audit Resolution Plan, the commission instructed USAC to submit a report to FCC on a semiannual basis summarizing the status of all outstanding audit findings. The commission also stated that it expects USAC to identify for commission consideration on at least an annual basis all audit findings raising management concerns that are not addressed by existing FCC rules. Lastly, the commission took the unusual step of providing a limited delegation to the Wireline Competition Bureau (the bureau within FCC with the greatest share of the responsibility for managing the E-rate program) to address audit findings and to act on requests for waivers of rules warranting recovery of funds. These actions could help ensure, on a prospective basis, that audit findings are more thoroughly and quickly addressed. However, much still depends on timely action being taken by FCC, particularly if audit findings suggest the need for a rulemaking.

In addition to problems with responding to audit findings, the audits conducted to date have been of limited use because neither FCC nor USAC has conducted an audit using a statistical approach that would allow them to project the audit results to all E-rate beneficiaries. Thus, at present, no one involved with the E-rate program has a basis for making a definitive statement about the amount of waste, fraud, and abuse in the program. Of the various groups of beneficiary audits conducted to date, all were of insufficient size and design to analyze the amount of fraud or waste in the program or the number of times that any particular problem might be occurring programwide.

FCC’s IG and USAC are currently working to address this problem by following OMB’s guidance on the Improper Payments Information Act of 2002 (IPIA). IPIA requires that agencies annually estimate the amount of improper payments for programs and activities susceptible to significant improper payments. In response to IPIA, FCC and USAC are currently in the process of soliciting and evaluating responses to a Request for Proposals issued to procure the services of an independent auditor to conduct approximately 250 beneficiary audits in the E-rate program. We examined the methodology used by FCC’s IG and USAC for arriving at a sample size of 250, and it appears that they properly used OMB guidance under IPIA in determining the sample size. However, because the effort is still in the beginning stages, they were not able to provide additional information on the sample design, such as the method of sample selection, stratification criteria, and estimation methods. Sample design will be critical in determining the value of the information gained from the audits.

In addition, FCC IG officials estimated the cost at approximately $50,000 per audit. With an anticipated total cost of $12.5 million (250 audits at $50,000 per audit), this is an expensive effort. If the cost of the 250 audits varies by the size of the grant, the sample design could be optimized based on variable cost, which may either yield a tighter precision of the estimate of the amount of improper payments or reduce the total cost of the audit. It should also be noted that because this represents a sizable increase from prior audits, FCC may face an even greater challenge in resolving the audit findings in a timely manner.
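To illustrate the kind of cost-based optimization referred to above, the sketch below applies cost-weighted Neyman allocation, a standard survey-sampling technique, to hypothetical strata; it is not the sample design that FCC’s IG and USAC have adopted, and all stratum parameters are invented:

```python
# Sketch of cost-weighted (Neyman-type) allocation of 250 audits across
# strata defined, say, by grant size. Stratum population sizes, standard
# deviations of improper payments, and per-audit costs are hypothetical.

import math

strata = {
    # stratum: (population size N_h, std. dev. S_h, per-audit cost c_h)
    "large grants":  (500,    40_000, 80_000),
    "medium grants": (5_000,  10_000, 50_000),
    "small grants":  (30_000,  2_000, 30_000),
}

TOTAL_AUDITS = 250

# Cost-weighted optimal allocation: n_h proportional to N_h * S_h / sqrt(c_h).
# Strata that are large or highly variable get more audits; costlier strata
# get proportionally fewer for the same contribution to overall precision.
weights = {h: n * s / math.sqrt(c) for h, (n, s, c) in strata.items()}
total_weight = sum(weights.values())

for stratum, w in weights.items():
    n_h = round(TOTAL_AUDITS * w / total_weight)
    cost = n_h * strata[stratum][2]
    print(f"{stratum}: {n_h} audits (~${cost/1e6:.1f}M)")
# With these invented parameters, the allocation comes to roughly
# 28, 87, and 135 audits at a total cost near $10.6 million, below the
# $12.5 million implied by a flat $50,000 per audit.
```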
Lastly, we were told by USAC officials that they have recently contracted with a consulting firm to conduct approximately 1,000 site visits a year to program beneficiaries beginning in mid-January 2005. Although these are not audits, USAC testified in June 2004 that the site visits will allow USAC to assess more fully, in real time, how E-rate funds are being used, to learn about and publicize best practices in education technology and program compliance, and to help ensure that products and services have in fact been delivered and are being used effectively. For each visit, the selected vendor will, among other things, conduct a physical inspection of equipment and services purchased with E-rate funds. A checklist, outlining the steps for review, is to be followed for each visit to ensure consistency. The deliverables will include a formal report on each beneficiary visited, a monthly report on best practices observed and outreach suggestions, and immediate notification to USAC in instances where significant noncompliance is discovered.

Under FCC’s rules, program participants can seek review of USAC’s decisions, although FCC’s appeals process for the E-rate program has been slow in some cases. Because appeals decisions are used as precedent, this slowness adds uncertainty to the program and affects beneficiaries. FCC rules state that FCC is to decide appeals within 90 days, although FCC can extend this period. There is currently a substantial appeals backlog at FCC (i.e., appeals pending for longer than 90 days). Of the 1,865 appeals made to FCC from 1998 through the end of 2004, approximately 527 remain undecided; of these, approximately 458 (25 percent of all appeals filed) are backlog appeals.

Perhaps of most concern is the subset of appeals dealing with recovery of funding erroneously committed to schools and libraries. According to USAC, recovery has been slowed, in part, because FCC has not been timely in resolving these types of appeals from beneficiaries. In fact, through October 2004, of the approximately $36 million in E-rate funding for which USAC has brought recovery actions since the beginning of the program, only $3.2 million has been recovered and approximately $14.4 million is tied up in appeals with FCC. This is money that might be placed back into the E-rate program for disbursement to applicants.

We were told by FCC officials that some of the backlog is due to staffing issues. FCC officials said they do not have enough staff to handle appeals in a timely manner. FCC officials also noted that there has been frequent staff turnover within the E-rate program, adding some delay to appeals decisions because new staff necessarily take time to learn about the program and the issues. (See app. IV for additional information on FCC staffing levels in support of the E-rate program.) Additionally, we were told that another factor contributing to the backlog is that the appeals have become more complicated as the program has matured. For example, applicants are increasingly appealing decisions concerning eligible services. These appeals can be difficult to resolve because the technology needs of participants in the program can be complex. Lastly, some appeals may be tied up if the issue is currently in the rulemaking process.

The appeals backlog is of particular concern given that the E-rate program is a technology program. An applicant who appeals a funding denial and works through the process to achieve a reversal and funding 2 years later might have ultimately won funding for outdated technology.

FCC has not done enough to proactively manage and provide a framework of government accountability for the multibillion-dollar E-rate program.
FCC established an unusual structure for the E-rate program but has never conducted a comprehensive assessment of which federal requirements, policies, and practices apply to the program, to USAC, or to the Universal Service Fund. FCC has recently begun to address a few of these issues, concluding that the Universal Service Fund constitutes an appropriation and that the fund is subject to the Antideficiency Act. Nevertheless, fundamental issues affecting the E-rate program remain to be resolved. Resolving these issues in a comprehensive fashion is key to ensuring that FCC applies the appropriate government accountability standards and safeguards to the E-rate program and to the Universal Service Fund.

In managing the program, FCC has not developed specific and meaningful goals and measures to assess the impact of E-rate funding, address mission-critical management problems, and establish the direction of the program as schools and libraries move beyond initial Internet connectivity to long-term maintenance concerns. Moreover, FCC has consistently shifted many important responsibilities onto USAC, such as identifying which administrative procedures should be adopted as commission rules and handling resolutions of audit findings. Combined with the weaknesses in FCC’s oversight mechanisms, these problems create barriers to enforcement, uncertainty about what the program’s requirements really are, and questions about the soundness of the program’s structure and accountability amid recent cases of fraud, waste, and abuse. This mixture of E-rate problems—related both to the structure of the program and to FCC’s shortcomings in carrying out key E-rate management responsibilities—indicates the need for corrective actions by FCC.

Finally, regardless of the problems with the E-rate program, schools and libraries across the country use E-rate funds for their purchases of telecommunications services. Any reassessment of the program must take the needs of the beneficiaries into account. It is particularly important that efforts to protect the program from fraud, waste, and abuse do not result in a program that is excessively burdensome on program participants.

Given the critical importance of telecommunications technologies to schools and libraries, we recommend that the Chairman of the Federal Communications Commission direct FCC staff to take the following three actions:

1. Conduct and document a comprehensive assessment to determine whether all necessary government accountability requirements, policies, and practices have been applied and are fully in place to protect the program and the funding. The assessment should include, but not be limited to, (1) the implications of FCC’s determination that the Universal Service Fund constitutes an appropriation, identifying the fiscal controls that apply and do not apply to the Universal Service Fund, including the collection, deposit, obligation, and disbursement of funds; and (2) an evaluation of the legal authority for the organizational structure for carrying out the E-rate program, including the relationship between FCC and USAC and their respective authorities and roles in implementing the E-rate program. Because of the complexities posed by FCC’s arrangements with USAC and the questions that flow from these arrangements, FCC may want to request an advance decision from the Comptroller General under 31 U.S.C. § 3529.
Section 3529 provides the heads of agencies and certifying and disbursing officers of the government an opportunity to request decisions from the Comptroller General on matters of appropriations law in order to ensure compliance with fiscal law.

2. Establish performance goals and measures for the E-rate program that are consistent with the Government Performance and Results Act. FCC should use the resulting performance data to develop analyses of the actual impact of E-rate funding and to determine areas for improved program operations.

3. Develop a strategy for reducing the E-rate program’s appeals backlog, including ensuring that adequate staffing resources are devoted to E-rate appeals resolution.

We provided a draft of this report to FCC for review and comment. In its comments, which are reprinted in appendix V, FCC noted that it took a number of steps during 2004 to improve its management and oversight of the E-rate program. These included the adoption of new rules regarding the recovery of improperly disbursed funds; the implementation of new accounting requirements related to the Universal Service Fund; new efforts to deter waste, fraud, and abuse; and work with the FCC IG to develop a plan for conducting hundreds of additional beneficiary audits. FCC commented that it has strengthened its oversight and management of USAC through the establishment of a high-level working group to coordinate oversight and has adopted rules codifying certain USAC procedures. FCC also noted that it is currently evaluating USAC’s existing operations and administrative procedures to determine which should be codified into FCC rules.

FCC reaffirmed its belief that the current structure of USAC is consistent with congressional intent and guidance, adding that it nevertheless intends to consider whether to modify the manner in which the Universal Service Fund is administered. During the coming year, FCC anticipates examining whether and how to modify its existing administrative structure and processes as they apply to the E-rate program. FCC intends to consider other administrative structures and their implications, including those relying on contractual arrangements. Other actions under consideration include initiating a notice-and-comment rulemaking proceeding to assess the management of the E-rate program and the Universal Service Fund; retaining an outside contractor to evaluate the program and make recommendations for improving its administration; and requiring certain beneficiaries to obtain an independent audit of their compliance with FCC rules.

Regarding our recommendations, FCC officials told us they did not concur with our recommendation to conduct a comprehensive assessment concerning the applicability of government accountability requirements, policies, and practices. FCC maintains that it has conducted timely and extensive analysis of significant legal issues related to the status of the fund on a case-by-case basis, and provided examples. Although we recognize that FCC has engaged in internal deliberations and external consultations and analyses of a number of statutes, we do not believe this has been done in a timely manner or that it is appropriate to do so on a case-by-case basis. A definitive determination on the entire framework of laws that apply or do not apply to this program and to the Universal Service Fund itself would enable FCC to make proactive operational decisions on what steps it should take and what internal controls it should have in place.
As noted in our report, we continue to believe that major issues remain unresolved, such as defining the relationship between FCC and USAC and their respective authorities and roles in implementing the E-rate program, and identifying whether other actions taken in the universal service programs constitute obligations and ensuring that those are properly recorded. FCC officials told us that they concurred with our recommendations for establishing performance goals and measures and developing a strategy for reducing the backlog of appeals, noting that the commission is already taking steps to address these recommendations.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to interested congressional committees; the Chairman, FCC; the Chief Executive Officer, USAC; and other interested parties. We also will make copies available to others upon request. In addition, this report will be available at no cost on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Key contributors to this report are listed in appendix VI.

Our objectives were to review and evaluate (1) the effect of the current structure of the E-rate program on the Federal Communications Commission's (FCC) management of the program, (2) FCC's establishment and use of goals and performance measures in managing the program, and (3) the effectiveness of FCC's oversight mechanisms—rulemaking proceedings, beneficiary audits, and reviews of the Universal Service Administrative Company's (USAC) decisions (appeals)—in managing the program. To provide information on the effect of the current structure of the E-rate program, we reviewed provisions of the Telecommunications Act, as well as documents and records used by FCC to implement and administer the E-rate program. We also assessed the extent to which FCC had established managerial and financial government accountability standards, safeguards, and legal relationships for the E-rate program and the Universal Service Fund. Additionally, we interviewed officials from FCC's Wireline Competition Bureau, Office of General Counsel, Office of Managing Director, and Office of Inspector General. We also interviewed officials from the Office of Management and Budget (OMB) and USAC, the not-for-profit corporation that administers the E-rate program under FCC oversight.

To respond to the second objective on FCC's use of goals and performance measures in managing the program, we reviewed provisions of the Government Performance and Results Act of 1993, as well as documents and records used by FCC to establish goals and performance measures—budget justifications, performance plans, and strategic plans. We also reviewed OMB's Program Assessment Rating Tool that assessed FCC's performance goals and related measures for the E-rate program. In addition, we discussed this issue with officials from FCC's Wireline Competition Bureau, Office of Managing Director, Office of Strategic Planning and Policy Analysis, and Office of Inspector General. We also interviewed officials from the Office of Management and Budget and the Department of Education.
Finally, to evaluate FCC’s oversight mechanisms for managing the program, we reviewed relevant documents relating to all three oversight mechanisms: (1) rulemaking proceedings, (2) beneficiary audits, and (3) fact-specific adjudicatory decisions (i.e., appeals decisions). Specifically, we reviewed FCC orders and provisions of the Code of Federal Regulations, which sets forth FCC’s rulemaking process. In addition, we reviewed relevant USAC documents and policies, including its procedures that are in place to aid in the administration of the program. To assess FCC’s oversight mechanism of auditing, we reviewed the FCC Inspector General’s (IG) Semi-Annual Reports to Congress, GAO’s Standards for Internal Controls in the Federal Government, recent FCC orders, and beneficiary audits used to assess program compliance. Our statistician also examined the methodology (based on interviews with and documentation provided by FCC and USAC) that the FCC IG and USAC have proposed for the next round of beneficiary audits. To gain an understanding of how FCC manages appeals, we reviewed relevant documents and gathered data from FCC and USAC regarding the number of outstanding appeals and USAC recovery actions tied up in FCC appeals. To assess the reliability of the FCC appeals data and USAC recovery actions tied up in FCC appeals, we (1) reviewed related documentation, (2) conducted electronic testing of the source databases, and (3) interviewed knowledgeable agency officials about the quality of the data. We found that one database was limited in producing reports that track historical trends. However, this limitation was minor in the context of our engagement. As a result, we determined that the data were sufficiently reliable for the purposes of this report. Finally, we discussed this issue with officials from FCC’s Wireline Competition Bureau, Office of General Counsel, Office of Managing Director, Office of Inspector General, and USAC. We also reviewed internal memorandums provided by FCC’s Office of General Counsel to determine how FCC has applied federal requirements, policies, and practices to the E-rate program and to the Universal Service Fund. We interviewed FCC officials to obtain their views concerning whether monies in the Universal Service Fund should be treated as federal funds and the effect of using government accounting standards on the fund. Funding commitments since the inception of the program, the number of USAC appeals, and USAC recoveries tied up in appeals to USAC were used only as background information in the report to provide context for our findings; therefore, the data were not verified for data reliability purposes. However, to assess the reliability of funding for which USAC has brought recovery actions, we (1) reviewed related documentation, (2) conducted electronic testing of the source databases, and (3) interviewed knowledgeable agency officials about the quality of the data. As a result, we determined that the data were sufficiently reliable for the purposes of this report. We also determined that other relevant documents and records that we gathered were sufficiently reliable for the purposes of our review. Our review was performed from December 2003 through December 2004 in accordance with generally accepted government auditing standards. There have been questions from the start of the E-rate program regarding the nature of the Universal Service Fund (USF) and the applicability of managerial, fiscal, and financial accountability requirements to USF. 
FCC has never clearly determined the nature of USF, and the Office of Management and Budget (OMB), the Congressional Budget Office (CBO), and GAO have at various times noted that USF has not been recognized or treated as federal funds for several purposes. However, FCC has never confronted or assessed these issues in a comprehensive fashion and has only recently begun to address a few of them. In particular, FCC has recently concluded that, as a permanent indefinite appropriation, USF is subject to the Antideficiency Act and that its funding commitment decision letters constitute obligations for purposes of the Antideficiency Act. As explained below, we agree with FCC's determination. However, FCC's conclusions concerning the status of USF raise further issues related to the collection, deposit, obligation, and disbursement of those funds—issues that FCC needs to explore and resolve.

Universal service has been a basic goal of telecommunications regulation since the 1950s, when FCC focused on increasing the availability of reasonably priced, basic telephone service. See Texas Office of Public Utility Counsel v. FCC, 183 F.3d 393, 405-406 (5th Cir. 1999), cert. denied sub nom. Celpage, Inc. v. FCC, 530 U.S. 1210 (2000). FCC has not relied solely on market forces, but has used a combination of explicit and implicit subsidies to achieve this goal. Id. Prior to 1983, FCC used the regulation of AT&T's internal rate structure to garner funds to support universal service. With the breakup of AT&T in 1983, FCC established a Universal Service Fund administered by the National Exchange Carrier Association (NECA). NECA is an association of incumbent local telephone companies, also established at the direction of FCC. Among other things, NECA was to administer universal service through interstate access tariffs and the revenue distribution process for the nation's local telephone companies. At that time, NECA, a nongovernmental entity, privately maintained the Universal Service Fund outside the U.S. Treasury.

Section 254 of the Telecommunications Act of 1996 codified the concept of universal service and expanded it to include support for the acquisition by schools and libraries of telecommunications and Internet services. Pub. L. No. 104-104, § 254, 110 Stat. 56 (1996) (codified at 47 U.S.C. § 254). The act defines universal service, generally, as a level of telecommunications services that FCC establishes periodically after taking into account various considerations, including the extent to which telecommunications services are essential to education, public health, and public safety. 47 U.S.C. § 254(c)(1). The act also requires that "every telecommunications carrier that provides interstate telecommunications services shall contribute . . . to the specific, predictable, and sufficient mechanisms" established by FCC "to preserve and advance universal service." Id. § 254(d). The act did not specify how FCC was to administer the E-rate program, but required FCC, acting on the recommendations of the Federal-State Joint Board, to define universal service and develop specific, predictable, and equitable support mechanisms. FCC designated the Universal Service Administrative Company (USAC), a nonprofit corporation that is a wholly owned subsidiary of NECA, as the administrator of the universal service mechanisms. USAC administers the program pursuant to FCC orders, rules, and directives.
As part of its duties, USAC collects the carriers' universal service contributions, which constitute the Universal Service Fund, and deposits them in a private bank account under USAC's control and in USAC's name. FCC has directed the use of USF to, among other things, subsidize advanced telecommunications services for schools and libraries in a program commonly referred to as the E-rate program. Pursuant to the E-rate program, eligible schools and libraries can apply annually to receive support and can spend the funding on specific eligible services and equipment, including telephone services, Internet access services, and the installation of internal wiring and other related items. Generally, FCC orders, rules, and directives, as well as procedures developed by USAC, establish the program's criteria. USAC carries out the program's day-to-day operations, such as answering inquiries from schools and libraries; processing and reviewing applications; making funding commitment decisions and issuing funding commitment decision letters; and collecting, managing, investing, and disbursing E-rate funds.

Eligible schools and libraries may apply annually to receive E-rate support. The program places schools and libraries into various discount categories, based on indicators of need. As a result of the application of the discount rate to the cost of the service, the school or library pays a percentage of the cost of the service and the E-rate program covers the remainder. E-rate discounts range from 20 percent to 90 percent; a school with a 90 percent discount, for example, would pay 10 percent of the cost of an eligible service, with the program covering the rest. Once the school or library has complied with the program's requirements and entered into agreements with vendors for eligible services, the school or library must file a form with USAC noting the types and costs of the services being contracted for, the vendors providing the services, and the amount of discount being requested. USAC reviews the forms and issues funding commitment decision letters. The funding commitment decision letters notify the applicants of the decisions regarding their E-rate discounts. These letters also notify the applicants that USAC will send the information on the approved E-rate discounts to the providers so that "preparations can be made to begin implementing . . . E-rate discount(s) upon the filing of . . . Form 486." The applicant files FCC Form 486 to notify USAC that services have started and that USAC can pay service provider invoices. Generally, the service provider seeks reimbursement from USAC for the discounted portion of the service, although the school or library also could pay the service provider in full and then seek reimbursement from USAC for the discount portion.

The precise phrasing of the questions regarding the nature of USF has varied over the years, including whether the monies are federal funds, appropriated funds, or public funds and, if so, for what purposes. While the various fiscal statutes may use these different terms to describe the status of funds, we think the fundamental issue is what statutory controls involving the collection, deposit, obligation, and disbursement of funds apply to USF. As explained below, funds that are appropriated funds are subject, unless specifically exempted by law, to a variety of statutory provisions providing a scheme of funds controls. See B-257525, Nov. 30, 1994; 63 Comp. Gen. 31 (1983); 35 Comp. Gen. 436 (1956); B-204078.2, May 6, 1988.
On the other hand, funds that are not appropriated funds are not subject to such controls unless the law specifically applies such controls. Thus, we believe the initial question is whether USF funds are appropriated funds. FCC has concluded that USF constitutes a permanent indefinite appropriation. We agree with FCC's conclusion.

Typical appropriation language identifies a fund or account as an appropriation and authorizes an agency to enter into obligations and make disbursements out of available funds. For example, Congress utilizes such language in the annual appropriations acts. See 1 U.S.C. § 105 (requiring regular annual appropriations acts to bear the title "An Act making appropriations. . ."). Congress, however, appropriates funds in a variety of ways other than in regular annual appropriation acts. Indeed, our decisions and those of the courts so recognize. Thus, a statute that contains a specific direction to pay, and a designation of funds to be used, constitutes an appropriation. 63 Comp. Gen. 331 (1984); 13 Comp. Gen. 77 (1933). In these statutes, Congress (1) authorizes the collection of fees and their deposit into a particular fund, and (2) makes the fund available for expenditure for a specified purpose without further action by Congress. This authority to obligate or expend collections without further congressional action constitutes a continuing appropriation or a permanent appropriation of the collections. E.g., United Biscuit Co. v. Wirtz, 359 F.2d 206, 212 (D.C. Cir. 1965), cert. denied, 384 U.S. 971 (1966); 69 Comp. Gen. 260, 262 (1990); 73 Comp. Gen. 321 (1994). Our decisions are replete with examples of permanent appropriations, such as revolving funds and various special deposit funds, including mobile home inspection fees collected by the Secretary of Housing and Urban Development, licensing revenues received by the Commission on the Bicentennial, tolls and other receipts deposited in the Panama Canal Revolving Fund, user fees collected by the Saint Lawrence Seaway Development Corporation, user fees collected from tobacco producers to provide tobacco inspection, certification, and other services, and user fees collected from firms using the Department of Agriculture's meat grading services.

It is not essential for Congress to expressly designate a fund as an appropriation or to use the literal language of "appropriation," so long as Congress authorizes the expenditure of fees or receipts collected and deposited to a specific account or fund. In cases where Congress does not intend these types of collections or funds to be considered "appropriated funds," it explicitly states so in law. See, e.g., 12 U.S.C. § 244 (the Federal Reserve Board levies assessments on its member banks to pay for its expenses, and "funds derived from such assessments shall not be construed to be government funds or appropriated moneys"); 12 U.S.C. § 1422b(c) (the Federal Housing Finance Board levies assessments upon the Federal Home Loan Banks and collects funds from other sources to pay its expenses, but such funds "shall not be construed to be government funds or appropriated monies, or subject to apportionment for the purposes of chapter 15 of title 31, or any other authority"). Like the above examples, USF involves a statutorily authorized collection of fees from telecommunications carriers and the expenditure of those fees for a specified purpose—that is, the various types of universal service. Thus, USF meets both elements of the definition of a permanent appropriation.
We recognize that prior to the passage of the Telecommunications Act of 1996, there existed an administratively sanctioned universal service fund. With the Telecommunications Act of 1996, Congress specifically expanded the contribution base of the fund, statutorily mandated contributions into the fund, and designated the purposes for which the monies could be expended. These congressional actions established USF in a manner that meets the elements of a permanent appropriation, and Congress did not specify that USF should be considered anything other than an appropriation.

Appropriated funds are subject to a variety of statutory controls and restrictions. These controls and restrictions, among other things, limit the purposes for which the funds may be used and provide a scheme of funds control. See, e.g., 63 Comp. Gen. 110 (1983); B-257525, Nov. 30, 1994; B-228777, Aug. 26, 1988; B-223857, Feb. 27, 1987; 35 Comp. Gen. 436 (1956). A key component of this scheme of funds control is the Antideficiency Act. B-223857, Feb. 27, 1987. The Antideficiency Act has been termed "the cornerstone of congressional efforts to bind the executive branch of government to the limits on expenditure of appropriated funds." The primary purpose of the Antideficiency Act is to prevent the obligation and expenditure of funds in excess of the amounts available in an appropriation or in advance of the appropriation of funds. 31 U.S.C. § 1341(a)(1). FCC has determined that the Antideficiency Act applies to USF, and as explained below, we agree with FCC's conclusion.

The Antideficiency Act applies to an "officer or employee of the United States Government . . . mak[ing] or authoriz[ing] an expenditure or obligation . . . from an appropriation or fund." 31 U.S.C. § 1341(a). As established above, USF is an "appropriation or fund." The fact that USAC, a private entity whose employees are not federal officers or employees, is the administrator of the E-rate program and obligates and disburses funds from USF is not dispositive of the application of the Antideficiency Act. This is because, as FCC recognizes, it is FCC, not USAC, that is legally responsible for the management and oversight of the E-rate program, and FCC's employees are federal officers and employees of the United States subject to the Antideficiency Act. Where entities operate with funds that are regarded as appropriated funds, such as some government corporations, they, too, are subject to the Antideficiency Act. See, e.g., B-223857, Feb. 27, 1987 (funds available to Commodity Credit Corporation pursuant to borrowing authority are subject to Antideficiency Act); B-135075-O.M., Feb. 14, 1975 (Inter-American Foundation). The Antideficiency Act applies to permanent appropriations such as revolving funds and special funds. 72 Comp. Gen. 59 (1992) (Corps of Engineers Civil Works Revolving Fund subject to Antideficiency Act); B-120480, Sept. 6, 1967, B-247348, June 22, 1992, and B-260606, July 25, 1997 (GPO revolving funds subject to Antideficiency Act); 71 Comp. Gen. 224 (1992) (special fund that receives fees, reimbursements, and advances for services available to finance its operations is subject to Antideficiency Act). Where Congress intends for appropriated funds to be exempt from the application of statutory controls on the use of appropriations, including the Antideficiency Act, it does so expressly. See, e.g., B-193573, Jan. 8, 1979; B-193573, Dec. 19, 1979; B-217578, Oct.
16, 1986 (Saint Lawrence Seaway Development Corporation has express statutory authority to determine the character and necessity of its obligations and is therefore exempt from many of the restrictions on the use of appropriated funds that would otherwise apply); B-197742, Aug. 1, 1986 (Price-Anderson Act expressly exempts the Nuclear Regulatory Commission from the Antideficiency Act prohibition against obligations or expenditures in advance or in excess of appropriations). There is no such exemption for FCC or USF from the prohibitions of the Antideficiency Act. Thus, USF is subject to the Antideficiency Act.

An important issue that arises from the application of the Antideficiency Act to USF is what actions constitute obligations chargeable against the fund. Understanding the concept of an obligation and properly recording obligations are important because an obligation serves as the basis for the scheme of funds control that Congress envisioned when it enacted fiscal laws such as the Antideficiency Act. B-300480, Apr. 9, 2003. For USF's schools and libraries program, one of the main questions is whether the funding commitment decision letters issued to schools and libraries are properly regarded as obligations. FCC has determined that funding commitment decision letters constitute obligations, and again, as explained below, we agree with FCC's determination.

Under the Antideficiency Act, an agency may not incur an obligation in excess of the amount available to it in an appropriation or fund. 31 U.S.C. § 1341(a). Thus, proper recording of obligations with respect to their timing and amount permits compliance with the Antideficiency Act by ensuring that agencies have adequate budget authority to cover all of their obligations. B-300480, Apr. 9, 2003. We have defined an "obligation" as a "definite commitment that creates a legal liability of the government for the payment of goods and services ordered or received." Id. A legal liability is generally any duty, obligation, or responsibility established by a statute, regulation, or court decision, or one that the agency has agreed to assume in an interagency agreement, settlement agreement, or similar legally binding document. Id., citing Black's Law Dictionary 925 (7th ed. 1999). The definition of "obligation" also extends to "[a] legal duty on the part of the United States which constitutes a legal liability or which could mature into a legal liability by virtue of actions on the part of the other party beyond the control of the United States. . . ." Id., citing 42 Comp. Gen. 733 (1963); see also McDonnell Douglas Corp. v. United States, 37 Fed. Cl. 295, 301 (1997).

The funding commitment decision letters provided to applicant schools and libraries notify them of the decisions regarding their E-rate discounts; in other words, the letters tell applicants whether their funding is approved and in what amounts. The funding commitment decision letters also notify schools and libraries that the information on the approved E-rate discounts is sent to the providers so that "preparations can be made to begin implementing . . . E-rate discount(s) upon the filing of . . . Form 486." The applicant files FCC Form 486 to notify USAC that services have started and that USAC can pay service provider invoices. At the time a school or library receives a funding commitment decision letter, FCC has taken an action that accepts a "legal duty . . .
which could mature into a legal liability by virtue of actions on the part of the grantee beyond the control of the United States." Id., citing 42 Comp. Gen. 733, 734 (1963). In this instance, the funding commitment decision letter provides the school or library with the authority to obtain services from a provider, with the commitment that it will receive a discount and that the provider will be reimbursed for the discount provided. While the school or library could decide not to seek the services or the discount, so long as the funding commitment decision letter remains valid and outstanding, USAC and FCC no longer control USF's liability; it depends on the actions of the other party—that is, the school or library. In our view, a recordable USF obligation is incurred at the time of issuance of the funding commitment decision letter indicating approval of the applicant's discount. Thus, these obligations should be recorded in the amounts approved by the funding commitment decision letters. If, at a later date, a particular applicant uses an amount less than the maximum or rejects the funding, then the obligation amount can be adjusted or deobligated, respectively.

Additional issues that remain to be resolved by FCC include whether other actions taken in the universal service program constitute obligations, as well as the timing and amounts of the obligations that must be recorded. For example, this includes the projections and data submissions by USAC to FCC and by participants in the High Cost and Low Income Support Mechanisms to USAC. FCC has indicated that it is considering this issue and consulting with the Office of Management and Budget. FCC should also identify any other actions that may constitute recordable obligations and ensure that those are properly recorded.

Various policies to promote universal service—providing residential customers with affordable, nationwide access to basic telephone service—have generally been in place since the 1950s. Congress codified and made significant changes to universal service policy in the Telecommunications Act of 1996. However, Congress did not prescribe a structure for administering the universal service programs and instead called for a Federal-State Joint Board on Universal Service (Joint Board) to make recommendations to FCC. At the time of the act, the National Exchange Carrier Association (NECA) was responsible for administering the existing universal service mechanisms providing support for high-cost areas and low-income individuals. NECA is an association of incumbent local telephone companies that was established at FCC's direction in 1983 (in anticipation of the breakup of the Bell System) to administer interstate access tariffs and the revenue distribution process for the nation's nearly 1,000 local telephone companies. In November 1996, the Joint Board recommended that, in the interest of quickly providing services to schools, libraries, and health care providers, FCC appoint NECA as the temporary administrator of universal service to these groups, subject to changes in NECA's governance to make NECA more representative of the telecommunications industry as a whole. Under the Joint Board's recommendation, NECA would continue in this role until a permanent administrator was appointed. The Joint Board recommended that FCC establish an advisory board to select and oversee a neutral third-party administrator for all universal service programs and suggested criteria to be used in that selection.
The Joint Board further recommended that FCC allow NECA to change its membership and governance in a manner that would allow it to compete for the role of permanent administrator in the advisory board's selection process. On the basis of the Joint Board's recommendations, FCC agreed in a May 1997 order to appoint NECA as the temporary administrator, subject to changes in NECA's governance. It also agreed to create a federal advisory committee, whose sole responsibility would be to recommend an administrator, and directed that the administrator should select a contractor to manage the application process for schools and libraries. NECA determined, however, that it might not be possible to develop a board structure satisfactory for bidding on the permanent administrator role. Thus, in January 1997, NECA proposed to FCC that it be allowed to establish a separate subsidiary to administer universal service. In July 1997, FCC issued an order directing NECA to create two independent nonprofit corporations—one to administer the program for schools and libraries (the Schools and Libraries Corporation) and one to administer the program for rural health care providers (the Rural Health Care Corporation). FCC's order further specified that these corporations would continue to administer the programs even after the appointment of a permanent administrator. To carry out billing, collecting, and disbursement activities for these programs, FCC directed NECA to create a nonprofit subsidiary. FCC further directed that the subsidiary create a special committee of its board of directors to administer the universal service programs for high-cost areas and low-income individuals. NECA created the Universal Service Administrative Company (USAC) as this subsidiary.

In November 1998, FCC changed the universal service structure in response to legal concerns about FCC's authority to create the two independent corporations and Congress's directive that a single entity administer universal service support. FCC appointed an existing body, USAC, as the permanent administrator of the program and directed the Schools and Libraries Corporation and the Rural Health Care Corporation to merge with USAC by January 1, 1999. Under this merger, the staff of the Schools and Libraries Corporation became part of a new Schools and Libraries Division (SLD) within USAC, carrying out essentially the same functions as before, such as processing and reviewing E-rate applications. However, SLD contracts out most of its billing, collecting, and disbursement activities to USAC. In addition, in 2000 NECA formed an unaffiliated, for-profit corporation, NECA Services Inc., to pursue new business opportunities. USAC later contracted most of its application processing, client support, and review functions to NECA Services Inc. See figure 1.

The following are GAO's comments on the Federal Communications Commission's letter dated January 14, 2005.

1. As stated in our report, we have not addressed FCC's authority to establish the current organizational structure. We recognize that FCC has reported to Congress on its implementation of the current organizational structure and that it believes the structure is consistent with congressional intent and conforms to congressional guidance.
However, at the time this structure was established by FCC, numerous issues, such as the status of the Universal Service Fund as federal funds—specifically a permanent indefinite appropriation—and the applicability of fiscal statutes such as the Antideficiency Act, had not been resolved. It is critical to the management of federal funds that the funds be properly collected, deposited, obligated, and expended by authorized parties in accordance with the determinations regarding the status of the funds. Thus, we believe FCC should consider whether the current organizational structure and the roles and responsibilities of FCC and USAC are consistent with law and comply with fiscal and accountability requirements for federal funds. FCC states that it intends to consider whether to modify the manner in which the Universal Service Fund is administered, including possible changes to the underlying administrative structure. We believe this would be a positive step toward carrying out our recommendation.

2. FCC states that it has undertaken a timely and extensive analysis of the significant legal issues related to the status of the Universal Service Fund and has generally done so on a case-by-case basis. We recognize that FCC has engaged in internal deliberations, external consultations, and analysis of a number of statutes. However, we do not believe this has been done in a timely manner or that it is appropriate to do so on a case-by-case basis. Addressing the applicability of the statutes on a case-by-case basis, as issues have arisen, has put FCC and the program in the position of reacting to problems as they occur rather than setting up an organization and internal controls designed to ensure compliance with applicable laws. The laws encompassing fiscal and accountability controls are not applied in isolation; rather, they are part of a framework that addresses issues of financial and general management of federal agencies and programs. The E-rate program was established more than 7 years ago, yet FCC is still analyzing whether certain statutes or requirements apply to the program and what actions it must take to implement those statutes and ensure compliance with them. The recent issues involving the Antideficiency Act best illustrate the problem with this case-by-case approach. As explained in our report, it was not until the fall of 2004 that the applicability and consequences of the Antideficiency Act were resolved. Moreover, this was not the first time issues regarding the Antideficiency Act had been raised. In July 1998, a question had been raised regarding USAC's authority to borrow funds commercially. At that time, USAC was instructed to refrain from commercial borrowing while FCC examined the applicability of the Antideficiency Act to USAC's operations. While FCC determined in 1998 that USAC should not borrow commercially, the question of whether the Antideficiency Act had other consequences for the E-rate program was not addressed. Had FCC taken a comprehensive approach to the application of fiscal and accountability statutes such as the Antideficiency Act when the program was created or soon thereafter, FCC would have been in a position to determine what steps it should have taken and what internal controls it should have had in place to ensure compliance with those statutes.
For example, with respect to the Antideficiency Act, FCC could have determined whether actions it was taking were obligations that needed to be recorded and, if so, made any necessary changes to the program to ensure that it had sufficient amounts in the Universal Service Fund to cover those obligations. Furthermore, while certain determinations may have been made internally, they have neither been analyzed nor definitively determined in FCC's orders on the E-rate program. In addition, USAC has not always received instruction on how to carry out all of these requirements. For example, as noted in our report, in its October 2003 order applying GovGAAP to the Universal Service Fund, FCC stated that "the Funds may be subject to a number of federal financial and reporting statutes" (emphasis added) and to "relevant portions of the Federal Financial Management Improvement Act of 1996," but did not specify which statutes or which portions were relevant, or further analyze their applicability.

3. In our report, we list several examples of fiscal control and accountability statutes. FCC states in its letter that it has already made a determination of each statute's applicability to the Universal Service Fund. We agree that FCC has made a determination involving the applicability of the Improper Payments Information Act, and we therefore deleted our references to this act. We recognize that FCC has consulted with other agencies such as OMB and Treasury regarding the applicability of the Miscellaneous Receipts Act, the Single Audit Act, and the Cash Management Improvement Act. However, we believe that where FCC has determined that fiscal controls and policies do not apply, the commission should reconsider these determinations in light of the status of universal service monies as federal funds. Such a reconsideration is particularly important in the case of the Miscellaneous Receipts Act, where OMB and FCC determined in 2000 that the act did not apply because the funds were not public monies for the use of the United States. Our recommendation focuses on a proactive, comprehensive analysis and determination of legal requirements rather than a continued approach of reactive, case-by-case determinations. A definitive determination on the entire framework of laws that apply or do not apply to this program would enable FCC to make operational decisions on what steps it should take and what internal controls it should have in place to ensure compliance with applicable laws.

4. As stated in our report, due to the complexities posed by these issues, GAO remains available to provide an advance decision to FCC under 31 U.S.C. § 3529.

5. Our report does not note that "FCC had established some performance measures, but determined that it needed to establish better and more comprehensive ways of measuring E-rate performance." It also does not note that the reason FCC stopped using the number of public schools connected to the Internet was that it was no longer a useful measure of the program. Our report states that prior to fiscal year 2000, FCC had no specific goals and measures for the program; that for fiscal years 2000 through 2002, the goals and measures set by FCC were not useful for assessing the impact of E-rate program funding because the measures used did not directly measure the impact of E-rate funding; and that since fiscal year 2002 there have been no E-rate performance goals and measures at all.
In its letter, FCC states that it is actively working to re-establish performance goals and measures that are consistent with the Government Performance and Results Act. Our finding is that FCC never established E-rate goals and measures that were consistent with the act in the first place, despite our recommendation in 1998 (reiterated in 1999) to do so. In a multibillion-dollar program now entering its eighth funding year, this is a serious management deficiency. In its letter, FCC notes that it needs to seek comment from stakeholders regarding performance measures. GAO's guidance on implementing the Results Act supports this approach: stakeholder involvement in defining goals is particularly important in a political environment, and the involvement of Congress is indispensable. While we understand the time involved in crafting useful performance goals and measures and in complying with the notice-and-comment requirements of the Administrative Procedure Act, we urge FCC to move as quickly as possible in its efforts.

6. Our draft report included appeals numbers that were different from those in FCC's letter. It appears that our numbers included waiver requests as well as appeals. We have changed our report to reflect the numbers included in FCC's letter, which, according to FCC, are current as of January 1, 2005. This numerical difference does not reflect any material change.

7. We are encouraged that FCC has begun redirecting staff to, and hiring additional attorneys for, Universal Service Fund oversight and program management, including the resolution of E-rate appeals. It is a particularly positive step that FCC has established a measurable goal of resolving all backlogged E-rate appeals by the end of calendar year 2005.

In addition to those named above, Carol Anderson-Guthrie, Andy Clinton, Derrick Collins, Sandra DePaulis, Edda Emmanuelli-Perez, Chad Factor, Moses Garcia, Lynn Gibson, Karen O'Conor, Mindi Weisenbloom, and Alwynne Wilbur made key contributions to this report.
Since 1998, the Federal Communications Commission's (FCC) E-rate program has committed more than $13 billion to help schools and libraries acquire Internet and telecommunications services. Recently, however, allegations of fraud, waste, and abuse by some E-rate program participants have come to light. As steward of the program, FCC must ensure that participants use E-rate funds appropriately and that there is managerial and financial accountability surrounding the funds. GAO reviewed (1) the effect of the current structure of the E-rate program on FCC's management of the program, (2) FCC's development and use of E-rate performance goals and measures, and (3) the effectiveness of FCC's oversight mechanisms in managing the program.

FCC established the E-rate program using an organizational structure unusual to the government, without conducting a comprehensive assessment to determine which federal requirements, policies, and practices apply to it. The E-rate program is administered by a private, not-for-profit corporation with no contract or memorandum of understanding with FCC, and program funds are maintained outside of the U.S. Treasury, raising issues related to the collection, deposit, obligation, and disbursement of the funding. While FCC recently concluded that the Universal Service Fund constitutes an appropriation and is subject to the Antideficiency Act, this conclusion raises further issues concerning the applicability of other fiscal control and accountability statutes. These issues need to be explored and resolved comprehensively to ensure that appropriate governmental accountability standards are fully in place to help protect the program and the fund from fraud, waste, and abuse.

FCC has not developed useful performance goals and measures for assessing and managing the E-rate program. The goals established for fiscal years 2000 through 2002 focused on the percentage of public schools connected to the Internet, but the data used to measure performance did not isolate the impact of E-rate funding from other sources of funding, such as state and local government. A key unanswered question, therefore, is the extent to which increases in connectivity can be attributed to E-rate. In addition, goals for improving E-rate program management have not been a feature of FCC's performance plans. In its 2003 assessment of the program, OMB noted that FCC discontinued E-rate performance measures after fiscal year 2002 and concluded that there was no way to tell whether the program had resulted in the cost-effective deployment and use of advanced telecommunications services for schools and libraries. In response to OMB's concerns, FCC is currently working on developing new E-rate goals.

FCC's oversight mechanisms contain weaknesses that limit FCC's management of the program and its ability to understand the scope of any fraud, waste, and abuse within the program. According to FCC officials, oversight of the program is primarily handled through agency rulemaking procedures, beneficiary audits, and appeals decisions. FCC's rulemakings have often lacked specificity and led to a distinction between FCC's rules and the procedures put in place by the program administrator—a distinction that has affected the recovery of funds for program violations. While audits of E-rate beneficiaries have been conducted, FCC has been slow to respond to audit findings and make full use of them to strengthen the program.
In addition, the small number of audits completed to date does not provide a basis for accurately assessing the level of fraud, waste, and abuse occurring in the program, although the program administrator is working to address this issue. According to FCC officials, there is also a substantial backlog of E-rate appeals, due in part to staff shortages and turnover. Because appeals decisions establish precedent, this slowness adds uncertainty to the program.
TSA is responsible for securing all modes of transportation while facilitating commerce and freedom of movement for the traveling public. In performing its responsibilities, TSA is guided by risk-based planning, which generally involves a consideration of threats, vulnerabilities, and the criticality or consequence of an attack if it were to be carried out. Specifically, in its approach to securing the domestic aviation sector, TSA maintains numerous programs that provide a layered approach to security, including intelligence gathering and analysis, checking passenger manifests against watch lists, and assigning undercover air marshals to certain flights. The general public associates TSA mainly with its security effort at airport passenger checkpoints. One primary goal of the passenger checkpoint screening program is to provide for the safety and security of persons and property on an aircraft against the introduction of an unauthorized weapon, explosive, or incendiary. As we reported in April 2007, TSA continues to modify its checkpoint screening program based on a number of factors, including passenger feedback, risk-based planning, and its own internal review and testing process. TSA's well-publicized recent policy change in response to the alleged transatlantic bomb plot of August 2006 is an important example of risk-based planning. Known as the 3-1-1 rule, this procedural change prohibits liquid, gel, or aerosol items over 3.4 fluid ounces in carry-on luggage; in addition, all liquids and gels must be placed in a 1-quart bag, and only one 1-quart bag is allowed per passenger.

TSA focuses on the checkpoint screening process as a primary means of detecting prohibited items. Items that TSA prohibits passengers from bringing aboard an aircraft include, among other things, firearms and knives; gasoline and lighter fluid; disabling chemicals, including chlorine and liquid bleach; and many additional items that may seem harmless but could be used as weapons. During the passenger screening process, transportation security officers follow standard operating procedures and utilize technology such as walk-through metal detectors and X-ray machines to detect prohibited items either on a passenger's person or in his or her carry-on luggage. The passenger checkpoint screening process is composed of the following three elements:

Transportation security officers (also known as TSOs) screen all passengers and their carry-on luggage prior to allowing passengers access to their departure gates. Among other responsibilities, transportation security officers attempt to detect prohibited items that passengers may try to carry beyond the security checkpoint.

Technology is used during the screening process, which primarily consists of walk-through metal detectors, X-ray machines, handheld metal detectors, and explosive trace detection (ETD) equipment.

Standard operating procedures establish the process and standards by which transportation security officers are to screen passengers and their carry-on items at screening checkpoints.

The process of screening a passenger who continues to alarm the walk-through metal detector provides an example of how these three elements intersect. According to TSA's Screening Checkpoint Standard Operating Procedures manual, a passenger who continues to alarm the walk-through metal detector must be screened using a hand-wand search. Passengers may alternatively request a full-body pat-down search.
The manual describes the process that transportation security officers are to follow during the additional screening, which includes the use of ETD swabbing and a pat-down of the passenger to detect any irregularities in the passenger's body contour that could represent concealed items. TSA faces a significant challenge in balancing security concerns with efficient passenger movement. In our April 2007 report, we described how TSA monitors transportation security officer compliance with passenger checkpoint screening procedures through its performance accountability and standards system and through testing. Compliance assessments include quarterly observations of transportation security officers' ability to perform particular screening functions in the operating environment, quarterly quizzes to assess their knowledge of procedures, and an annual knowledge and skills assessment. TSA conducts tests to evaluate, in part, the extent to which transportation security officers are able to detect simulated threat items hidden in accessible property or concealed on a person.

TSA modifies its standard operating procedures based on the professional judgment of TSA senior-level officials and program-level staff, the daily experiences of airport staff, complaints and concerns raised by the traveling public, and an analysis of risks to the aviation system. For example, in December 2005, TSA modified its prohibited items list to allow passengers to carry certain scissors and tools as long as they did not exceed a certain length. TSA's stated purpose in removing certain scissors and tools from the prohibited items list was to shift the focus of transportation security officers from items considered by TSA to pose a low threat to items considered to pose a high threat.

Investigators found instructions on the Internet for creating both an IED and an IID and purchased the components from the Internet and from a local store for approximately $150. The IED was conceived as a two-part device—a detonator component that, on its own, could function as an IED, and a mixture of fuel and oxidizer that would require the explosion of the detonator. Although the detonator component could be considered an IED on its own, for the purposes of this report, we refer to the combination of the detonator and the liquid explosive as a single IED. Information about liquid explosives was publicly available on several Web sites and discussed in media articles related to various terror plots, including the failed London subway bombing of July 21, 2005, and the transatlantic bomb plot of August 2006. In addition, we obtained information about creating an IID from the Internet. We also found videos on the Internet of the intense fire resulting from an IID. One of the components of the IID is a liquid that TSA prohibits passengers from bringing through security checkpoints. Specific details regarding the device components and the methods of concealment we used during our covert testing are classified by TSA; as such, they are not discussed in this testimony.

Tests conducted in February 2006 and July 2007 show that the IED proposed for this investigation functions as intended. In 2006, within the scope of our original covert testing report, we worked with a law enforcement organization in the Washington, D.C., metro area to confirm that the detonator would function as an IED. A test performed by local law enforcement officials confirmed that the detonator would cause damage to an aircraft and threaten the safety of passengers.
Because our proposed IED for this investigation was composed of two parts (the detonator and the liquid explosive), in July 2007 we sought assistance to confirm that this more complex IED would function as intended. Several tests conducted at a national laboratory demonstrated that this IED can function as intended, with the initial explosion by the detonator successfully causing the liquid explosive to detonate in several tests. Explosion data indicate that this device exploded with a force sufficient to cause severe damage to an aircraft. The IID is a far simpler device. Our work with a law enforcement organization in the Washington, D.C., metro area in February 2006 confirmed that the components of the IID (one of which is a liquid) could function as intended, causing damage to an aircraft and threatening the safety of passengers.

Our investigators devised methods that would allow them to conceal the prohibited components for these devices from transportation security officers. During this planning phase, they considered publicly advertised TSA policies related to liquids and other items, including prohibited items. They also judged that some components could be hidden either in their carry-on luggage or on their persons. They developed covert test procedures to challenge TSA screening measures using these components and methods. Specific details regarding the methods of concealment we used are classified by TSA; as such, these details are not discussed in this testimony.

By using various concealment methods, our investigators demonstrated that it is possible to bring the components for several functioning IEDs and one functioning IID through checkpoints and onto airline flights without being challenged by transportation security officers. In most cases, transportation security officers appeared to follow TSA procedures and used technology appropriately; however, we uncovered weaknesses in TSA screening procedures and other vulnerabilities as a result of these tests. For example, although transportation security officers generally enforced TSA's 3-1-1 rule, we were able to bring a liquid component of the IID undetected through checkpoints by taking advantage of weaknesses we identified in TSA's policies based on a review of public information. TSA determined that specific details regarding these weaknesses are sensitive security information; they are therefore not discussed in this testimony. We did not notice any difference between the performance of private screeners and transportation security officers during our tests.

From March 19 through March 23, 2007, two investigators tested the TSA checkpoint screening process at a number of U.S. airports. Transportation security officers did not interact with our investigators at every airport. Interactions that did occur included the following:

On March 19 and March 20, 2007, transportation security officers advised our investigators to use a 1-quart clear plastic bag rather than the larger bags they were using, but did not require them to do so before passing through the checkpoint.

At another airport, on March 23, 2007, a transportation security officer did not allow one investigator to bring a small, unlabeled bottle of medicated shampoo through the checkpoint. This was a legitimate toiletry item used by one of our investigators.
The officer cited TSA policy and stated that since the bottle was not labeled, "it could contain acid." However, a liquid component of the IID—despite being prohibited by TSA—was allowed to pass undetected through the same checkpoint. We had identified this weakness based on a review of public information before performing our tests.

From May 7 through May 9, 2007, two investigators tested the TSA checkpoint screening process at a number of U.S. airports. Transportation security officers did not interact with our investigators aside from the following:

On May 8, 2007, one investigator deliberately placed coins in his pockets to ensure that he would receive a secondary inspection. The transportation security officer used a hand-wand and performed a pat-down search of our investigator but did not detect any of the prohibited items our investigator brought through the checkpoint.

From June 5 through June 8, 2007, two investigators tested the TSA checkpoint screening process at a number of U.S. airports. Transportation security officers did not interact with our investigators at every airport. Interactions that did occur included the following:

Inclement weather forced our investigators to change their flight plans at one airport. After changing their plans, they were selected for secondary inspection at the TSA security checkpoint. Transportation security officers performed pat-downs at the checkpoint but did not detect any of the prohibited items our investigators brought through the checkpoint.

We briefed TSA officials on August 16, 2007, and September 5, 2007, to discuss our findings. Officials from TSA's Security Operations Office were present during our second briefing. At these briefings, we suggested that TSA consider how the results of our covert testing should affect its risk-based approach to airport security. This could include implementing one or more measures to reduce the likelihood that terrorists could successfully bring IED and IID components through checkpoints using a methodology similar to ours in the future. The specific nature of our suggestions to TSA is considered sensitive security information. Put generally, we suggested that, among other things, TSA (1) establish, depending on airport capacity, one or more special passenger screening lines to screen individuals based on risk and individuals with special needs; (2) introduce more aggressive, visible, and unpredictable deterrent measures into the passenger screening process at airports nationwide, potentially including enhanced individual search procedures (e.g., pat-downs and hand-wand screening) to detect concealed components; and (3) continue to develop and deploy new technology that would better detect concealed components at passenger screening checkpoints.

TSA officials indicated that they did not disagree with our suggestions in principle and that they would examine them closely to determine whether and how they should be implemented. They acknowledged vulnerabilities in human capital, processes, and technology. They also indicated that they are deploying additional specialized personnel to enhance security at existing checkpoints and that they are exploring methods for enhancing transportation security officer training and transforming the culture of their workforce.
Regarding standard operating procedures, officials said that they are continuously revisiting and revising their policies. They also indicated that they were moving forward to develop a "checkpoint of the future" that would incorporate new and emerging technology to address terror threats. Such technology could include innovative imaging techniques. Our tests clearly demonstrate that a terrorist group, using publicly available information and few resources, could cause severe damage to an airplane and threaten the safety of passengers by bringing prohibited IED and IID components through security checkpoints. Given our degree of success, we are confident that our investigators would have been able to evade transportation security officers at additional airports had we decided to test them. We understand the challenges TSA faces in balancing security risks with the efficient movement of passengers; however, from a strict security standpoint, current policies allowing substantial carry-on luggage and related items through TSA checkpoints increase the risk of a terrorist successfully bringing an IED, an IID, or both onto an aircraft undetected. Even if current carry-on luggage policies are left unchanged, our testing shows that risks can be reduced through improvements in human capital and processes and continued advances in technology. GAO is currently performing a more systematic review of these issues and expects to issue a comprehensive public report with recommendations for TSA in early 2008. Mr. Chairman and Members of the committee, this concludes our statement. We would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In August 2006, the Transportation Security Administration (TSA) substantially modified its passenger screening policies based on the alleged transatlantic bomb plot uncovered by British authorities. With the aim of closing security gaps revealed by the alleged plot, the revised policies severely restricted the amount of liquids, gels, and aerosols TSA allowed passengers to bring through the checkpoint. At the Committee's request, GAO tested whether security gaps exist in the passenger screening process. To perform this work, GAO attempted to (1) obtain the instructions and components needed to create devices that a terrorist might use to cause severe damage to an airplane and threaten the safety of passengers and (2) test whether GAO investigators could pass through airport security checkpoints undetected with all the components needed to create the devices. GAO conducted covert testing at a nonrepresentative selection of 19 airports across the country. After concluding its tests, GAO provided TSA with two timely briefings to help it take corrective action. In these briefings, GAO suggested that TSA consider several actions to improve its passenger screening program, including aspects of human capital, processes, and technology. GAO is currently performing a more systematic review of these issues and expects to issue a comprehensive public report with recommendations for TSA in early 2008. GAO investigators succeeded in passing through TSA security screening checkpoints undetected with components for several improvised explosive devices (IED) and an improvised incendiary device (IID) concealed in their carry-on luggage and on their persons. The components for these devices and the items used to conceal the components were commercially available. Specific details regarding the device components and the methods of concealment GAO used during its covert testing were classified by TSA; as such, they are not discussed in this testimony. Using publicly available information, GAO investigators identified two types of devices that a terrorist could use to cause severe damage to an airplane and threaten the safety of passengers. The first device was an IED made up of two parts--a liquid explosive and a low-yield detonator. Although the detonator itself could function as an IED, investigators determined that it could also be used to set off a liquid explosive and cause even more damage. In addition, the second device was an IID created by combining commonly available products (one of which is a liquid) that TSA prohibits in carry-on luggage. Investigators obtained the components for these devices at local stores and over the Internet for less than $150. Tests that GAO performed at a national laboratory in July 2007, in addition to prior tests in February 2006 that GAO performed in partnership with a law enforcement organization in the Washington, D.C., metro area, clearly demonstrated that a terrorist using these devices could cause severe damage to an airplane and threaten the safety of passengers. Investigators then devised methods to conceal the components for these devices from TSA transportation security officers, keeping in mind TSA policies related to liquids and other items, including prohibited items. By using concealment methods for the components, two GAO investigators demonstrated that it is possible to bring the components for several IEDs and one IID through TSA checkpoints and onto airline flights without being challenged by transportation security officers. 
In most cases, transportation security officers appeared to follow TSA procedures and used technology appropriately; however, GAO uncovered weaknesses in TSA screening procedures and other vulnerabilities as a result of these tests. For example, although transportation security officers generally enforced TSA's policies, investigators were able to bring a liquid component of the IID undetected through checkpoints by taking advantage of weaknesses in these policies that they had identified through a review of public information. TSA determined that specific details regarding these weaknesses are sensitive security information and are therefore not discussed in this testimony. GAO did not notice any difference between the performance of private screeners and transportation security officers during its tests.
The advanced information network is the heart of the Army's FCS concept and is intended to allow fielded FCS Brigade Combat Teams (BCT) to see the enemy first, understand the situation first, act first, and finish decisively. The FCS network management system to be deployed to the Army's BCT is envisioned to: (1) plan and manage multi-technology mobile tactical communication; (2) encompass satellite, aerial, and ground communication assets that provide multi-media voice, data, and video services to all elements of the FCS BCT; and (3) interface with terrestrial, aerial, and satellite assets of an Army division. If the FCS network works as intended, all commanders in the BCT and throughout areas of operations will have a common set of data that will allow for the synchronization of many BCT activities, including the integration of fire and maneuver; intelligence collection, fusion, and dissemination; and sustainment of the force. The Army envisions that the network architecture would also permit connectivity with other military services, thus allowing additional situational awareness and understanding, and synchronized operations that are unachievable by current systems. FCS-equipped BCTs are to have significant warfighting capabilities that differ substantially from those of the Army's large, division-centric force structure. The survival and combat effectiveness of FCS BCTs are critically dependent on the ability to see first, understand first, act first, and finish decisively. Through an advanced information network, the concept is to replace mass with superior information that will allow soldiers to see and hit the enemy first rather than to rely on heavy armor to withstand a hit. This new way of fighting depends solely on developing an information network that can successfully link the people, platforms, weapons, and sensors seamlessly together in a system of systems. It can be achieved only if data are available in near real-time at sensor processors, battle command nodes, and lethal systems. For example, FCS's survivability depends on the brigade-wide availability of network-based situational awareness plus the inherent survivability of the FCS platforms. Elements of the FCS information network will include the software and technology (applications, computers, and radios) that will link the people, platforms, weapons, and sensors together. These elements are expected to provide delivery of voice, data, video, still images, and network control services wirelessly over a mobile ad hoc network. In contrast to traditional wireless systems such as cellular phones that connect to a fixed station or permanent access point, FCS's ad hoc network will not have access to such an infrastructure. Thus, the quality of service—the capability to transport information across the network while satisfying communication performance requirements such as low delay, low loss, or high throughput—becomes critically important and challenging due to limited available bandwidth. Essentially, tasks like mission planning, platform and soldier logistics management, battlespace analysis, collaboration, fire and effect controls, and network management will be done on the move. All of the 14 FCS platform types—manned ground vehicles, unmanned ground vehicles, unmanned air vehicles, and unattended ground sensors—are expected to have network elements that will enable them to share information and coordinate with one another. These elements include: Sensors.
Sensors are the hardware and software that will provide FCS with the ability to “see first” and achieve situational awareness and understanding of the battlefield. These sensors will include such functions as search and detection of enemy fire, personnel, munitions, minefields, and terrain. The intelligence, surveillance and reconnaissance sensors will be integrated onto all manned and unmanned ground vehicles and aerial platforms, and will be capable of accomplishing a variety of missions that include, among others, surveillance over wide areas and target detection, enabling survivability. The unmanned aerial vehicles will be able to maneuver to an area of attack and the on-board sensors will provide surveillance of targets and terrain, among other functions. There are two types of unattended ground sensor systems that FCS will use—the tactical unattended ground sensors will provide intelligence, surveillance, and reconnaissance awareness to the BCTs, while urban unattended ground sensors will support clearing operations in confined spaces or urban chokepoints. According to the Army, complex data processing, filtering, aided target recognition, and fusion will be supported by software to provide warfighters with vital information. For example, the sensor data management software will organize the sensor data and track the information received from sensors. Figure 1 shows some types of FCS sensors. Software. Software is expected to control about 95 percent of FCS’s functionality and will be included in all FCS platforms. In its simplest form, software is the collection of computer programs and procedures that perform some task on a computer system or platform. It includes: (1) system software such as operating systems, which interface with hardware to provide the necessary services for application software; (2) middleware, which controls and coordinates distributed systems; and (3) application software such as word processors, which perform productive tasks for users. Overall development of FCS software is being managed by the LSI in cooperation with the Army’s FCS Program Office. There are over 100 software vendors involved in the development of software programs for FCS, including the LSI, 14 first-tier contractors, and other sub-contractors. Over 75 percent of software being developed for FCS is to operate the network. Network software is expected to integrate the collection of individual systems into a system of systems. This software will include the System of Systems Common Operating Environment (SOSCOE), Network Management System, Battle Command and Mission Execution, Sensor Data Management, Warfighter Machine Interface, and others. These will be included on all the FCS platforms and will perform a variety of functions. For example, software on platforms is to control the individual systems, such as radios and air and ground vehicle communications. SOSCOE is the operating environment that serves as the middleware between the operating systems and the software applications, integrating all other FCS software. The Battle Command software is to provide functions such as mission planning and preparation, situational understanding, and battle management and mission execution. Warfighter Machine Interface software is expected to provide the visual interface of the network to the warfighter. According to the Army, Warfighter Machine Interface is “the face of the FCS network,” which includes the display of services, touch screens, and buttons. 
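To make the layering just described concrete, the sketch below models the three tiers the report names: system software (the operating system), SOSCOE-style middleware, and an application such as Battle Command. This is a minimal illustration only; the class and method names are our own assumptions, not actual FCS interfaces.

```python
class OperatingSystem:
    """System software layer: interfaces with the hardware (e.g., a radio)."""
    def transmit(self, message):
        print(f"radio transmit: {message}")


class Middleware:
    """Stand-in for SOSCOE: sits between the operating system and the
    applications, coordinating the distributed software."""
    def __init__(self, operating_system):
        self.os = operating_system

    def publish(self, topic, data):
        # A real middleware would route this across the network; here it
        # simply hands the message to the layer below.
        self.os.transmit((topic, data))


class BattleCommand:
    """Application layer: mission planning and battle management."""
    def __init__(self, middleware):
        self.middleware = middleware

    def plan_mission(self, objective):
        self.middleware.publish("mission.plan", {"objective": objective})


BattleCommand(Middleware(OperatingSystem())).plan_mission("secure objective A")
```

The point of the layering is that applications such as Battle Command or the Warfighter Machine Interface never talk to the hardware directly; they depend only on the middleware, which is why SOSCOE's maturity matters to every other software package.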
The Warfighter Machine Interface will provide a visual picture of the battlespace and allow collaboration across the force. Figure 2 shows how the warfighter may see the battlefield through the network. Integrated Computing System. The integrated computing system is the on-board computer that will fit into the various FCS platforms. There are eight types of Integrated Computing Systems that vary in size to fit into the various FCS platforms—manned ground vehicles, unmanned aerial vehicles, and unmanned ground vehicles. The computing system is expected to provide an integrated common operating environment to manage processing, secure the system, and allow access to the network on the move. It is also envisioned to support battle command applications, sensor processing, communications, and weapons and platform management, and to have embedded security and safety features that will help ensure a secure operating environment with certified firewall and network intrusion protection. Joint Tactical Radio System (JTRS)/Warfighter Information Network-Tactical (WIN-T). The Army plans to use the JTRS and WIN-T radios that employ "software-defined radio" technology, in which many radio functions are performed by computer processing and software rather than by fixed hardware. These and other critical software-intensive technologies are being developed outside of FCS control—termed complementary programs—and are expected to interoperate with existing systems and provide additional communications capability. The JTRS family of software-based radios is to provide the high-capacity, high-speed information link to vehicles, weapons, aircraft, sensors, and soldiers, while WIN-T is to provide high-bandwidth connectivity linking Army units on the move with higher levels of command and other forces, and to serve as the Army's tactical extension to the Global Information Grid. Such capabilities include access to maps and other visual data, communication via voice and video with other units and levels of command, and the acquisition of information directly from battlefield sensors. The JTRS family of programs includes the Ground Mobile Radios that are being developed for vehicles. Smaller JTRS Handheld, Manpack, and Small Form Factor radios are being developed that will be carried by soldiers and embedded in several FCS core systems. Software will be used to control how JTRS radios work. For example, JTRS radios will use two software waveforms called the Wideband Networking Waveform and the Soldier Radio Waveform. The function of the Wideband Networking Waveform software is to provide communications signals, routing, transport, network management, quality of service, information assurance, and mobility. The Soldier Radio Waveform is being developed for JTRS radios—the Ground Mobile Radios and the Handheld, Manpack, and Small Form Factor radios—and will primarily be used for tactical networking by soldiers, unattended systems, and radios embedded in munitions. Because FCS has unique applications and networking needs, the program is responsible for integrating these into its distributed applications running on SOSCOE. Figure 3 shows the JTRS radios. The Army is faced with significant management and technological challenges that place development of the FCS network at risk. All of the projected FCS capabilities are heavily dependent on wide availability and high performance of the network. Further, the preliminary design of the network is still maturing, and much development and integration of the network hardware and software remain.
It has taken almost 5 years for the Army and LSI to develop an understanding of what the network needs to be, what may be technically feasible, how to build it, and how to demonstrate it. In addition, the definition of the detailed network requirements is still not complete, and there are numerous risks that must be overcome, such as the constraints imposed by a mobile ad hoc network, gaps between FCS network design and complementary program requirements, and interoperability issues with strategic networks of the Global Information Grid. While progress has been reported on software development, the continued growth in software code and the underestimation of what it will take to develop and integrate software pose risks to the successful development of the network. Although the maturity of the network design is still a work in progress (i.e., numerous high risks remain and full network demonstration is years away), the Army has achieved an understanding of what the network needs to be, what may be technically feasible, how to build it, and how to demonstrate it. However, in addition to the challenges and risks that need to be addressed, much learning and work remain before the Army and LSI can mature the network. For example, the Army and LSI are still determining what network management means in terms of: (1) what is needed to support each specific mission (radios, routers, satellites, computers, information assurance devices, and policies); (2) how to allocate network resources to the mission spectrum; and (3) how to fuse, process, and present extensive FCS sensor data to appropriate users. They are also learning how to maintain the network, such as monitoring the status and performance of the network (hardware faults, network quality of service, and overall performance); managing the spectrum to ensure connectivity; avoiding interference; and reconfiguring the network in real time based on changing network conditions and mission-critical traffic. To provide managed communication services between the soldiers, platforms, and sensors to complete military missions successfully, the Army must decide what information the individual users will need and its priority, where that information may reside, and how and when to get it to the user. For example, current plans call for the network supporting a BCT to include more than 5,000 nodes on over 1,500 radio sets running at least four different advanced networking waveforms, supporting networks and sub-networks interconnected by gateways, and carrying 3 million identified, point-to-point information exchange requirements. According to the Army's FCS program office, the primary interface types for FCS will include discovery, publish/subscribe, and multicast methods. Given the reality that the amount of traffic to be sent over the network may exceed its capacity, assuring end-to-end quality of service over the network presents a major challenge, and the Army and LSI have undertaken studies to better understand it. The Army and LSI are in the midst of developing the next generation of wireless communications, referred to as the mobile ad hoc network, which is a fundamentally new capability that presents a host of technical challenges. For example, the mobile ad hoc network will operate with lower network capacity and have fewer options for increasing capacity due to limitations on the amount of radio frequency spectrum that is available. Performance of the ad hoc network is expected to decrease as more radios or nodes are added and eventually can reach an unacceptable level.
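The scalability problem can be illustrated with a simple back-of-the-envelope model. The sketch below assumes the widely cited Gupta-Kumar theoretical result that per-node throughput in an ad hoc wireless network scales on the order of W/sqrt(n log n); the 100 Mbps aggregate figure is hypothetical and is not an FCS performance number, and the node counts simply echo the BCT network described above.

```python
import math

def per_node_throughput(aggregate_mbps, n_nodes):
    """Per-node throughput under the Gupta-Kumar scaling law for ad hoc
    wireless networks, Theta(W / sqrt(n log n)). Illustrative only."""
    return aggregate_mbps / math.sqrt(n_nodes * math.log(n_nodes))

for n in (50, 500, 1500, 5000):
    print(f"{n:>5} nodes: {per_node_throughput(100.0, n):5.2f} Mbps per node")
```

The specific numbers are notional, but the trend is the point: because every added radio both consumes and relays traffic, each node's share of the channel shrinks as the network grows.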
In practical terms, the size of the network may reach a maximum when all fixed capacity is consumed for routing traffic from other radios or nodes and no capacity is available for local consumption. In a network of limited capacity, decisions need to be made on how to control admission to the network, account for network resources, and ensure services on an end-to-end basis—and to do so in a mobile ad hoc network environment with varying routes and link capacities. As a result, the Army and LSI are working on how best to allocate functions throughout the FCS system of systems. Further, unlike common wireless systems that have access to the Internet—such as cellular and wireless networking protocols where every node is connected directly to the network by a single local wireless link—the FCS information network will change dynamically as the mobile nodes are expected to be able to communicate with each other while on the move. In the FCS information network, most network nodes will not have local access to the network. Thus, each radio must also be a router, meaning that it is responsible for passing traffic (voice, data, and video) from other radios as well as traffic local to the radio. As a result, networking becomes extremely difficult for the following reasons: The FCS information network is wireless and, consequently, its bandwidth is constrained by the limited availability of radio frequency spectrum. A mobile ad hoc network has known characteristics that pose difficulties in providing quality of service, such as, among others, the lack of precise information about network performance, the lack of central control, and the insecurity of the wireless medium. The research community is still studying various approaches to these open problems and the trade-offs among them, because the problems are not yet fully understood. Because these problems have not been solved and are not supported by an existing and proven technology base, there is serious concern whether the Army and LSI can overcome them within the current schedule. While some progress is being made to understand what the network needs to be, how to build it, and how to demonstrate it, the Army and LSI have identified major technical and integration risks to be addressed in order to meet overall FCS requirements. In July 2007, the Army and LSI reported their findings from a network review that identified 7 high risks and 16 medium risks, 23 in total, specific to the FCS network. Although Army and LSI officials are confident that such risks can be addressed, the scale and complexity of what is involved are without precedent. Among others, network risks include: Enterprise network performance and scalability. There is a high likelihood that FCS network performance will be affected because ad hoc networks have limited scalability, and performance decreases as more radios are added. End-to-end quality of service on mobile ad hoc networks. The probability is high that the FCS network will not be able to ensure that the information with the highest value is delivered on time to the intended recipients. Failure to support the warfighter in defining and implementing command intent for information management would result in substantially reduced force effectiveness. These capabilities are dependent on the actual performance of JTRS and WIN-T, both of which have their own technology, development, and programmatic difficulties and are at risk of being delayed or delivering incomplete capabilities.
The FCS Program Office and LSI are working closely with the program offices responsible for managing these complementary programs, but synchronization of the detailed requirements is still problematic. End-to-end interoperability with strategic networks of the Global Information Grid. Interoperability with strategic networks of the Grid will be another challenge; given the already stressed conditions envisioned for FCS tactical networks, it will be technically demanding. Soldier radio waveform availability. The soldier radio waveform provides functional capability that is needed to support many FCS systems but may not be completed in time to support FCS. These capabilities facilitate interoperability functions between the FCS family of systems. The development of waveforms remains a technically challenging and lengthy effort, which involves complex software development and integration work. The program has already experienced schedule delays, cost increases, and requirements changes. Because these functional capabilities are critical to FCS's performance, the delays will negatively affect the schedule. System of Systems Common Operating Environment availability and maturity. There is recognized risk that SOSCOE may not reach the maturity level required to meet program milestones. There are also recognized risks associated with interoperability of the software and dissemination of data to the mobile ad hoc network. Software productivity. There is recognized high risk that the LSI and its contractors may not be able to build, test, and integrate as much software as planned in the projected times. If software productivity falls short of planned efforts, the overall software build schedules will slip by 2 to 4 months, and integration will be correspondingly delayed. The amount of estimated software code required for FCS has recently increased to 95.1 million lines. This is nearly triple the original estimate made in 2003 and by far the largest software effort for any weapon system. Software code is difficult to estimate, and underestimation is not unique to FCS. Compounding this inherent difficulty on FCS were the program's poorly defined requirements, indicative of its immaturity. Lines of code have grown as requirements have become better understood. While the Army believes the latest increases will not result in higher costs, independent estimates suggest otherwise. The Army and LSI continue to underestimate the size of the software needed for FCS. Studies show this is a common mistake among defense and private-sector organizations that develop software-intensive systems, and it can lead to longer schedules and increased costs. Apart from the sheer difficulty of writing and testing such a large volume of complex code, a number of risks face the FCS software development effort. As requirements have become better understood, the number of lines of code has grown significantly since the program began in 2003. Table 1 shows FCS code growth for total source lines of code (SLOC) and effective source lines of code (ESLOC). Since May 2003, projected SLOCs have increased by 61.4 million to an estimated 95.1 million lines of computer software code, almost tripling the original estimate. Similarly, ESLOCs increased by 6.8 million to 19.6 million lines of computer software code, a 53 percent increase. Since January 2006, both SLOC and ESLOC estimates have significantly increased.
For example, SLOC estimates increased by 31.3 million lines of computer code, or about 50 percent, while ESLOC estimates increased by 2.5 million lines of computer code, or about 15 percent. Army officials attributed this surge to operating system software that was greatly underestimated in 2003 when the program began. The latest estimates now include operating system software that will be used on the integrated computer system. While the Army and LSI have completed the first software build—and were close to completing the second of five total software builds at the time of our review—each build required more "actual" software coding than was originally estimated, further indicating that the effort needed to develop and integrate software may be greater than planned. For example, the ESLOCs for Build "0" increased 6 percent, from an estimated 0.96 million to 1.02 million actual source lines of computer code. Similarly, at the time of our review, ESLOCs for Build "1" had increased 17 percent, from an estimated 5.3 million lines of code to 6.2 million lines of computer code. Army officials maintain that these increases will not have a major impact on the program. However, the experiences of other organizations that develop software-intensive systems suggest otherwise, according to leading experts who conducted extensive research of over 20,000 software development projects spanning 18 years. For example, poor size estimation is one of the main reasons that major software-intensive acquisition programs ultimately fail. In fact, the defense industry, private sector, and academia note that software size is a critical factor in determining cost, schedule, and effort, and that failure to accurately predict software size results in cost overruns and late deliveries. According to guidance made available by the Software Technology Support Center at Hill Air Force Base for defense organizations that develop software, deviations in software size data indicate problems with faulty software productivity estimates; requirements stability, design, coding, and process; unrealistic interpretation of original requirements and resource estimates; and the rationale used to develop the estimates. A contributing factor in the Army and LSI's inaccurate software sizing estimates is that system-level requirements have not been fully defined, which makes it difficult to determine what will be needed in terms of software. In May 2003, the Army and LSI estimated that it would take about 34 million lines of code at a time when they were still trying to identify and understand the high-level requirements. Despite not fully understanding those high-level requirements, the Army proceeded with efforts to develop software for FCS. To date, estimating accuracy continues to be hampered by evolving requirements, immature architecture, and insufficient time to thoroughly analyze software subsystem sizing. The difficulty of estimating software size accurately is an indication that complexity increases as the design is better understood, and this serves to increase the level of effort. The potential consequences are longer development time and greater costs. Taking the latest code estimate into consideration, the total size of FCS's software is about four times larger than that of the next largest software-intensive defense program. Figure 4 compares FCS's software SLOC size estimate to those of the next two largest software-intensive defense programs—the Navy's P-8A Multi-mission Maritime Aircraft and the Joint Strike Fighter aircraft.
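The growth rates cited above follow directly from the reported figures. A quick check, with all values in millions of lines of code as reported:

```python
def pct_growth(old, new):
    """Percentage growth between two code-size figures."""
    return 100.0 * (new - old) / old

# Program totals since May 2003 (original SLOC = 95.1 - 61.4 = 33.7 million)
print(f"SLOC since 2003:  {pct_growth(95.1 - 61.4, 95.1):.0f}%")  # ~182%, almost triple
print(f"ESLOC since 2003: {pct_growth(19.6 - 6.8, 19.6):.0f}%")   # ~53%

# Build-level estimates versus actuals
print(f"Build 0: {pct_growth(0.96, 1.02):.0f}% over estimate")    # ~6%
print(f"Build 1: {pct_growth(5.3, 6.2):.0f}% over estimate")      # ~17%
```

Each check reproduces the percentages in the text, underscoring that the growth figures are internally consistent even as the estimates themselves keep moving.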
Independent cost analyses done for FCS have cited software as a likely source of cost growth. According to a June 2006 report issued by the Office of the Secretary of Defense's Cost Analysis Improvement Group, the FCS program was found to be at risk of higher costs due to, among other things, the size and complexity of the FCS software development program. The Cost Analysis Improvement Group also said that the network is at risk because it is tied to the JTRS and WIN-T programs, which could cause delays in FCS's development schedule. Another study, issued in April 2007 by the Institute for Defense Analyses for the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, found that Army plans for developing FCS, including the network, were optimistic with regard to the time and money needed for the program. The Institute projected at least $3 billion in additional FCS development costs due to unplanned software effort, including code growth, software integration difficulties, and longer development schedules. The Army does not agree with the Institute's assessment and believes these issues can be offset. The Army and LSI have adopted a number of disciplined software practices, but their effective implementation at the software developer level has been hampered by evolving system-level requirements. In accordance with CMMI and with the Software Engineering Institute in an advisory role, the Army and LSI have adopted software practices that are known to be successful in fostering quality software development, such as disciplined processes, structured management review processes, and an "evolutionary" development process. In our analysis of five FCS software developers, we found that requirements management was the cause of most problems, indicating that a key practice for managing and developing requirements has not been effectively implemented for the five software packages reviewed. For FCS software development, the Army and LSI are jointly in charge of oversight and decision-making and have attempted to carry out these responsibilities effectively through the use of disciplined processes, structured management review processes, and an "evolutionary" development process. Seventy-five percent of the FCS software is being developed by 14 software developers (all certified at CMMI level 3 or above) who are developing 52 major software packages. Detailed information about those software developers and what they are responsible for delivering is provided in appendix III. Through the use of disciplined processes, the Army and LSI have strived to organize and synchronize the large amount of concurrent software development that is taking place. In keeping with the spiral model for development, software development is divided into five builds, and each build has an "early" and "final" stage. Furthermore, each build has four phases—requirements, design, code, and test. Essentially, the spiral model condenses all four phases into builds so that certain interim capabilities can be provided and "spun out" before the entire program is completed. Figure 5 shows a traditional spiral model. The LSI's structured management review processes involve managing network and software development through the use of several mechanisms that keep track of a series of weekly and monthly program meetings, agendas, progress, and issues.
In addition, key metrics are tracked by the software developers and reported to the LSI, such as defect age, process compliance, product defects, progress, requirements stability, software development environment, software lines of code, code reuse, and staffing. In the event these metrics reveal a problem or an undesirable trend, the LSI takes action to attempt to remedy the situation. Anchor points are also used by the LSI to maintain structured management review. At a minimum, three software development reviews will be performed for software within a build—life cycle objectives, life cycle architecture, and test readiness reviews. Developers conduct life cycle objective anchor point reviews (or software requirements reviews) to communicate their detailed understanding of the functionality and performance to be provided by the software item(s) for a given build. Life cycle architecture anchor point reviews (or preliminary design reviews) demonstrate the feasibility of implementing the planned functionality for the software item(s) for a given build within the planned architecture, requirements, cost, and schedule. Successful completion of a formal test readiness review means that the developer is ready to start formal qualification testing for the applicable software items for a given build. The Army and LSI also use the evolutionary development process, in which software builds are begun with the understanding that the user need is not fully understood and all requirements cannot be defined up front. In this strategy, user needs and system requirements are partially defined up front and then refined in each succeeding build. The way in which all 52 software packages are being developed at the same time has been called concurrent engineering, which has pros and cons. A pro is that concurrent development aims to keep the program as a whole on schedule. But software developers reported that when requirements are late or ambiguous, the concurrent engineering approach has a negative cascading effect, as all of the software efforts are interrelated. The Army and LSI are also using modeling and simulation, which takes place in System Integration Labs and at the System of Systems Integration Lab (SoSIL) in Huntington Beach, California. Since integration and interoperability will be the major challenge in building FCS, the SoSIL is intended to provide a distributed capability to develop and integrate not only the software but also early hardware and system prototypes, to assess and ultimately verify the integration and interoperability of the FCS system of systems, and to give program management critical feedback from the user. Our analysis of the LSI's software practices and the effect they are having on five subcontracted software developers revealed key problem areas that may be indicators of broader software development problems. We focused mainly on the following areas: Agreement Management, Acquisition Requirements Development, Project Monitoring and Control, Project Planning, and Requirements Management. Of these areas, Requirements Management was found to be the cause of most problems, indicating that a key practice for managing and developing requirements has not been effectively implemented for the five software packages reviewed. In practice, phases within a build are becoming concurrent, and the completion of one build is overlapping the start of the next build, as the sketch below illustrates.
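The overlap is easy to state precisely: one build's test phase is still open when the next build's requirements phase begins. The sketch below shows the check; the dates are hypothetical, chosen only to illustrate the pattern, and are not actual FCS schedule data.

```python
from datetime import date

# Hypothetical (start, end) windows for two consecutive builds.
builds = [
    {"name": "Build 1",
     "requirements": (date(2006, 1, 1), date(2006, 6, 30)),
     "test": (date(2007, 1, 1), date(2007, 9, 30))},
    {"name": "Build 2",
     "requirements": (date(2007, 4, 1), date(2007, 12, 31)),
     "test": (date(2008, 7, 1), date(2009, 3, 31))},
]

def overlaps(a, b):
    """True if two (start, end) date ranges intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

for prev, nxt in zip(builds, builds[1:]):
    if overlaps(prev["test"], nxt["requirements"]):
        print(f"{nxt['name']} requirements begin before {prev['name']} testing ends")
```

When requirements for the next build are still moving while the current build is in test, any change ripples into design and code that are already under way, which is the cascading effect the developers described.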
Software developers stated that additional time, cost, and deferred functionality were the most common results of poorly defined, late, or unstable requirements. The continuing evolution of FCS system-level requirements, including changes driven by Army decisions on what it can afford to develop, and the aggressive pace of the program are causing disruptions at the software developer level. In an effort to control overall FCS development costs, the Army is reviewing many areas of FCS development, including software, to potentially eliminate areas that are not absolutely essential or critical. While eliminating nonessential areas is a good practice, the drawback is that doing so changes requirements, directly affecting the design and writing of software code. According to LSI officials, changes at the Operational Requirements Document level are not major or frequent, and requirements at that level have actually decreased. Even so, requirements growth and changes are occurring at the system level, which has a cascading effect on the detailed requirements all the way down to the software developer level. The result is that requirements provided to software developers are poorly defined, late, or unstable. For example, developers at iRobot told us they received poorly defined requirements specifying that the small unmanned ground vehicle have a fire extinguisher onboard and be able to withstand direct lightning strikes. These requirements were not practical for a man-packable robot weighing less than 30 pounds; the Army and LSI had failed to account for the fundamental differences between this small robot and larger unmanned ground vehicles such as the 2-1/2 ton Multifunction Utility/Logistics Equipment vehicle. The developer of Battle Command and Mission Execution told us that additional requirements were received after the life cycle architecture review, which is considered late in development. The SOSCOE developers also told us they received late requirements for build 1.8, which caused problems for many other software developers, since the late requirements caused them to deliver build 1.8 late and with missing functionality that many developers had expected and were counting on for their own work packages. SOSCOE developers stated that this happened because of misaligned schedules from the top down, and indicated that they too had experienced problems with requirements. Unstable requirements have also been a problem for developers of the Network Management System, who reported that changed requirements have caused rework in many cases. Table 2 summarizes problems experienced by the software developers we visited. As shown in table 2, four of the five software developers (and SOSCOE) that we met with report that the problems with requirements have resulted in functionality being deferred to future builds, or waived altogether, for the sake of keeping to the existing schedule. Deferring work into the future means that the associated software code writing and testing will take place later than planned, meaning that more code will be written later and the associated functionality will not be testable until later. These events help partially explain the growth of software estimates already recorded for the early builds.
Furthermore, this indicates that less functionality than planned has been delivered and that software estimates will only grow larger in future builds. Overall, software developers told us that these problems could have been avoided if they had been allowed sufficient time to understand and analyze the requirements. This is why the aggressive pace of the program presents such a problem for the development effort. The current FCS practice is to overlap builds more than the traditional spiral model does, as is seen in figure 6. Before the testing phase is complete on one build, the requirements phase of the next build will start. Program officials told us that the purpose of this is to set requirements so that the next build is ready for design by the time the former build has completed testing. In practice, however, this has been an issue because software developers report that evolving requirements have caused them to interpret and implement changes in requirements well into the design and code phases, compromising the amount of time allotted for testing. This is not to say that the requirements should have been defined more quickly; the state of the requirements accurately reflects the maturity of the FCS program. Rather, it is the relative immaturity of the program, coupled with its aggressive pace, that amplifies requirements instability, the pronounced overlap of the FCS builds, and the cascading effect on software developers. It is unclear whether network requirements, including the software to be developed, will be adequately defined and designs completed at the preliminary design review scheduled for February 2009. To date, only some elementary concepts of the FCS network, such as connecting and exchanging information among limited network nodes, have been demonstrated (Experiment 1.1). The first major demonstration of the FCS network is the limited user test scheduled for fiscal year 2012, which will be at least a year after the critical design review and only about a year before the start of FCS low-rate initial production. One of the key objectives of that test will be to identify the contributions and limitations of the network on the ability of the FCS brigade combat team to conduct missions across the full spectrum of operations. The Army hopes that this test will be enough to meet the congressional requirement to conduct a network demonstration prior to obligating any funds for FCS low-rate initial production of manned vehicles. A substantial amount of development work remains before the Army and LSI can demonstrate the full expected capability of the network. Modeling and simulation are being employed as key parts of the FCS network and software development process. While modeling and simulation is a cost-effective approach for proving out technological advances incrementally, it has limitations in predicting the performance of first-of-a-kind systems. For example, commercial firms have learned that modeling and simulation is very reliable for predicting the performance of products that are evolutionary advances over existing products, for which there is a large base of experience to draw from. However, it is generally understood that without sufficient data on past behavior and a better understanding of assumptions, the results of modeling and simulation may not entirely reflect the workings of new or advanced systems.
A number of limited demonstrations have been scheduled within the FCS system development and demonstration phase to help move the Army toward a network-centric environment. To date, only basic network concepts, such as connecting and exchanging information among limited network nodes, have been demonstrated (Experiment 1.1). The Army plans to demonstrate some network functions, such as linkage with remote sensors, during the spinout demonstration in 2008. Other demonstrations are scheduled in 2010 and 2012. However, the fully automated battle command system is not expected until 2013, when the Army envisions 100 percent of network capabilities, such as full networked joint and multinational battle command, full interoperability and network integration with platforms, full sensing and targeting, full networked logistics, and planning and training services. This event will occur near the time of the FCS production decision, after the designs of the manned ground vehicles have been established. At the time of the FCS milestone review in 2009, the extent of network demonstration is expected to be very limited. For example, the Army plans to demonstrate, among other basic things, sensor control, terrain analysis, and unmanned platform planning and operations in 2008. As mentioned earlier, network design and maturity are in the early stages, as the Army and LSI are still determining what network management means in terms of what is needed to support each specific mission, how to allocate network resources to the mission spectrum, how to fuse, process, and present extensive FCS sensor data to appropriate users, and how to maintain the network. The Army is still in the midst of stabilizing the network and software requirements, and hardware and software designs are still maturing. Further, there is uncertainty about when the network requirements will be fully defined. More importantly, it is unclear, if not doubtful, that recognized technical risks will be reduced to acceptably low levels by the 2009 review. The first major demonstration of the FCS network is limited user test 3, scheduled for fiscal year 2012, which will be at least a year after the critical design review and about a year before the start of low-rate initial production for the core FCS program, scheduled to begin in 2013. By then, billions will have been spent, and it may be too late to fix any network problems revealed in this significant test before production begins. At the critical design review in 2011, the Army expects that FCS network capabilities for manned platform planning and operations will be complete. In section 211 of the recently enacted National Defense Authorization Act for Fiscal Year 2008, Congress directed that a network demonstration be conducted prior to the obligation of funds for low-rate initial production (Milestone C) or full-rate production of FCS manned ground vehicles. One of the key objectives of that test will be to use FCS prototypes to identify the contributions and limitations of the network on the ability of the FCS brigade combat team to conduct missions across the full spectrum of operations. Limited user test 3 will be pivotal for the FCS program because it is the first test event to incorporate each of the 14 FCS platforms, and it serves as a seminal event to generate system-of-systems test data to underpin the modeling and simulation environment used to support the test.
However, the fully automated battle command system is not expected until 2013, when all the software application capabilities are expected, including full networked joint and multinational battle command, interoperability, integration of all platforms, integrated training, sensing and targeting, and other functions. Even if the demonstration of the network takes place in 2012 as planned, it will follow the design reviews of the other FCS systems. The design of these systems depends significantly on the performance of the network, such as its delivered quality of service. A number of FCS systems or platforms, such as the manned ground vehicles, are scheduled to have their critical design reviews in fiscal years 2009-2010, about 2 years before the first major demonstration of the network in fiscal year 2012. For the manned ground vehicles, most developmental prototypes will be in testing, and the Army will have begun preparation for low-rate initial production of these platforms, before the network is demonstrated. This is a significant risk because the software, which supports the information network, is critical to the design and performance of the platforms and is expected to control about 95 percent of FCS's functionality. If the network underperforms, it could affect the lethality and survivability of the vehicles. Because of this sequence of events, there will be little opportunity for the vehicle designs to compensate for any shortfalls in network performance. The advanced information network is the linchpin of the Army's FCS concept; yet it is unclear whether, how, or when the Army will be able to demonstrate that the network performs as needed. The Army and the LSI have focused a great amount of attention on the network and software, as evidenced by the sound development practices they have attempted to put in place. However, network and software requirements are not yet stable at the system level and below, which has caused rework and deferred functionality. Such instability may be expected given the relative immaturity of FCS, but the program is halfway through development and the remaining schedule is very ambitious; program decisions have been and will continue to be made in advance of acquisition knowledge. Demonstrations to date have been small and not sufficient—nor intended—to prove the network's performance. Large-scale demonstrations of the network's ability to deliver the quality of service essential to the FCS fighting concept will not come until 2012, the year before the low-rate initial production decision, assuming the remainder of development goes as planned. Even if this date is met, it will trail the critical design reviews of the individual FCS systems by 2 years. This is disquieting because the designs of the systems—including the manned ground vehicles—depend on the quality of service delivered by the network. Finally, the overall magnitude of the FCS software effort has nearly tripled to 95 million lines of code. This growth gives credence to the higher cost estimates put forth by the Cost Analysis Improvement Group and the Institute for Defense Analyses—both of which concluded that the FCS software effort would be more extensive than the Army envisioned. For these reasons, it is essential that the software and network efforts be held to meeting clear performance criteria at key junctures that are linked to the network's needed quality of service.
These junctures include the 2009 milestone review, the 2009-2010 vehicle critical design reviews, the 2011 FCS critical design review, and the 2012 network demonstration. Allowing demonstrations or network functions to be deferred past these junctures on the basis that modeling and simulation results are promising will not suffice. Given the difficulty of predicting performance in full-scale operations, testing must be the primary basis for judging the sufficiency of progress. Because the performance of the network and the success of the software effort are not assured, decision makers should allow for the possibility that full success will not be achieved. Thus, it will be wise to keep alternative courses of action viable to guard against such an eventuality. We recommend that the Secretary of Defense: Direct the FCS program to stabilize network and software requirements on each software build to enable software developers to follow disciplined software practices, including having realistic and synchronized test schedules. Establish a clear set of criteria for acceptable network performance at each of the key program events, including the 2009 milestone review, the platform and system-of-systems critical design reviews, the major network demonstration in 2012, and Milestone C for the core FCS program. We further recommend that the Secretary of Defense, in setting expectations for the 2009 milestone review, include a thorough analysis of network technical feasibility and risks; synchronization of network development and demonstration with that of other elements of FCS, such as the manned ground vehicles; and a reconciliation of the differences between independent and Army estimates of network and software development scope and cost. DOD concurred with our recommendations and stated that growing the networking capability of the ground forces is a priority and that network development for FCS is a critical element in the Army's effort to modernize its tactical network. In recognition of FCS's network development and its importance to the Army's efforts to modernize the tactical network, DOD stated that criteria for network performance would be established and documented in the FCS acquisition strategy, the system engineering plan, and test plans. However, because these documents will not be updated until the 2009 milestone review, it could remain unclear what will be expected in terms of network performance at the review itself. DOD should establish in advance the network criteria that will be applied at the 2009 milestone review, for example at the annual review to be held in 2008. In concurring with our recommendation on setting expectations for the 2009 milestone review, DOD stated that an analysis of network technical feasibility and risks will inform the FCS 2009 review. DOD further stated that manned ground vehicle and network development and demonstration will be synchronized and that the 2009 FCS review will evaluate the network and software cost estimates and the cost risks identified for the development, integration, and testing of the FCS network and software. These are constructive steps that will contribute to the FCS milestone review in 2009. However, we believe that DOD needs to go beyond evaluating cost estimates and risks. The differences between the Army's estimate and independent cost estimates have been substantial.
The lower Army estimate has been allowed to prevail, without a determination that it is the better estimate—that is, the one more likely to accurately predict the actual cost of FCS. DOD, in determining the official cost estimate for FCS, should provide the rationale for its position. Heretofore, the Army's estimates have been constrained by available funding, and Army officials have stated that they will reduce program scope if costs are higher than expected. If FCS is found to be worth doing in its entirety at the 2009 milestone review, its most likely cost should be understood. We also received technical comments from DOD, which have been addressed in the report as appropriate. We are sending copies of this report to the Secretary of Defense; the Secretary of the Army; and the Director, Office of Management and Budget. Copies will also be made available to others on request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors are listed in appendix IV. To develop the information on the Future Combat System program's network and software challenges and technological risks, assess whether disciplined software practices have been effectively implemented, and determine whether the Army will have the necessary network and software at key program events, we interviewed the Assistant Secretary of the Army (Acquisition, Logistics and Technology); the Program Manager for the Future Combat System (Brigade Combat Team); the Future Combat System Lead Systems Integrator; officials from the Army's Software Engineering Directorate; and Lead Systems Integrator One Team contractors. We selected 5 of 52 software packages and conducted detailed structured interviews to determine how the use of the LSI's software best practices affected the developers at various levels within FCS. In consultation with the Army, the LSI, the University of Maryland (Fraunhofer Center for Experimental Software Engineering), and experts from GAO's Applied Research and Methods group, we selected software packages that are critical to FCS's network and that would provide a good cross section of the development efforts being conducted by contractors under the LSI's direction. This software included Battle Command and Mission Execution, Combat Identification, Network Management System, Small Unmanned Ground Vehicle, and Training Common Components. Limited work was conducted on SOSCOE. We reviewed, among other documents, the Future Combat System's Integrated Master Schedule and CMMI Evolution, Test and Evaluation Master Plan, and Software Configuration Management, Development, Integration, Quality Assurance, Risk Mitigation, and Measurement Plans. In addition to CMMI for Acquisition, Version 1.2, we also reviewed individual software developers' Software Development, Configuration Management, Integration, Quality Assurance, System Engineering Management, Risk and Opportunity Management, and Test Plans; Software Architecture Description Documents; and Software Requirements Specifications. We attended FCS Board of Directors' meetings and the Delta Engineering Iteration 2 Definition Anchor Point and System of Systems Build 2 Definition Checkpoint Review.
In our assessment of the FCS network and software development, we used the knowledge-based acquisition practices drawn from our large body of past work, as well as DOD's acquisition policy and the experiences of other programs. We discussed the issues presented in this report with officials from the Army and the Office of the Secretary of Defense and made changes as appropriate. We performed our review from July 2007 to March 2008 in accordance with generally accepted government auditing standards.
Appendix III: List of FCS Software (Network & Non-network) Packages Developed by Contractors (as of July 2007)
In addition to the individual named above, William R. Graveline, Assistant Director; John M. Ortiz Jr.; Letisha T. Watson; Helena Brink; Noah B. Bleicher; Robert S. Swierczek; and Senior Technologists Madhav S. Panwar and Dr. Hai V. Tran made key contributions to this report.
Related GAO Products
Defense Acquisitions: Role of Lead Systems Integrator on Future Combat Systems Program Poses Oversight Challenges. GAO-07-380. Washington, D.C.: June 6, 2007.
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007.
Defense Acquisitions: Future Combat System Risks Underscore the Importance of Oversight. GAO-07-672T. Washington, D.C.: March 27, 2007.
Defense Acquisitions: Key Decisions to Be Made on Future Combat System. GAO-07-376. Washington, D.C.: March 15, 2007.
Defense Acquisitions: The Army Faces Challenges in Developing a Tactical Networking Strategy. GAO-07-10SU. Washington, D.C.: October 4, 2006.
Defense Acquisitions: Restructured JTRS Program Reduces Risk, but Significant Challenges Remain. GAO-06-955. Washington, D.C.: September 11, 2006.
Defense Acquisitions: Improved Business Case Key for Future Combat System's Success. GAO-06-564T. Washington, D.C.: April 4, 2006.
Defense Acquisitions: Improved Business Case Is Needed for Future Combat System's Successful Outcome. GAO-06-367. Washington, D.C.: March 14, 2006.
Defense Acquisitions: Business Case and Business Arrangements Key for Future Combat System's Success. GAO-06-478T. Washington, D.C.: March 1, 2006.
Defense Acquisitions: Resolving Development Risks in the Army's Networked Communications Capabilities Is Key to Fielding Future Force. GAO-05-669. Washington, D.C.: June 15, 2005.
Defense Acquisitions: Future Combat Systems Challenges and Prospects for Success. GAO-05-428T. Washington, D.C.: March 16, 2005.
Defense Acquisitions: Future Combat Systems Challenges and Prospects for Success. GAO-05-442T. Washington, D.C.: March 16, 2005.
Defense Acquisitions: The Global Information Grid and Challenges Facing Its Implementation. GAO-04-858. Washington, D.C.: July 28, 2004.
Defense Acquisitions: The Army's Future Combat Systems' Features, Risks, and Alternatives. GAO-04-635T. Washington, D.C.: April 1, 2004.
Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD's Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004.
Issues Facing the Army's Future Combat Systems Program. GAO-03-1010R. Washington, D.C.: August 13, 2003.
Defense Acquisitions: Army Transformation Faces Weapon Systems Challenges. GAO-01-311. Washington, D.C.: May 2001.
Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.
The Army's Future Combat System (FCS) requires a software-based advanced information network to meld people, sensors, and weapons into a cohesive fighting force. Because software controls 95 percent of FCS's functionality, it will determine the success or failure of the program. The Army contracted with the Boeing Company as a lead systems integrator (LSI) to define, develop, and integrate FCS, including software development. GAO must by law report annually on FCS. This is one of two reports to meet this requirement. It addresses the risks facing the development of the network and software, the practices being used to manage software, and the timing of key network demonstrations. In conducting its work, GAO contacted numerous DOD, Army, and contractor offices; reviewed technical documents on software and network development and plans; attended meetings; and spoke with Army and other officials on various aspects of FCS network and software development. GAO also performed detailed work at five FCS software developers. Almost 5 years into the program, it is not yet clear if or when the information network that is at the heart of the FCS concept can be developed, built, and demonstrated by the Army and LSI. Significant management and technical challenges have placed development of the network and software at risk. These risks include, among others, network performance and scalability, immature network architecture, and synchronization of FCS with the Joint Tactical Radio System and Warfighter Information Network-Tactical programs, which have significant technical challenges of their own. Software being developed for the network and platforms is projected to total 95.1 million lines of computer code, almost triple the size estimated when the program began in 2003. FCS's software is about four times larger than that of the next two largest software-intensive defense programs. Although several disciplined practices are being used to develop FCS's network and software, the program's immaturity and aggressive pace during development have delayed requirements development at the software developer level. For example, software developers for 5 major software packages that GAO reviewed reported that the high-level requirements provided to them were poorly defined, late, or omitted in the development process. This caused the software developers to do rework or to defer functionality to future builds. In turn, these poor or late requirements had a cascading effect that delayed other software development efforts. It is unclear when or how it can be demonstrated that the FCS network will work as needed, especially at key program junctures. For example, in 2009, network requirements, including software requirements, may not be adequately defined nor designs completed at the preliminary design review, and at the FCS milestone review later that year, the network demonstration is expected to be very limited. The first major FCS network demonstration—the limited user test in 2012—will take place at least a year after the critical design review and only a year before the start of FCS production. That test will seek to identify how the network's contributions and limitations affect the ability to conduct missions. The test will be conducted after the designs have been set for the FCS ground vehicles, which poses risks because the designs depend on the network's performance.
A full demonstration of the network, with all of its software components, will not occur until at least 2013, when the fully automated battle command system is expected to be ready.
The Navajo Generating Station (NGS) is a 2,250-megawatt coal-fired power plant located near Page, Arizona. The plant, which became fully operational in 1976, is located approximately 12 miles from the northern boundary of the Grand Canyon National Park. The Salt River Project Agricultural Improvement and Power District (Salt River Project) operates the plant and owns 21.7 percent. The other owners and their shares are the Department of the Interior's Bureau of Reclamation, 24.3 percent; the Los Angeles Department of Water and Power, 21.2 percent; Arizona Public Service Company, 14 percent; Nevada Power Company, 11.3 percent; and Tucson Electric Power Company, 7.5 percent. The 1977 amendments to the Clean Air Act set as a national goal "the prevention of any future, and the remedying of any existing, impairment of visibility" in certain parks and wilderness areas where such impairment results from man-made air pollution. The amendments include a requirement that sources with emissions "which may reasonably be anticipated to cause or contribute to any impairment of visibility in any such area, shall procure, install, and operate" the best available retrofit technology. In determining the emissions limit that reflects the best available technology, several factors are to be taken into account, including the costs of compliance, the energy impacts and impacts other than those on air quality, the remaining life of the power plant, and the degree of improvement in visibility that may reasonably be anticipated to result from the use of the technology. EPA's final rule to limit emissions from NGS relied on the details of a negotiated agreement between the power plant owners and environmental groups, which EPA expects to result in greater emissions reductions at a lower cost than its initial proposal. The agreement increased the level of emissions reductions from the 70 percent EPA had proposed to 90 percent, with estimated annual costs dropping from a range of $91.9 million to $128.3 million down to $89.6 million. The amount of emissions removed annually is expected to increase from about 50,000 tons of sulfur to about 64,000 tons. The negotiations included officials representing the owners of the plant, environmental groups, the state of Arizona, and EPA. These officials recommended the negotiated agreement to EPA as an alternative to the agency's initial proposal for reducing the emissions from the power plant. In February 1991, EPA solicited comments on a proposed rule laying out a variety of strategies to reduce emissions from the power plant. EPA explained that, because of the uncertainty in determining the improvement in visibility expected to result from limiting the emissions, it was considering and sought comments on four options to limit these emissions—a 50-percent reduction, a 70-percent reduction, a 90-percent reduction, and allowing the plant owners to test alternative technologies and select one if it met minimum emissions reductions at a set cost. In addition to the four options, EPA also solicited comments on any other appropriate alternative for limiting sulfur dioxide emissions, such as controls used only on a seasonal basis. EPA's proposed 70-percent emissions limit was the same as the standard the agency used at the time for new facilities. EPA estimated that a 70-percent emissions reduction would eliminate about 50,000 tons of sulfur from the power plant's emissions annually and that the cost would range from $91.9 million to $128.3 million.
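The two removal figures are internally consistent. As a rough check (our arithmetic, not a figure from the rulemaking), the 70-percent option's removal of about 50,000 tons implies total uncontrolled emissions of roughly 71,000 tons of sulfur per year, so a 90-percent reduction removes about 64,000 tons:

\[ \frac{50{,}000}{0.70} \approx 71{,}400 \text{ tons per year} \quad\text{and}\quad 0.90 \times 71{,}400 \approx 64{,}300 \approx 64{,}000 \text{ tons.} \]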
Following EPA's initial proposal, representatives of the plant owners and environmental groups (the Grand Canyon Trust and the Environmental Defense Fund) met, at the recommendation of EPA, to discuss the most cost-effective control option. This led EPA, in early 1991, to facilitate discussions between these representatives to find a mutually acceptable control option. According to EPA, its participation included assisting in drafting documents to support a potential agreement between the parties and providing technical assistance. The parties met repeatedly during a 3-month period in an attempt to clarify all of the control options and their related costs. As a result of these discussions, the parties reached a negotiated agreement to, among other things, reduce sulfur dioxide emissions from the power plant by 90 percent. According to EPA, its final decision, issued in October 1991, substantially adopted the terms of this agreement. The agreement specified the time frames in which the emission control technology should become operational and the manner in which it is to be operated. The agreement specified that the three primary pieces of equipment ("scrubber" modules) should become operational over a 3-year period—the first unit by November 1997, the second by November 1998, and the third by August 1999. The emissions from all three units will be subject to a 90-percent reduction, with compliance determined by averaging emissions over 365 days of plant operation. The agreement also specified that the maintenance schedule for the plant would shift so that some planned maintenance would occur in the winter, thereby shutting down some of the plant's equipment and further reducing wintertime sulfur dioxide emissions. According to Salt River Project officials, two factors account for the lower expected project costs. First, the agreement allows the power plant to determine its compliance with EPA's emissions limit on an annual rather than a monthly rolling average basis, as initially proposed. Determining compliance on an annual basis is a less stringent requirement because it gives the plant more days over which it can average the short-term increases in emissions that occur when one of the scrubbers is malfunctioning or being repaired. As a result, the plant operators can comply with EPA's emissions limit without installing the expensive backup equipment they would otherwise have to operate on days when the primary equipment is not operating. According to a project engineer for the Salt River Project, with compliance determined on an annual basis, the plant can operate its emission control equipment on most days at a rate greater than that needed to cut emissions by 90 percent, making up for those days on which emissions are not controlled because the equipment is not operating.
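To illustrate the arithmetic of annual versus monthly averaging, the following minimal sketch uses hypothetical removal rates and outage days (not figures from the rule): scrubbers running above the 90-percent limit on most days can absorb an outage within a 365-day average that would cause a violation under a 30-day average.

import numpy as np

DAYS = 365
removal = np.full(DAYS, 0.93)  # assume scrubbers exceed the 90% limit on most days
removal[:10] = 0.0             # assume a 10-day scrubber outage with no control

print(f"Annual average removal: {removal.mean():.1%}")  # ~90.5%, complies

# The same outage falling within a single 30-day window would fail a monthly test:
month = np.full(30, 0.93)
month[:10] = 0.0
print(f"30-day average around the outage: {month.mean():.1%}")  # ~62.0%, violates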
Second, the agreement delays the initial installation of emission control equipment by almost 3 years, from January 1995 to November 1997, which allows the plant operators to complete the project in a more cost-effective manner. According to the plant operators, the additional time allows them to, among other things, better plan the engineering; that is, the operators have had more time to study emission control technologies and select what they consider to be the best technology at the lowest cost. Salt River Project officials also told us that staging construction over a longer period would allow them to reduce labor costs compared with those of an accelerated construction schedule. Despite this almost 3-year delay, EPA concluded that the terms of the final rule would result in greater visibility improvement than the proposed rule. In fact, EPA estimated that the emissions limit in its final rule would reduce by two-thirds the amount of pollution that would have been allowed under the proposed rule. EPA's estimate of an approximately 7 percent improvement in the winter seasonal average visibility results primarily from significant improvements expected to occur during certain winter weather conditions. Other, less substantial improvements are expected on other winter days. EPA initially estimated an approximately 14 percent improvement, primarily on the basis of a study by the National Park Service, but revised its estimate to approximately 7 percent to reflect the results of other analyses and studies. EPA noted that its revised estimate may be understated because it does not take into account other visibility improvements (1) below the rim of the Grand Canyon, (2) in seasons other than winter at the Grand Canyon, and (3) year round at other nearby national parks. Appendix II provides additional details on studies of visibility impairment in and around the Grand Canyon. EPA's initial estimate of an approximately 14 percent visibility improvement relied primarily on data from a Park Service study—the National Park Service Report on the Winter Haze Intensive Tracer Experiment (WHITEX)—of visibility impairment in the vicinity of the Grand Canyon. The study was designed to evaluate a variety of modeling approaches for attributing visibility impairment to a single source—NGS. Specifically, various models were to be evaluated for their ability to link NGS' emissions to winter visibility impairment at the Grand Canyon and other nearby national parks. In conducting this study, researchers released a traceable chemical from NGS' smokestack and tracked its movement to monitoring stations in the region, including at the Grand Canyon. The study concluded that NGS contributes approximately 40 percent on average to wintertime visibility impairment in the canyon and approximately 60 to 70 percent during the winter weather conditions in which NGS has the most severe effect. After considering information received following its proposed rule, EPA revised its estimate of the winter seasonal average visibility improvement to approximately 7 percent. This estimate translates into an increase in the average visual range from about 124 miles to about 133 miles. In revising its estimate, EPA relied on the WHITEX study, air monitoring information, and a visibility study conducted by the plant owners.
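The visual-range figures are consistent with the approximately 7 percent estimate (our arithmetic):

\[ \frac{133 - 124}{124} \approx 0.073 \approx 7\ \text{percent.} \]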
EPA estimated that the largest improvements from reducing emissions from NGS would occur during certain winter weather conditions. These conditions are, according to EPA officials, (1) high relative humidity, which facilitates the conversion of the plant's gaseous sulfur dioxide emissions to visibility-impairing sulfate particles, and (2) wind patterns that transport the emissions to the Grand Canyon. EPA estimated that these conditions occur between 10 and 15 times per winter, lasting from 3 to 5 days each occurrence. However, a Park Service official who was a principal investigator on the WHITEX study told us that the effect of NGS' emissions on visibility impairment at the Grand Canyon during these episodes can be mitigated by local weather patterns. The official explained that, due in part to local weather conditions, the most severe effects occur approximately two to three times per winter, lasting from 5 to 7 days each time. This official explained that visibility can be impaired during these winter weather conditions both because of naturally occurring impairment—mist, fog, clouds—and because of man-made sources, primarily NGS. However, the official noted that photographic and air monitoring data show that the impairment from man-made sources can continue for several days after the naturally occurring conditions have dissipated. In addition, the evidence indicates that impairment from man-made sources is perceptible even on some days that include natural impairment. In addition to improvements during certain winter weather conditions, EPA also estimated visibility improvements on other winter days. These estimated improvements were measured in terms of "changes in contrast," which, like visual range, is a method of measuring visibility improvements. EPA defined "contrast" as the percentage difference between the brightness of a scenic element and its background. Using this method, EPA estimated that reducing NGS' sulfur dioxide emissions by 90 percent could result in at least a "perceptible" change in visibility conditions (defined as a 4-percent change in contrast) on approximately 100 days during the winter. EPA later dropped these estimates because of an error in the calculations. The plant owners attempted to correct this error and estimated 54 days of at least a perceptible change. Later, using the results of their own visibility study, the owners reduced this estimate to 6 days.
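EPA's exact formula is not reproduced in this report, but a conventional formulation of contrast consistent with EPA's description is (our illustration):

\[ C = \frac{B_{\text{element}} - B_{\text{background}}}{B_{\text{background}}} \times 100\%, \]

where B denotes brightness; on this definition, a "perceptible" change corresponds to a shift in C of about 4 percentage points.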
EPA also relied on other studies and analyses in calculating the degree of visibility improvement that could result from reducing NGS' sulfur dioxide emissions. These included a review of the WHITEX study by a committee established by the National Academy of Sciences' National Research Council and a separate visibility study conducted by the plant owners. After reviewing the techniques and data used in the WHITEX study, the committee concluded that, on some days during the study period, NGS contributed significantly to visibility impairment in the Grand Canyon. However, the committee also concluded that the WHITEX study was not sufficient to support a quantitative determination of the exact fraction of visibility impairment at the Grand Canyon attributable to NGS. The power plant owners' study found a lesser impact on visibility in the canyon, estimating that the average wintertime visual range would improve by no more than 2 percent as a result of reducing NGS' sulfur dioxide emissions by 90 percent. In reviewing this information, EPA concluded that there was reasonable agreement between the plant owners' study and the WHITEX study. EPA noted that the major difference is that the WHITEX study concluded that peak impairment conditions occur more frequently, and that nonpeak impairment is greater than zero more often, than the plant owners' study found. EPA identified additional benefits from reducing sulfur dioxide emissions by 90 percent that suggest there may be more than a 7-percent improvement in the winter seasonal average visibility. These benefits include a greater visibility improvement below the rim of the Grand Canyon, improvements in seasons other than winter at the Grand Canyon, and year-round improvements at other nearby national parks. First, EPA's estimated 7-percent improvement in the winter seasonal average visibility did not reflect the more pronounced improvement expected below the rim of the canyon, where the air may be more affected by NGS' emissions. The National Research Council committee's review of the WHITEX study noted that meteorological evidence, still photographs, and time-lapse video suggested that sulfur concentrations (indicative of plant emissions) in the canyon might have been considerably greater than those observed at the monitoring station used during the WHITEX study, which was located at the rim of the canyon. The Park Service subsequently established an air monitoring station within the canyon, and, from its results, EPA found that visibility impairment was worse in the canyon than had been measured at the rim. EPA said that it did not quantify the additional visibility improvement expected below the rim of the canyon because of the limited amount of data available and a limited understanding of the air movements below the rim. Second, EPA's estimated 7-percent improvement in the winter seasonal average visibility did not reflect benefits in seasons other than winter at the Grand Canyon or throughout the year at other nearby national parks. EPA explained that, on the basis of information received during its public comment period, emissions from NGS may significantly impair visibility year round at the Grand Canyon as well as at other national parks in the region. For example, a study prepared by the Grand Canyon Trust, which modeled emissions from NGS over a 5-year period, indicated visibility impairment at the Grand Canyon in seasons other than winter. Furthermore, the study suggested that the emissions could impair visibility in surrounding national parks between 60 and 80 percent of the time year round. EPA said that the emissions controls required by the final rule would significantly reduce, if not eliminate, NGS' contribution to visibility impairment in nearby national parks. Both EPA and the plant owners used contingent valuation to estimate the monetary value of the visibility improvements expected from reducing sulfur dioxide emissions from NGS. EPA estimated annual nationwide values ranging from $90 million to $200 million; the plant owners estimated a nationwide value of $2.3 million. Although both studies relied on the same methodology, they were technically different. EPA's estimate was extrapolated from existing research because EPA was under a court-ordered deadline to complete the rulemaking and therefore did not have time to conduct original research to estimate the monetary value of visibility improvements at the Grand Canyon National Park. Unlike EPA, which relied on existing research, the owners specifically designed contingent valuation research to estimate the monetary value of the visibility improvements they expected from emissions controls at the plant. Nonetheless, the owners did not complete their study for several reasons, including time constraints. Instead, they used pilot study results to estimate an annual nationwide value of the visibility improvements they expected to occur.
Neither study's results were used as a basis for EPA's final rule establishing an emissions limit because, as a result of the negotiated agreement, project costs dropped below the $100 million threshold that would have required such an estimate. EPA set out to estimate the monetary value of visibility improvements to comply with the terms of Executive Order 12291. This order provided that, to the extent permitted by law, agencies should not take regulatory action unless the potential benefits to society outweighed the potential costs to society. The order required agencies, including EPA, to prepare a regulatory impact analysis that included a cost-benefit analysis for proposed rules that, among other things, were likely to result in an annual effect on the economy of at least $100 million. In such cases, an agency's analysis was required to describe the benefits—expressed in monetary terms, if possible—as well as the potential costs. If the analysis did not show that benefits exceeded costs, the agency was to explain any legal reasons why the regulation should still be promulgated. When EPA first proposed the rule requiring emissions controls at NGS, it believed that the cost-benefit analysis was required because the annual cost was thought likely to exceed $100 million (estimates ranged from $91.9 million to $128.3 million). By the time EPA issued its final rule, however, the estimated annual cost—as a result of the negotiated agreement—had decreased to $89.6 million. Accordingly, the Office of Management and Budget exempted EPA from the requirements for a regulatory impact analysis, including a cost-benefit analysis. When EPA began the cost-benefit analysis, it faced court-ordered deadlines to complete the rulemaking. As a result, EPA effectively had less than 6 months to complete its analysis and did not have time to conduct original research to estimate the monetary value of limiting the plant's emissions. Instead, EPA estimated the value of limiting these emissions by extrapolating from the results of earlier contingent valuation research that sought to value the benefit of reducing air pollution at national parks across the country, including those in the Southwest. EPA, in its proposed rule, estimated that the monetary value of visibility improvements would range from $1.30 to $2.50 annually per U.S. household. Later, to reflect the revision of its visibility improvement estimate from approximately 14 percent to approximately 7 percent, EPA decreased its annual household value to between $0.75 and $1.75. EPA estimated that the nationwide monetary value would range from $90 million to $200 million in the year 2000. The owners also used contingent valuation to estimate the monetary value of visibility improvements, in response to EPA's use of the existing study and monetary value estimate. Unlike EPA, which relied on existing related research, the owners specifically designed their study to value the visibility improvements they expected from emissions controls at the plant. Nonetheless, the owners did not complete their research because they did not see value in doing so and because of time and resource constraints. Therefore, the owners' estimated value of expected visibility improvements was based on the results of a pilot test of a proposed survey instrument. The owners' study estimated the national value of visibility benefits to be $2.3 million, which equates to about $0.023 per U.S. household.
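The owners' per-household and nationwide figures imply each other directly; using the roughly 100 million U.S. households cited in appendix I (our arithmetic):

\[ \$0.023 \text{ per household} \times 100{,}000{,}000 \text{ households} \approx \$2.3 \text{ million.} \]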
Appendix III discusses similarities in the two contingent valuation studies and their specific technical differences. We provided a draft of this report to the Department of the Interior and to the Environmental Protection Agency for their review and comment. In written comments, Interior officials said that they found the report to be generally accurate and a fairly balanced summary of certain technical aspects of EPA’s decision to require emissions reductions at NGS. (See app. IV.) We received comments from directors of two EPA offices: the Director of the Office of Policy Analysis and Review, representing the Acting Assistant Administrator of the Office of Air and Radiation, and the Director of the Office of Economy and Environment, representing the Assistant Administrator of the Office of Policy, Planning, and Evaluation. EPA’s Office of Air and Radiation said that the report was generally accurate and complete. EPA’s Office of Policy, Planning, and Evaluation raised concerns about our discussion of contingent valuation methodology (see app. I) and our comparison of the contingent valuation studies conducted by EPA and the plant owners (see earlier in this letter and app. III). Both offices also suggested technical clarifications, which we incorporated as appropriate. Office of Policy, Planning, and Evaluation officials said that appendix I of our report gives undue attention to the guidelines of a blue-ribbon advisory panel convened by the National Oceanic and Atmospheric Administration, which they believe implies that the recommendations have some relation to EPA’s use of contingent valuation. The officials also said that our report does not give a balanced view of contingent valuation and places too much emphasis on arguments critical of contingent valuation. The officials suggested that we include a reference to a specific article by a prominent researcher in support of contingent valuation, a reference to comments EPA has made on using contingent valuation to assess natural resource damages, and arguments to counter the advisory panel’s guidelines regarding surveys and formats used in eliciting information from survey respondents. As we state in our report, we are not taking a position on the appropriateness of contingent valuation. Appendix I provides a brief overview of the contingent valuation method, including public policy uses, historical development, characteristics of a contingent valuation study, criticisms, and some further issues. In this context, we summarize the advisory panel’s guidelines because we believe that the panel’s deliberations represent valuable critical and impartial thinking related to contingent valuation. The appendix does not evaluate the merits of the advisory panel’s guidelines or of various arguments for or against the use of contingent valuation methodology. However, to make our presentation more complete, we made minor modifications to the text, added a reference to the article recommended by EPA, and expanded our discussion of alternative survey modes. We did not add the other information suggested by EPA because it is beyond the scope of this appendix. Office of Policy, Planning, and Evaluation officials also said that appendix III of our report does not suitably explain the reasons for differences in EPA’s and the owners’ contingent valuation studies and the appropriate interpretations of these differences. 
Without such explanation, the officials believe that a reader may erroneously conclude that there is something wrong with the reliability of the method. EPA suggested that we not directly compare the studies because neither was pursued to the point where any useful comparisons could be made, and that we emphasize what EPA believes to be the more important problems with the owners’ study, such as incomplete documentation, questionable statistical techniques, and a sample size that was too small. As we note in our report, appendix III describes the similarities in the two contingent valuation studies and specific technical differences between them—in their purpose, design, and implementation—which led to their different estimates of nationwide values. Because neither study used sampling strategies that would allow nationwide projections, we question the certainty of both studies’ estimates of nationwide values. Some of the differences in these studies added uncertainty to their estimates of nationwide values. Wherever information was available, we point out the reasons for these differences and the impact they had on both studies’ results; however, in some instances, neither study had sufficient information, and further testing would be needed to determine the effects of each difference on the estimates. It was not our intent to complete or refine either study to provide a valid nationwide projection, but merely to point out how each study was conducted and why they produced different results. Nevertheless, to clarify that the studies were done separately, we made minor revisions to the text of this letter. To obtain information for this report, we reviewed EPA’s documents on its NGS regulatory action. The information included numerous analyses of the plant’s effects on visibility at the Grand Canyon and the visibility improvements that might be expected from the addition of emissions controls. The information also included analyses on the economic costs and benefits of emissions controls. We supplemented this information through discussions with officials of various federal agencies: EPA, the Department of the Interior and its Bureau of Reclamation and National Park Service, the National Oceanic and Atmospheric Administration, and the Department of Energy’s Western Area Power Administration. We reviewed and compared two contingent valuation studies that estimated the monetary value of expected visibility improvements from emissions controls. One of the studies was conducted by RCG/Hagler, Bailly, Inc., and was the basis for EPA’s estimates. The other was conducted for the NGS owners by Decision Focus, Incorporated. We also interviewed officials of the Salt River Project; Decision Focus, Incorporated; RCG/Hagler, Bailly, Inc.; the Navajo Nation; the Environmental Defense Fund; the Grand Canyon Trust; the Grand Canyon Visibility Transport Commission; Air Resource Specialists, Inc.; Northern Arizona University; and others. To describe the contingent valuation methodology, we searched and reviewed economic literature. We conducted our review from January through December 1997 in accordance with generally accepted government auditing standards. While we did not independently verify or test the reliability of data provided by the agencies or the plant owners, EPA used this information in reaching its regulatory decision. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. 
At that time, we will send copies to the Ranking Minority Member of the Committee; the Administrator, EPA; the Secretary of the Interior; and other interested parties. We will also make copies available to others on request. If you or your staff have any questions, please call me at (202) 512-3841. Major contributors to this report are listed in appendix V. The contingent valuation method uses surveys to ask respondents for information that can be used to estimate how much they—and often, by extension, society—are willing to pay for a certain program or policy, such as one designed to improve the quality of an environmental or natural resource amenity. Proponents of contingent valuation methodology believe that it is a valuable technique for making inferences about these values, particularly in cases in which consumer behavior is not (easily) observed. However, the use of the contingent valuation method has been the subject of controversy, particularly in applications involving non-use values. This appendix provides a brief overview of the contingent valuation method. The first section describes the public policy uses of the method and aspects of its historical development. The second section describes the characteristics of a contingent valuation study and presents suggestions, made by a blue-ribbon panel of social scientists, intended to improve contingent valuation practice. The third section discusses some of the criticisms that have been leveled at the contingent valuation method. The final section discusses some further issues related to the use of contingent valuation, including aspects of its application to regulatory proceedings. Contingent valuation studies use surveys to elicit information about how much people would be willing to pay for particular goods or services. These values can be important in estimating benefits in a wide variety of public policy contexts, including those that require regulatory or environmental impact analyses. In many instances, economic benefits can be estimated using information on market prices and quantities, because under certain conditions price and quantity data can be used to estimate the underlying values held by consumers. In other cases, however, often involving natural resources or environmental goods, complete market information may not be available. This could be because markets do not exist at all, as in the case of public goods, or because consumers combine their time with purchases in markets for complementary goods needed to undertake, for example, a recreational experience. If the values people hold for these goods are not considered in policy decisions, then less desirable resource management outcomes may occur. As a general proposition, asking people how much they value a particular item seems a direct way of estimating the value they place on it. However, economists have generally been skeptical of this approach and have historically viewed market-based methods, or so-called "revealed preference" methods in which actual spending decisions can be observed, as inherently superior to "stated preference" methods. Nevertheless, there are many instances in which no behavioral patterns exist through which consumers reveal the values they hold.
In such instances, the contingent valuation method can be thought of as a valuation exercise in which a "contingent," or hypothetical, market is described for the purpose of replicating the consumer choice framework that generates values for traditional market goods. That is, the approach attempts to create a market-based choice context for goods without (complete) markets, such as public or quasi-public goods, so that through their choices people will reveal their preferences much as they do when making actual spending decisions. Contingent valuation practice developed using theory and practice from different disciplines, especially economics and survey research. A prominent resource economist, Ciriacy-Wantrup, is generally credited with the suggestion of asking people directly for the values they placed on natural resource programs with public good aspects. The first practitioner of what is now known as contingent valuation was Robert K. Davis, who used questionnaires as one way to estimate the values people placed on recreational experiences in Maine. The theory and practice of contingent valuation continued to develop in the 1960s and 1970s, and most of the first applications were to resource and environmental issues. During this period, many contingent valuation studies also examined underlying research issues. Some of this research worked toward grounding contingent valuation within the economic theory of consumer behavior. For example, economic theory includes many well-understood relationships involving a consumer's utility, income, and expenditures, and the conditions under which the concept of willingness to pay is an appropriate measure of underlying value. Also, advances in cognitive psychology contributed to understanding the possible biases in respondents' answers that may result from such things as the choice of wording or the order of questions. Furthermore, researchers gained practical experience in designing, implementing, and analyzing contingent valuation studies. By the 1990s, researchers had performed hundreds of such studies. The federal government sponsored many of them, as various federal agencies performed and funded contingent valuation studies and general research on contingent valuation. These agencies included the U.S. Army Corps of Engineers, the Department of the Interior, and the Environmental Protection Agency (EPA). EPA in particular was interested in the analytical potential of contingent valuation in a variety of environmental regulatory contexts in light of Executive Order 12291 (and its successor), which required executive branch agencies to examine more systematically the costs and benefits of certain proposed regulations. In EPA's case, this involved the use of contingent valuation to estimate the benefits associated with various pollution control regulations. Although contingent valuation is a methodology that can be used for different purposes, it has become inextricably linked with the measurement of non-use values. Interest in non-use values has been heightened in part because of the possibility that they may be considered in resource damage assessment contexts. The federal government, in its role as trustee, may include non-use values when calculating damages to be recovered through litigation.
The Comprehensive Environmental Response, Compensation and Liability Act of 1980 (CERCLA), or Superfund, provided government officials the right to sue on behalf of the public for resource damages resulting from the release of hazardous materials. The Congress directed the President, who delegated the responsibility to the Department of the Interior, to develop regulations applicable to resource damage assessment. After a number of groups challenged the regulations, a federal appeals court upheld Interior's adoption of contingent valuation methodology for assessing damages to natural resources and directed Interior to revise its rule to avoid limiting the role of non-use, or "non-consumptive," values in the calculation of damages. The grounding of the Exxon Valdez led to the passage of the Oil Pollution Act of 1990, which required the Department of Commerce, acting through the National Oceanic and Atmospheric Administration (NOAA), to develop regulations governing damage assessment. The Exxon Company USA, which could be subject to liability under the provisions of the Oil Pollution Act, sponsored research concerning contingent valuation, much of which was critical of the ability of contingent valuation to measure non-use values accurately. In an overview of contingent valuation practice, a leading resource economist stated that while there is no "standard approach," contingent valuation studies typically include three general features. First, a contingent valuation study contains descriptions of the policy or program at issue and its likely environmental effects, so that respondents can understand the good they are valuing. Second, a contingent valuation study contains a framework or mechanism for eliciting willingness to pay. Several mechanisms have been used in contingent valuation studies, such as open-ended questions (How much would you be willing to pay?), payment cards (Select an amount from a list of options.), and referendum formats (Would you vote for the described proposal if your taxes increased by $10?). Third, a contingent valuation study may gather information on socioeconomic variables and attitudes about the environment; this information can be used to estimate willingness-to-pay functions using econometric techniques. Researchers have developed many methods to implement contingent valuation studies within this broad framework. Additionally, within the context of the method's development, there have been analytical debates over the merits of particular aspects of contingent valuation practice. The Exxon-sponsored research represented a change in the discussion of contingent valuation issues in that much of it was carried out by economists and others who were not primarily specialists in natural resource and environmental issues. These researchers raised some new issues and placed new emphasis on other issues that had been subject to ongoing analytical debate. As part of the process of developing its regulations related to oil spill damages, NOAA convened a blue-ribbon advisory panel to address a variety of issues, including the fundamental question of whether the contingent valuation method was capable of providing reliable estimates of non-use values for use in resource damage assessments.
The panel's report stated that contingent valuation "can produce estimates reliable enough to be the starting point of a judicial process of damage assessment, including lost passive-use (non-use) values." Although NOAA was concerned with the use of contingent valuation in the damage assessment context, the NOAA guidelines have applicability to the contingent valuation method more generally. We refer to them because we believe that the NOAA panel's deliberations represent valuable critical and impartial thinking related to improving the use of contingent valuation. The panel listed some guidelines for producing credible studies and noted some strong concerns about the results of some contingent valuation studies that it reviewed. Although its conclusion gave credence to the views of those who favor the use of the contingent valuation method, adherence to the panel's suggestions would likely require changes in contingent valuation practice, in that none of the studies the panel reviewed had been carried out to its suggested standards. The panel's report listed a number of suggestions for producing high-quality contingent valuation studies. Some of these suggestions pertained to the importance of the underlying survey research in contingent valuation studies, in which the survey instruments often have to provide a substantial amount of background material in a manner that is accessible to the respondents. The panel suggested (1) using probability sampling and appropriate statistical sampling procedures, (2) subjecting the survey instruments to pretesting, and (3) taking steps to reduce nonresponse rates. Additionally, the panel suggested that contingent valuation studies disclose information on the sample selection process and provide information on survey instruments and responses. The panel stated a strong preference for in-person surveys as superior to telephone or mail surveys; its report stated that it is "unlikely that reliable estimates of values could be elicited with mail surveys." The panel also suggested that it was desirable to pretest any photographs that would be used to convey information to respondents. In terms of the elicitation format, the panel suggested that the referendum format, as opposed to open-ended elicitation, was desirable. In its basic form, a referendum-format contingent valuation study describes a proposal to provide a specific improvement in an environmental good, and the survey respondents are asked whether they would support this proposal as if it were a referendum item to be voted on. As part of the proposal, a "payment vehicle" is described, such as a tax increase or a utility bill increase, and each respondent is given a specific per-person (or per-household) dollar amount that the proposal will cost. The voting question is a dichotomous choice ("yes" or "no"), and, in conjunction with other information gathered in the survey, such as environmental attitudes and income level, econometric techniques appropriate to dichotomous choice situations can be used to derive a measure of willingness to pay for the described proposal from the observed pattern of yes and no votes. Supporters of the referendum model argue that it creates a contingent market mechanism with which consumers are familiar. First, consumers are familiar with "posted price" market choice contexts. Second, the referendum format itself is familiar to people as a method of expressing political preferences.
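To make the econometric step concrete, the following minimal sketch (with hypothetical bids and votes, not data from any study discussed in this report) fits a logit model to referendum-style responses. Under a simple linear specification, the median willingness to pay is the bid price at which the probability of a "yes" vote is 50 percent, which works out to -a/b, where a is the fitted intercept and b is the fitted bid coefficient.

import numpy as np
import statsmodels.api as sm

# Hypothetical referendum data: each respondent sees one bid price and votes.
bids = np.array([5, 5, 10, 10, 20, 20, 40, 40, 80, 80], dtype=float)
votes = np.array([1, 1, 1, 0, 1, 0, 1, 0, 0, 0])  # 1 = would vote "yes"

# Fit P(yes) = 1 / (1 + exp(-(a + b*bid))); b should be negative, since
# higher bid prices make a "yes" vote less likely.
X = sm.add_constant(bids)
fit = sm.Logit(votes, X).fit(disp=0)
a, b = fit.params

# Median willingness to pay is the bid at which P(yes) = 0.5, i.e., -a/b.
print(f"Estimated median willingness to pay: ${-a / b:.2f}")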
The panel was concerned that steps be taken so that the results of contingent valuation studies conform to common notions of economic rationality. The NOAA panel endorsed the use of follow-up questions asking respondents the reasons they voted the way they did, as well as questions designed to test how well the respondents understood the program or policy at hand. The panel also suggested that survey respondents be reminded that paying for the non-use good at issue would leave a smaller budget to spend on other goods and services and that they be told of any available substitutes. One aspect of rationality is that, generally speaking, people are willing to pay more for greater amounts of a good. In its deliberations, the panel had concerns about evidence presented in one contingent valuation study that estimated willingness to pay "for the cleanup of all lakes in Ontario was only slightly more than willingness to pay for cleaning up lakes in just one region" and in another study that estimated "willingness to pay to take measures to prevent 2,000 migratory birds (not endangered species) from dying in oil-filled ponds was as great as that for preventing 20,000 or 200,000 birds from dying." The panel suggested that a contingent valuation study demonstrate its sensitivity to these so-called "scope effects." Some economists and other analysts have voiced criticisms of contingent valuation methods. An overarching concern among some observers is that contingent valuation does not adequately capture true estimates of willingness to pay. One component of this criticism is that respondents make choices but that these choices do not require real economic commitments. Also, particularly with respect to non-use values, critics argue that it can be difficult for respondents to comprehend a particular environmental or resource valuation issue or to distinguish what researchers envision as a well-defined specific issue from a more general "warm glow" effect. Furthermore, some critics argue that the statistical estimation process by which willingness-to-pay estimates are produced from survey responses can be imprecise. At the same time, proponents of contingent valuation have made arguments that respond to many of these criticisms. One criticism of the contingent valuation method is that contingent markets do not create choice contexts with the binding budget constraints and financial consequences associated with "real" choice contexts. In general, the issue is that by actually spending a certain amount of money, an individual or household can no longer spend that money on something else; thus, the goods and services that are purchased presumably represent the true preferences of the individual or household. In contrast, responding yes to a contingent valuation question does not financially bind the respondent in the same way. Proponents of the contingent valuation method have also been concerned with this issue and suggest that appropriate steps in survey design can reduce the problem. Others maintain that the existence of opportunities for strategic misrepresentation, among other problems, reduces the usefulness of the contingent valuation method. In the words of one critic, consumers are ". . . wired differently than the economic model of fully formed, stable, rational preferences requires.
While the consumer's wiring may produce patterns of market behavior that will often be approximated well by the economist's model, when we approach the consumer from a different angle, asking direct and unusual questions about values, we find alarming variations from the standard economist's story. All these consumers, so normal and rational on the outside, are revealed to be shells filled with vast rule-books of heuristics written by natural selection. Throw these people a curve ball, in the form of a valuation question that fails to fit a standard heuristic for market response, and the essential mindlessness of the organism is revealed." Critics have also argued that estimates produced by contingent valuation studies may not be limited to values of the specific environmental amenity under consideration but may also incorporate a variety of broader values. The NOAA panel recognized the concern that contingent valuation estimates may contain a "warm glow" component associated with supporting worthy causes. One additional criticism is that resulting estimates of willingness to pay can be particularly sensitive to the statistical methods used. One analyst examined a variety of statistical issues in contingent valuation estimation and concluded that the estimates were sensitive to context effects, including anchoring effects, as well as to how statistical outliers were handled. Proponents of contingent valuation have responded to many of the arguments developed by critics. In particular, a prominent contingent valuation researcher has written an overview article that provides many general arguments in favor of contingent valuation, as well as a point-by-point discussion of several specific issues raised by critics of contingent valuation. Many observers believe that the use of contingent valuation is likely to continue to grow. Some aspects of the contingent valuation method that are not entirely analytical may also influence the future path of its use in regulatory and damage assessment proceedings. One aspect concerns potential problems with incorporating evolving scientific understanding of the specific environmental issues crucial to a given policy evaluation into survey instruments that take time to develop, implement, and analyze. Another aspect involves consideration of the geographic extent of the affected population. A further issue concerns the "calibration" of willingness-to-pay estimates for use in regulatory or damage assessment proceedings. Additionally, some practitioners of contingent valuation are concerned that some of the specific recommendations of the NOAA panel may inappropriately preclude other analytical alternatives that may prove to be superior or more cost-effective. Federal regulatory actions often trigger specific requirements and may involve deadlines. For instance, the National Environmental Policy Act (NEPA) of 1969 requires federal agencies to prepare an environmental impact statement if a proposed federal action is likely to significantly affect environmental quality. Although neither NEPA nor its implementing regulations require non-use values to be considered, non-use values have been considered in NEPA proceedings. A contingent valuation study requires an accurate description of the likely change in an environmental amenity, which in turn requires careful consideration of the underlying environmental impacts, perhaps including anthropological, atmospheric, biological, and physical components.
In some contexts, much of the underlying scientific information may have to be developed during the environmental impact statement process. Because there are many steps required to develop, implement, and analyze survey instruments, there is a chance that the willingness-to-pay estimates will be produced on the basis of descriptions of expected environmental impacts that do not accurately reflect later scientific understanding, or that regulatory decisionmaking time frames are lengthened as that information is incorporated. In other regulatory contexts, such as the one involving the Navajo Generating Station (NGS), court-imposed deadlines may influence not only a decision to undertake a contingent valuation study, but decisions as to how underlying scientific understanding is incorporated into the survey research process. If a particular policy action is controversial or disputed, the accuracy of the underlying description of environmental impacts is likely to be challenged as leading to inaccurate calculations of willingness to pay for those improvements. Much of the analytical discussion focuses on estimates of per person or per household willingness to pay, and how sensitive or robust such estimates may be to particular choices in underlying description or analytical technique. However, for use in benefit-cost analysis or in estimating damage assessments, the issue of how many people are affected—for instance, how many people are assumed to have non-use values—is important in calculating gross benefit numbers. For contingent valuation estimates of recreation values, samples of recreationists offer a fairly straightforward way of defining the relevant population. For non-use values, the choice of the relevant population may not be so clear. For resources of national significance, researchers may reasonably consider that the national population is the relevant population and may design a study on the basis of that premise. In other cases, the answer is less clear. In any event, it is possible to generate a large benefit number when even fairly small estimates of willingness to pay are multiplied by 100 million, approximately the number of households in the country. Some observers have argued that contingent valuation estimates of willingness to pay need to be adjusted, or calibrated, because of the inherent limitations. In its deliberations, the NOAA panel reported that it was “persuaded that hypothetical markets tend to overstate willingness to pay for private as well as public goods” and that the same bias would be likely to occur in contingent valuation studies. In its proposed rule, the Department of Commerce (NOAA) recommended a 50-percent calibration factor to adjust for biases of unknown magnitude but of an upward direction. Although a comparison of contingent valuation estimates with other estimates is not possible for non-use values, some researchers have more recently compared contingent valuation estimates with “revealed preference” estimates in a number of studies for which both kinds of estimates were produced. The researchers examined a variety of recreation studies and also cases in which amenities might be capitalized into an asset price, such as a price premium a house with a beautiful view might command over a similar house without the view. The authors located 83 studies that provided 616 comparisons of contingent valuation to revealed preference estimates. 
The authors reported that contingent valuation estimates were “smaller, but not grossly smaller, than their counterparts.” Although some contingent valuation estimates were larger than their counterparts, the authors concluded that suggestions for a routine downward adjustment of contingent valuation estimates appear unwarranted. Some advocates for the use of the contingent valuation method have voiced concern over some of the NOAA panel’s suggestions. In particular, the panel’s strong preference for in-person surveys over mail surveys has been criticized by proponents of mail surveys, as has the panel’s preference for the referendum format. The panel’s preference for in-person surveys had much to do with the fact that sampling frames available for mailing provide incomplete coverage of the national population. The panel also was concerned that targeted respondents can review the subject of the questionnaire before deciding to respond, so those most interested in the subject may choose to respond. Proponents of mail surveys counter that other survey methods, such as in-person interviews, also have their drawbacks, such as problems caused by the presence of an interviewer, which may bias responses, or pressures on respondents to answer quickly while the interviewer is present. They also add that mail surveys of large samples offer significant cost savings over in-person interviews. The panel’s preference for the referendum format was based on a number of factors, including the fact that people are “rarely asked or required in the course of their everyday lives to place a dollar value on a particular public good.” Even though open-ended elicitation is not familiar, some researchers point to results from experimental economics indicating that posted-price choice contexts perform poorly relative to open-ended contexts in “early rounds” of bidding situations in which respondents are not experienced. Given that respondents are not likely to be well informed in many contingent valuation contexts (at least for non-use goods), these researchers argue that the experimental finding that people overpay in early rounds suggests that a “one-round” referendum may lead to an overstated willingness to pay. Other research suggests that the specific price that a referendum survey respondent is confronted with—the bid price—may lead to anchoring effects, so the resulting willingness-to-pay estimates may be too high. In contrast to the typical practice in which bid prices are distributed randomly to respondents, this research suggests that some initial investigation incorporating open-ended valuations could be useful in avoiding the assignment of high bid prices to respondents with low values (and vice versa).
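Whichever format is used, the yes/no answers a referendum survey produces must eventually be converted into a dollar figure. A minimal sketch of one common, deliberately conservative conversion for single-bid referendum data (a Turnbull-style lower bound) is shown below; the bid levels and response shares are invented for illustration and come from neither of the studies discussed in this report.

```python
import numpy as np

# Hypothetical single-bid referendum responses: each respondent sees one
# assigned bid price and answers yes or no. Shares must be non-increasing
# in the bid; pooling adjacent bids is needed if sampling noise violates this.
bid_levels = np.array([1.0, 5.0, 10.0, 25.0, 50.0])
share_yes  = np.array([1.0, 0.5, 0.5, 0.5, 0.0])   # fraction answering yes at each bid

# Turnbull lower-bound estimate of mean WTP: value each drop in the "yes"
# survival curve at the lower end of its bid interval.
survival = np.append(share_yes, 0.0)        # nobody assumed to pay beyond the top bid
drops = survival[:-1] - survival[1:]        # probability mass shed between adjacent bids
lower_bound = np.sum(bid_levels * drops)
print(f"conservative mean WTP estimate: ${lower_bound:.2f}")
```

The estimate is conservative by construction: each respondent is credited only with the highest bid he or she is known to have accepted, which is one way of acknowledging the upward biases the panel worried about.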
The largest benefit EPA expected to occur at the Grand Canyon National Park as a result of reducing emissions from NGS was an improvement in visibility during certain winter weather conditions. These conditions are expected to occur approximately 10 to 21 days each winter. EPA initially estimated that reducing the sulfur dioxide emissions by 90 percent would improve the winter seasonal average visibility by approximately 14 percent. These estimated improvements were based in part on a study of visibility impairment in the vicinity of the Grand Canyon by the National Park Service (NPS). EPA revised its estimate to approximately 7 percent after considering information from other studies that suggested that NGS has a lesser effect. EPA noted that its revised estimate may be understated because of other unquantified visibility improvements. The Clean Air Act requires that, upon a finding that it is reasonable to anticipate that an emissions source may be causing or contributing to the impairment of visibility in certain national parks or wilderness areas, the relevant state or EPA determine an emissions limit for the source that reflects the best available retrofit technology (BART). In determining an emissions limit, EPA is required to take into consideration, among other things, the costs of reducing emissions and the degree of improvement in visibility that may reasonably be anticipated to result from the use of such technology. In September 1989, EPA proposed to attribute visibility impairment in the Grand Canyon to emissions from NGS and, as a result, was required to carry out a technology assessment of NGS. EPA’s determination of an emissions limit relied on data from an NPS study (the National Park Service Report on the Winter Haze Intensive Tracer Experiment [WHITEX]) of visibility impairment in the vicinity of the Grand Canyon. WHITEX was designed to evaluate the ability of a variety of modeling approaches to attribute visibility impairment to a single source, NGS. Specifically, various models were to be evaluated on their ability to link NGS’ emissions to visibility impairment at the Grand Canyon and other nearby national parks. According to WHITEX, wintertime meteorological conditions in the area are characterized by several periods of stagnation in which air pollutants can be trapped by a persistent thermal inversion, resulting in a distinct visible surface haze layer. Although several earlier investigations had been conducted to determine the origins of the haze, WHITEX was a more comprehensive effort to address persistent questions about the nature and sources of the winter haze conditions. According to EPA and NPS officials, the emissions from NGS can have the largest effect during certain weather conditions that include (1) high relative humidity, which facilitates the conversion of the plant’s gaseous sulfur dioxide emissions to visibility-impairing sulfate particles, and (2) wind patterns that transport the emissions to the Grand Canyon. EPA estimated that these conditions occur between 10 and 15 times per winter, lasting from 3 to 5 days each occurrence. However, NPS officials explained that the effect of emissions from NGS on visibility impairment at the Grand Canyon during these conditions can be mitigated by local weather patterns. According to these officials, due in part to local weather conditions, the most severe effects occur approximately two to three times per winter, lasting from 5 to 7 days each time. These officials explained that visibility can be impaired during these winter weather conditions because of naturally occurring impairment—mist, fog, clouds—and because of man-made sources, primarily NGS. However, the officials noted that photographic and air monitoring data show that visibility impairment from man-made sources can continue for several days after the naturally occurring conditions have dissipated. In addition, the evidence indicates that impairment from man-made sources is perceptible even on some days that include natural impairment.
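The frequency and duration ranges reported above imply, with simple arithmetic, how much of the winter is potentially affected. The sketch below multiplies those ranges; it is an illustration of the reported figures, not an analysis either agency published, and it assumes (hypothetically) that episodes do not overlap.

```python
# Rough bounds on winter days affected by the haze-conducive conditions,
# using the frequency/duration ranges quoted above. Assumes episodes do
# not overlap, which the agencies did not state; illustrative only.
winter_days = 151  # November through March, the 5-month season EPA averaged over

for label, (episodes, duration) in {
    "EPA, all episodes (low)":  (10, 3),
    "EPA, all episodes (high)": (15, 5),
    "NPS, most severe (low)":   (2, 5),
    "NPS, most severe (high)":  (3, 7),
}.items():
    days = episodes * duration
    print(f"{label}: ~{days} days, {days / winter_days:.0%} of the winter")
```

Note that the NPS “most severe” range works out to roughly 10 to 21 days, consistent with the 10-to-21-day figure for the relevant winter conditions cited above.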
WHITEX, carried out during January and February of 1987, relied on injecting a unique chemical into NGS’ smokestack and tracking this chemical to air monitoring stations that were placed around the region, including at the Grand Canyon. The study concluded that NGS was the single largest contributor to visibility impairment in the Grand Canyon during the days for which air monitoring data were available. WHITEX’s results indicated that, for days on which air monitoring data were available, NGS contributed approximately 40 percent on average to wintertime visibility impairment and approximately 60 percent to 70 percent during the worst visibility impairment conditions. EPA’s estimates of the degree of visibility improvement used WHITEX data to establish the relationship between NGS’ emissions and visibility impairment in the canyon. EPA explained that, because of the complex terrain in and around the Grand Canyon, the WHITEX data provided a more reliable estimate than the models often used to estimate improvements in visibility. Specifically, EPA used the ratio of sulfur dioxide emissions at NGS to sulfate particles in the Grand Canyon attributable to NGS. Using this ratio, EPA applied a “linear rollback” model, which used regression analysis techniques to estimate the level of visibility impairment that would result from a given level of NGS’ sulfur dioxide emissions. The model’s formula contained terms that attempted to account for, among other things, the percentage of sulfate that contributes to overall visibility impairment, the percentage of NGS’ contribution to total sulfates, and the removal rate of the control technology.
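The linear rollback structure just described can be sketched in a few lines. EPA’s actual formula is not reproduced in this report, so the three-factor decomposition and the example values below are illustrative assumptions rather than the agency’s model.

```python
# A stylized linear rollback: the total visibility improvement is treated as
# the product of (1) the share of impairment due to sulfate, (2) the share of
# that sulfate attributable to NGS, and (3) the control technology's removal
# rate. All three example values are assumptions, not figures from EPA's model.
def rollback_improvement(frac_impairment_from_sulfate,
                         frac_sulfate_from_ngs,
                         removal_rate):
    return frac_impairment_from_sulfate * frac_sulfate_from_ngs * removal_rate

print(f"{rollback_improvement(0.5, 0.4, 0.9):.0%}")  # 18% under these assumptions
```

The “linear” label refers to the proportionality this product assumes; the complication discussed next is precisely whether that proportionality holds.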
EPA’s analysis of NGS’ emissions reductions and the resulting visibility improvements was complicated in several ways. For example, EPA had to determine whether to account for the possibility that a linear relationship may not exist between NGS’ sulfur dioxide emissions and the resulting visibility impairment in the Grand Canyon. In other words, would a reduction in NGS’ sulfur dioxide emissions result in a proportional or less-than-proportional reduction in visibility impairment in the canyon attributable to NGS? EPA explained that WHITEX showed that the conversion of sulfur dioxide to visibility-impairing sulfate particles is greater in a moisture-rich environment (e.g., clouds or fog) and that the lack of such an environment tends to limit such conversion. However, EPA also explained that other studies showed that moisture-rich environments may also inhibit the conversion of sulfur dioxide to visibility-impairing sulfate particles because the compounds with which the sulfur dioxide might react (typically hydrogen peroxide and ozone) may combine first with other compounds, lessening the conversion of sulfur dioxide to the visibility-impairing sulfate particles. EPA determined that this issue was insignificant because adequate quantities of compounds, such as hydrogen peroxide, would likely exist during the winter and because other studies of trends in various parts of the country did not indicate any significant nonlinearity. However, EPA did modify its model to address another complication. This complication stemmed from the possibility that reducing sulfur dioxide emissions could increase the amount of other visibility-impairing compounds formed. EPA was concerned that, if sulfur dioxide was reduced, ammonia that would have combined with sulfur dioxide to form visibility-impairing sulfate would instead combine with nitrogen oxides, forming ammonium nitrate. EPA modified its model to account for this potential “nonlinear” complication. EPA assessed a variety of different scenarios to determine the potential visibility improvements. First, the model assessed the potential visibility improvements (in terms of visual range—expressed in kilometers) under average conditions found during the WHITEX study period that could result from emission control rates of 70, 80, and 90 percent at NGS. Second, the model assessed the potential visibility improvements under the worst-case conditions found during the WHITEX study period that could result from removal rates of 70, 80, and 90 percent. EPA assessed scenarios that assumed a linear relationship between sulfur dioxide emissions and visibility impairment and other scenarios that assumed a nonlinear relationship. Model results showed a dramatic increase in estimated visibility improvements during the worst-case conditions compared to improvements during the average conditions. For example, the results showed visibility improvements under average conditions that ranged from approximately 11 percent for a 70-percent level of emissions reduction to approximately 14 percent for a 90-percent level of emissions reduction. These figures compare to the model’s estimates of visibility improvements under worst-case conditions that ranged from approximately 60 percent for a 70-percent level of emissions reduction to approximately 94 percent for a 90-percent level of emissions reduction. Because modeled average conditions do not necessarily represent actual conditions on a given day, EPA also examined potential visibility improvements using actual data collected during the WHITEX study. In cases where total visibility impairment data were not available (because of weather conditions during the study period—i.e., during periods of cloud cover), EPA reconstructed those data from other measurements made during the study period. This analysis found visibility improvements that ranged, on average, from approximately 23 percent to 43 percent, depending on the level of emissions reduction. This level of visibility improvement was approximately 2 to 3 times higher than the estimated improvement found using average visibility conditions and approximately half the values found under the worst-case conditions. In addition to improvements during certain winter weather conditions, EPA also estimated visibility improvements on other winter days. These estimated improvements were reported in terms of “changes in contrast”; contrast, like visual range, is a method of measuring visibility. EPA defined “contrast” as the percentage difference between the brightness of a scenic element and its background. With this method, EPA estimated that, using information developed in the WHITEX study and extrapolating it to the winter period (and applying a nonlinearity factor), reducing the emissions from NGS by 90 percent would have the following effects on visibility: (1) at least a “perceptible” change in visibility conditions (defined by EPA as a 4-percent change in contrast) on approximately 100 of the winter days, (2) a “quite noticeable” change in visibility conditions (a 10-percent change in contrast) on approximately 58 of the winter days, and (3) a “very apparent” change in visibility conditions (a 20-percent change in contrast) on approximately 21 of the winter days.
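EPA’s contrast definition is easy to state in code. The brightness numbers below are arbitrary, and the report does not say whether the thresholds refer to absolute or relative changes in contrast, so the sketch assumes absolute changes.

```python
# Contrast as the report describes it: the percentage difference between the
# brightness of a scenic element and its background. Threshold labels follow
# EPA's definitions quoted above; the brightness values are arbitrary.
def contrast(scene_brightness, background_brightness):
    return abs(scene_brightness - background_brightness) / background_brightness

def describe_change(change_in_contrast):
    if change_in_contrast >= 0.20:
        return "very apparent"
    if change_in_contrast >= 0.10:
        return "quite noticeable"
    if change_in_contrast >= 0.04:
        return "perceptible"
    return "imperceptible"

before = contrast(70.0, 100.0)   # hazier day: a butte stands out less from the sky
after  = contrast(55.0, 100.0)   # cleaner day: stronger contrast
print(describe_change(abs(after - before)))  # "quite noticeable" for a 0.15 change
```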
Although these estimates illustrate the varying effect of NGS on visibility during the winter, EPA eventually dropped them. EPA explained that its calculations were in error because they did not take into account natural atmospheric scattering of light. The plant owners made similar calculations that attempted to correct for EPA’s error and also showed that differing levels of improvement can be expected during the winter months. The plant owners’ estimates showed that 54 days, rather than EPA’s estimate of 100 days, would have at least a perceptible change. Using the results of their own visibility study, the plant owners argued that reducing the emissions by 90 percent would result in (1) approximately 4 days during the winter of a perceptible improvement in visibility, (2) approximately 2 days during the winter of a quite noticeable improvement in visibility, and (3) 0 days during the winter of a very apparent improvement in visibility. The National Academy of Sciences’ National Research Council established a committee to evaluate the WHITEX study. The Council noted that one of the study’s greatest weaknesses was that no measurements of visibility impairment were made below the rim of the Grand Canyon, within the canyon itself. The Council noted that meteorological evidence, still photographs, and time-lapse video suggest that sulfur concentrations in the canyon might have been considerably greater than was observed at the monitoring station used during the study and located at the rim of the canyon. On the basis of the data presented in the WHITEX study, the Council concluded that, on some days during the study period, NGS contributed significantly to haze in the Grand Canyon. However, the review also concluded that the study was not sufficient to ascertain the quantitative contribution of NGS to haze at any given time. The authors of the WHITEX study agreed with the Council that two of the quantitative techniques used in the study could not, by themselves, be used to exactly apportion NGS’ sulfur dioxide emissions to visibility impairment at the canyon. Rather, the authors explained that they used these quantitative analytical techniques in conjunction with qualitative techniques to make reasonable estimates of NGS’ effect. Concerned with what they believed to be shortcomings of the WHITEX study, the plant owners conducted their own visibility study. This study was similar to the WHITEX study in its use of a unique tracer released through NGS’ smokestack and air quality monitoring stations around the Grand Canyon. The owners’ study found that NGS’ sulfur dioxide emissions contributed less to visibility impairment at the canyon than the WHITEX study concluded. The owners’ study estimated that a 90-percent reduction in NGS’ sulfur dioxide emissions would not improve the average visual range in winter by more than 2 percent. In reviewing the owners’ study, EPA concluded that there was reasonable agreement between the findings of this study and the findings of the WHITEX study with respect to NGS’ peak contribution to sulfate and visibility impairment at the Grand Canyon. According to EPA, the major difference was that the WHITEX study led to a conclusion that the peak visibility impairment conditions occur more frequently and that the nonpeak visibility impairment conditions are greater than zero more often than found by the plant owners’ study. NPS established an air monitoring station within the canyon, addressing one of the shortcomings of its study noted by the National Research Council—that impairment measurements within the canyon were not made.
Preliminary results from the monitoring station showed that visibility impairment in the canyon was worse than the impairment measured at the monitoring station used during the WHITEX study, which was located at the rim of the canyon. Data from the new monitoring site below the rim of the canyon confirmed that air transport and conversion processes below the rim of the canyon are sometimes decoupled from those processes above the rim. EPA also explained that photographic data taken during the WHITEX study indicated that airflow below the rim of the canyon could result in higher visibility impairment due to the trapping of pollution. EPA said that it did not quantify the expected visibility improvements below the rim of the canyon due to the limited amount of data available and a limited understanding of the air transport mechanisms below the rim of the canyon. Following a public comment period, EPA revised its estimate that reducing NGS’ sulfur dioxide emissions by 90 percent could improve the winter seasonal average visibility above the rim of the canyon from its initial estimate of approximately 14 percent to approximately 7 percent. In revising its estimate, EPA relied on the two visibility studies and other air monitoring information. EPA noted that it still believed that the primary improvement in visibility would stem from reductions of emissions from NGS during winter weather conditions. However, EPA also noted that other visibility improvements will occur, including improvements below the rim of the canyon, during seasons other than winter at the canyon, and at other national parks in the area. Therefore, EPA noted that its estimate of approximately 7 percent is likely an underestimate. EPA cited several factors that tend to make the estimate an understatement. First, EPA’s estimate did not include the more pronounced improvement that would be realized in the canyon, below the rim. EPA noted that a comparative analysis, prepared by Air Resource Specialists, Inc., of 3 years (1988 to 1991) of sulfate levels from above-rim and in-canyon air monitoring stations showed in-canyon visibility impairment up to 10 times greater than that measured on the rim and concluded that high sulfate conditions below the rim typically last from 3 to 5 days longer than those observed at the rim. In addition, the study concluded that there is a high degree of confidence that NGS is responsible for at least 90 percent of the visibility impairment at the Grand Canyon during these periods. Second, EPA noted that the principal improvement will likely come during certain wintertime weather conditions. EPA’s approximately 7-percent estimate reflects an average over the 5-month period from November through March. Since the winter weather conditions during which NGS can have its largest effect occur intermittently throughout the 5-month period, EPA expects that the visibility improvement during these winter weather conditions is substantially greater than 7 percent. Third, EPA did not estimate the expected visibility benefits to be realized during nonwinter seasons at the Grand Canyon or at other surrounding national parks. NGS is located near several national parks on the Colorado Plateau—which in addition to the Grand Canyon include Arches, Bryce Canyon, Canyonlands, Capitol Reef, Mesa Verde, and Zion. Two studies were submitted to EPA that estimated NGS’ year-round impacts on these other parks.
One study, by Air Resource Specialists, Inc., on in-canyon visibility impairment, estimated the year-round effects of NGS. The study estimated the visibility effects of NGS’ emissions for every hour from December 1985 to November 1990 and concluded that NGS’ emissions were present at the Grand Canyon (1) an average of 35 percent of the time in the winter and (2) near or above an average of 20 percent of the time 8 months of the year. The study concluded that when NGS’ emissions are not in the Grand Canyon, they are most likely affecting another national park in the area. The study estimated that NGS’ emissions are present in these other parks on average at least 50 percent of each month throughout the year. The other study, prepared by Latimer and Associates, analyzed the impact of NGS’ emissions on impairment during all seasons in these national parks (including the Grand Canyon) for the same 5-year period. The study concluded that haze impacts generally are highest in the Grand Canyon in the winter and calculated that perceptible sulfate haze impacts due to NGS’ emissions occurred in all other parks and in each season during the 5-year period modeled. The study concluded that since NGS is surrounded by national parks, the likelihood is high that at least one park is impacted at any given time. This appendix discusses two contingent valuation studies related to EPA’s 1991 regulatory action that established an emissions limit for NGS. One study was performed for EPA, the other for the plant owners. Both studies set out to value changes in visibility at the Grand Canyon, had survey instruments that were carefully designed by their researchers, and showed that people were willing to pay some amount to improve visibility at the Grand Canyon. The studies were different, however, in the specifics of what they were to value and how they went about doing so. Some of these differences added uncertainty to the studies’ results. It is less clear how the other differences affected results, and testing would be needed to determine the effects due solely to each of these differences. EPA set out to estimate the monetary value of visibility improvements in order to comply with Executive Order 12291. The order provided that, to the extent permitted by law, agencies should not take regulatory action unless the potential benefits to society outweighed potential costs to society. The order required agencies, including EPA, to prepare a regulatory impact analysis that included a cost-benefit analysis. Agencies were to do this for proposed rules that, among other things, were likely to result in an annual effect on the economy of at least $100 million. In such cases, an agency’s analysis was required to describe the benefits—expressed in monetary terms, if possible—as well as the potential costs. If the analysis did not show that benefits exceeded costs, the agency was to explain any legal reason why the regulation should still be promulgated. Cost-benefit analyses were expected to conform to guidelines developed by the Office of Management and Budget and EPA. The guidelines allowed EPA considerable flexibility in estimating its benefits. They stated, among other things, that the scope and precision of analysis should depend on the specific requirements of authorizing legislation, the quality of underlying data, the scientific understanding of the problems to be addressed through the regulation, and resource constraints at EPA. EPA, according to officials, was faced with such resource constraints.
It had, in effect, a court-ordered deadline for completing its estimate of the monetary value of visibility improvements expected from limiting the plant’s emissions. On the basis of a 1982 lawsuit filed by environmental groups and a subsequent settlement agreement and revisions to the settlement agreement between EPA and these groups, EPA was under court order at this time to determine whether a specific pollution source caused or contributed to the visibility impairment at the Grand Canyon and, if so, issue a finding to that effect by August 31, 1989. In addition, following any finding, EPA was to conduct a best available retrofit technology (BART) analysis on the identified source. And, if the analysis indicated emissions controls would improve visibility at the Grand Canyon National Park, EPA was to propose regulations requiring their installation and use in order to achieve the emissions limit representing BART. Under the court order, EPA was to complete its technology analysis by February 1, 1990. This was less than 6 months from August 31, 1989, when EPA was required to issue its finding as to whether NGS was a source of impairment. EPA concluded that it did not have time to complete original research to estimate a monetary value of the specific visibility improvements expected from emissions controls at the plant. The agency chose, instead, to extract the monetary value from the results of existing contingent valuation research related to visibility changes at the Grand Canyon. EPA’s decision to estimate benefits based on contingent valuation was, according to a former EPA official who was the project economist for this rulemaking, partially an attempt to foster a wider review of the use of the contingent valuation methodology so that, if accepted, it could be used on other environmental policy issues and regulatory decisions. The existing study, “Preservation Values for Visibility Protection at the National Parks,” was partially funded by EPA through a cooperative agreement with the University of Colorado Center for Economic Analysis and performed by the research firm of RCG/Hagler, Bailly, Inc. The existing study was designed to advance the state of the art in estimation of use and non-use values because existing methods were considered to be quite limited when the need for such values was increasing for reasons including Executive Order 12291 requirements. The researchers were Lauraine G. Chestnut and Robert D. Rowe (EPA researchers). EPA selected this study, from among others, because it included many recent methodological developments intended to respond to earlier criticisms of the contingent valuation methodology for valuing visibility conditions. EPA also selected this study because its estimates of the monetary value of visibility improvements were conservative when compared with another earlier study’s. The plant owners also used contingent valuation to estimate a monetary value for the visibility benefits expected from limiting the plant’s emissions. In response to EPA’s use of the existing study and monetary value estimate, the owners decided to conduct their own study and contracted with a research firm, Decision Focus, Incorporated, to do so. Both the study EPA used and the owners’ study set out to value visibility improvements at the Grand Canyon National Park. The studies, however, valued different degrees of visibility improvement. 
That is, the study on which EPA based its estimated monetary value was intended to value a much broader visibility issue than the wintertime visibility improvements expected from emissions controls at the plant. The owners’ study, on the other hand, set out to value wintertime visibility improvements the owners expected would result from controlling their plant’s emissions. EPA expected, as stated in its proposed rule for a 90-percent emissions limit, that there would be an approximately 14 percent improvement in the winter seasonal average visibility at the Grand Canyon. The improvement was expected to occur over a 30-year period, which was EPA’s estimate for the remaining useful life of the plant. However, the study on which EPA based its estimate valued changes in annual average visibility that would last forever at several individual national parks, including the Grand Canyon. The photographs used in the study, which survey respondents were asked to value, were labeled summer days. And the broader study valued different visual range improvements than EPA expected would occur from limiting emissions: a 61-percent improvement in visual range; a 29-percent improvement in visual range; and a 26-percent degradation in visual range. The owners’ contingent valuation study was specifically designed to measure the wintertime visibility improvements expected from emissions controls at the plant. This study asked respondents to value five different scenarios of visibility improvements. Interviewers described, for respondents, the expected visibility improvements of each scenario and showed them photographs that illustrated the improvements. The five scenarios, shown in table III.1, were chosen by Decision Focus to relate to potential regulatory actions by EPA. The last three represent the different types of winter improvements Decision Focus hypothesized would result from emissions controls at the plant. We found that the researchers for both the study EPA used and the owners’ study made very thorough efforts in developing their survey instruments. They followed accepted survey research standards to ensure the validity of their survey instruments. As a result of these efforts, the survey instruments should have measured the visibility concepts the researchers intended them to measure and with wording that the researchers found to be most effective for their studies’ purposes. And, as one would expect because the studies were intended to measure different visibility improvements, the survey instruments provided respondents different information about what they were to value and used different photographs to demonstrate visibility improvements. Careful survey design, in our view, is critical to averting problems with bias or comprehension. It is needed because people, on whom these researchers relied to value visibility changes, are complex and their reactions to specific words or concepts are not always predictable. If the right questions are not asked or if questions are not asked in the right way, researchers are less likely to obtain high-quality results. Asking the right questions in the right way is both science and art. It is a science because it is guided by empirical evidence and uses many scientific principles developed from various fields of applied psychology, sociology, cognitive research, and evaluation research. It is an art because it requires anticipating the respondents’ interactions with the survey instrument. 
Both research groups pretested their survey instruments to avoid bias and comprehension problems. Pretesting involves administering survey questions to people who represent the population to be surveyed. It can involve focus group discussions or in-person or telephone interviews and is intended to identify problems that researchers can correct before administering their survey instrument to a larger group. The researchers for the study EPA used held two rounds of pilot tests, each involving about 10 respondents. Then, after revising the survey instrument, the researchers had it peer reviewed by sociologists familiar with issues concerning national park visitors and survey design issues, economists familiar with contingent valuation, and an atmospheric scientist familiar with visibility. They then hired professional interviewers to conduct a final pretest with 20 respondents. The owners’ researchers held two rounds of focus groups to explore basic assumptions about visibility improvements, each followed by a round of telephone interviews. Then, after analyzing the information gathered, they conducted two more rounds of focus groups and preliminary in-person interviews. Drawing on the information gathered, the researchers then developed a survey instrument that they revised following two more rounds of focus groups, two rounds of test interviews, and finally a pretest with 22 respondents that was conducted by professional interviewers. For contingent valuation surveys to elicit useful information about respondents’ willingness to pay for specific environmental improvements, we believe researchers must ensure that the respondents understand exactly what they are being asked to value. In deciding what kind and how much information to provide respondents, researchers must weigh providing enough, properly ordered information against the possibility of overloading respondents or being criticized for trying to lead them. The survey instruments for both the study EPA used and the owners’ study provided respondents different background information and used different photographs to depict changes in visibility. The obvious reason for these differences, in our view, is that EPA used existing research designed to value different visibility changes than those expected from emissions controls at the plant. Therefore, the agency’s researchers could not have been expected to describe the environmental improvement expected from emissions controls at the plant. Another important reason is that contextual information and photographs are matters of researcher choice and are an area where contingent valuation may be more art than science. The nature of the information the research groups provided respondents was different. For example, in the owners’ study, before respondents were asked to value five levels of visibility improvements, they were given specific background information. Among other things, the respondents were told that on high visibility days, one can see more than 100 miles at the Grand Canyon; the rural southwest has some of the clearest air in the country; the actual amount of pollution at the Grand Canyon National Park is very low compared with the amount in cities; most visitors come in the summer period; and if any of the programs to improve visibility that the respondents were asked to value were implemented, certain older power plants, already meeting all current state and national air pollution standards, would have to install and maintain new equipment to remove pollutants.
These background statements, in our view, might have caused respondents to minimize their concern over visibility problems at the Grand Canyon and accordingly to assign lower willingness-to-pay values for visibility improvements. On the other hand, these statements could be exactly what the respondents needed in order to understand what they were to value. Another example of contextual information is from the study EPA used. In that case, prior to asking respondents to value visibility changes, the researchers first introduced respondents to several nonvisibility effects of air pollution at national parks and asked them to prioritize those effects, which were happening or could happen in national parks due to people’s activities outside park boundaries (for example, injury to vegetation and historic structures from air pollution). Then, later in the questionnaire, following the valuation questions, respondents were asked to separate from their willingness-to-pay values any amount they had included for nonvisibility improvements. A possible effect of introducing these additional effects of air pollution before the valuation questions, in our view, is that respondents might have assigned higher willingness-to-pay values than they otherwise would have. And subsequent efforts to separate out any inflated amounts might not have been successful. On the other hand, introducing nonvisibility issues might have been a critical step in ensuring that respondents valued only visibility improvements; by identifying these other, nonvisibility effects, respondents might have been better able to value only visibility changes. Another aspect of contextual information is its level of detail. The study EPA used and the owners’ study were very different in terms of their level of detail. The owners’ researchers, knowing they were to value visibility improvements from emissions controls at the plant, were able to give respondents very specific descriptions of visibility conditions and ask very specific questions about the visibility improvements. In contrast, EPA’s researchers provided respondents more general descriptions of visibility conditions and possible events that might change them. This difference in detail, we believe, could cause substantial variation in the values the studies’ respective respondents placed on visibility improvements, depending upon the cognitive patterns of the respondents. A test, administering the two survey instruments to randomly selected samples from the same population, would be needed to determine the effects due solely to the level of detail in the survey instruments. The researchers for both the study EPA used and the owners’ study used photographs to illustrate the visibility improvements respondents were to value. The characteristics of the photographs they chose (e.g., size or season) were different. The effect, if any, these differences had on the values respondents assigned to visibility improvements is not known. Testing would be needed to determine the effects due solely to the differences in photographs. The selection of photographs, according to a former official who was EPA’s project economist for this rulemaking, demonstrates a challenge in accurately depicting what it is respondents are to value.
At a minimum, according to this official, problems stemming from the selection of photographs can increase the uncertainty of the results and provide another avenue for criticism of results. At worst, problems can yield biased results with an unknown direction of bias. Both research groups said they selected their photographs to minimize bias. EPA’s researchers used NPS photographs that represented four visibility conditions on summertime days: 15 percent of the summertime days (the best conditions), 20 percent, 40 percent, and 25 percent (the poorest conditions). The photographs were taken at the same time each day and selected to minimize variations in extraneous factors such as clouds and snow. While the owners’ researchers also used NPS pictures and selected them with the assistance of a leading visibility scientist from NPS, the pictures they chose were very different. The owners’ researchers selected photographs representing different seasons and weather categories, for example, summer clear skies, winter clear skies, winter overcast, and winter layered haze events. These photographs were also selected to represent issues including the change in the Grand Canyon’s appearance from season to season (primarily due to the change in the sun’s angle), summer afternoon thunderstorms’ tendency to frequently obscure views, and the impact the time of day has on the appearance of vistas. In addition to different weather patterns, the researchers used different numbers, sizes, and qualities of photographs. The EPA researchers used four photographs printed to be mailed to respondents—each picture was 3 by 5 inches. The owners’ researchers used 12 pictures, measuring 8 by 12 inches, mounted on display boards to be shown to respondents during in-person interviews. We believe these differences could cause substantial variation in the values respondents assigned for visibility improvements. A test showing the different photographs to two randomly selected samples from the same population would be needed to determine the effects due solely to the differences in photographs. EPA’s and the owners’ researchers administered their survey instruments in different ways. EPA’s researchers used mail questionnaires to contact the 710 respondents in their study, while the owners’ researchers contacted 202 respondents in person. There are trade-offs when choosing between these two survey techniques; each, in our view, has strengths and weaknesses. Strengths of in-person interviews include researchers’ being able to control the amount of information respondents have available when answering specific valuation questions and interviewers’ ensuring that survey questions are asked in the exact order the researchers intended. Because in-person interviews are conducted in one setting, respondents are less likely to be interrupted by outside events, for example, personal or family illness, that might change their perspective when answering questions. Furthermore, these interviews are generally more successful with respondents whose reading levels are low in comparison to the complexity of the questions. The weaknesses of the method include higher costs, because interviewers not only must be trained but must also travel to and from interviews—some of which may not be successful. In addition, interviewers, by their presence, may affect how respondents answer questions.
For example, respondents may provide an answer they believe the interviewer wants or give any answer just to further the interview process. Strengths of mail questionnaires include being substantially cheaper than in-person interviews. Being less expensive, mail questionnaires can be sent to larger samples than may be possible with in-person interviews and, as such, may be more appropriate for research issues requiring nationwide results, such as the valuation of visibility improvements at the Grand Canyon National Park. Mail questionnaires allow respondents time to carefully consider each question and their response. Weaknesses of the method include the possibility of respondents’ having more information than researchers intend them to have when they answer a specific question (because they can skip back and forth between questions or read ahead). Also, when mail questionnaires are used, there is no one who can assess for researchers whether respondents understand the questions or what it is they are to value. While some in the research community tend to prefer in-person interviews for contingent valuation surveys, mail questionnaires have not been proven to be a less valid technique for collecting data. While both studies showed respondents were willing to pay some amount for visibility improvements at the Grand Canyon, the researchers used different techniques to calculate their willingness-to-pay values. EPA, in extrapolating from the results of the earlier contingent valuation study, made various assumptions and judgments about how to translate values elicited for larger benefits into values for the narrower, specific benefits of this case. And the plant’s owners, in calculating results from their pilot study, used a data-trimming technique that removed a fixed amount of data from the calculations. These techniques added uncertainty and possible bias to the study results. Additional uncertainty may have been added through adjustments both groups of researchers made in response to the final results of visibility studies that showed less visibility improvement than the studies valued. Any uncertainty in these results was magnified when both groups of researchers projected their willingness-to-pay values to the nation without conducting national samples. EPA then projected its nationwide results to the number of years it expected the regulation to be in effect. EPA’s goal was to identify that portion of the broader study’s willingness-to-pay values relevant to the expected visibility improvements from emissions controls at the plant. To accomplish this, EPA’s researchers did the following:
- created a database of the results from the original research that were pertinent to visibility at the Grand Canyon. The original research used six surveys, three of which contained questions about visibility at the Grand Canyon National Park; the results of those three survey instruments were combined in the database.
- determined, for the database, the relationship between visibility improvements and the willingness to pay for the improvements. The original study made three willingness-to-pay estimates—one each for a 61-percent visibility improvement, a 29-percent visibility improvement, and a 26-percent degradation of conditions—and the researchers calculated a mean willingness-to-pay value for each of these levels of improvement.
- determined for each level of improvement, using regression analysis, the relationship between the individual willingness-to-pay values and other factors, such as respondents’ age, household income, gender, and history of visiting national parks, that were important in determining the willingness to pay.
- using this empirical relationship, predicted willingness-to-pay values for the approximately 14-percent visual range improvement that EPA initially expected would occur from the addition of emissions controls at the plant.
- using both sensitivity analyses and comparisons to related past contingent valuation studies, tested the validity of their predicted willingness-to-pay values.
EPA’s researchers’ estimate of the monetary value of visibility improvements, expected from a 90-percent emissions reduction, ranged from $1.30 to $2.50 per year per U.S. household. EPA recognized that extrapolation, by definition, added considerable uncertainty to the resulting values. Nevertheless, EPA believed that the results were sufficient to serve as indicators of the direction (i.e., negative or positive) and the order of magnitude (i.e., whether the values were in millions, tens of millions, or hundreds of millions) of the values. The owners’ researchers calculated the mean willingness-to-pay values for households for each of the five visibility programs included in their study. They calculated these values using a data-trimming procedure that involved removing a fixed amount of data (first the highest and lowest 5 percent of the willingness-to-pay values and then the highest and lowest 10 percent) from both ends of the data distribution and then calculating “trimmed means.” According to the researchers, they used trimmed means because the ordinary means of the untrimmed data were grossly distorted by a very small number of outliers. According to the researchers, the ordinary mean is the correct statistic under traditional welfare economic theory if one is willing to ignore distributional consequences, that is, to accept a program in which—in the worst case—all the benefits accrue to one individual. Trimming, according to the researchers, is an alternative that avoids this extreme case and results in willingness-to-pay values being based on a more central part of the distribution. EPA’s project economist for this rulemaking told us that data trimming in this case was problematic because the distribution of the study’s results was highly skewed, with 90 percent of the willingness-to-pay values being $0. Data trimming eliminated some results from respondents who said that they would pay a large amount for visibility improvements and some results from respondents who said that they would pay nothing. This practice greatly affected the owners’ results. For example, for visibility improvements on 20 winter days, the untrimmed mean of the willingness-to-pay distribution was $2.38, compared with $0.50 for the 5-percent trimmed mean and $0.02 for the 10-percent trimmed mean. Table III.2 shows these results for each of the five visibility programs the owners’ research examined.
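To illustrate why trimming mattered so much here, the sketch below applies 5- and 10-percent trims to a hypothetical distribution with the shape described above: roughly 90 percent zeros plus a handful of large values. The numbers are invented; only the qualitative behavior (the mean collapsing toward zero as the trim widens) mirrors the results reported for the owners’ study.

```python
import numpy as np

# Hypothetical response data: 90 zeros plus ten positive values. These are
# not the study's actual responses; they only reproduce the skewed shape
# the project economist described.
wtp = np.array([0.0] * 90 + [1, 2, 5, 10, 20, 25, 50, 100, 500, 1000], dtype=float)

def trimmed_mean(values, trim_fraction):
    """Drop the top and bottom trim_fraction of observations, then average."""
    k = int(len(values) * trim_fraction)
    return np.sort(values)[k:len(values) - k].mean()

print(f"untrimmed mean:   ${trimmed_mean(wtp, 0.00):.2f}")
print(f"5%-trimmed mean:  ${trimmed_mean(wtp, 0.05):.2f}")
print(f"10%-trimmed mean: ${trimmed_mean(wtp, 0.10):.2f}")
```

Because more than half of the responses are zero, widening the trim removes only positive values from the top while removing zeros from the bottom, so the trimmed mean falls sharply, which is the pattern in the $2.38, $0.50, and $0.02 figures quoted above.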
Both EPA’s and the owners’ researchers made additional adjustments to their willingness-to-pay results to reflect the final results of visibility studies—results that were not available at the time they began their studies. These adjustments may have added additional uncertainty to the final results. EPA, as previously discussed, was operating under a court-ordered deadline and began its analysis using the best available, rather than final, estimates of expected visibility improvements. So when the final results became available and were significantly less than the preliminary results its researchers had used to estimate the value of visibility improvements (a winter seasonal average visibility improvement of approximately 7 percent instead of the approximately 14 percent used in the analysis), EPA scaled its willingness-to-pay estimates downward. And while the EPA researchers (at EPA’s request) had designed the computational formulas so that results could be revised when final visibility improvement estimates became available, they also recognized that such revisions could add uncertainty to the results. The researchers said that uncertainty could be added if willingness-to-pay values drop off dramatically at some point as the improvement being valued shifts from summer to winter or from most days to some days. However, they also said that while there was no evidence of such a dramatic drop in mean values, available evidence on the question is quite limited. The revised willingness-to-pay estimates ranged from $0.75 to $1.75 annually per household. The owners’ researchers, faced with the same time constraints, also were required to begin their research while awaiting final estimates of the visibility improvements expected from emissions controls. Then, after their survey research efforts were completed, the owner-funded visibility study concluded that visibility improvements from emissions controls would be less than any of the scenarios valued. The study concluded that with a 90-percent emissions reduction, visibility would improve (at least perceptibly) on 6 days. Therefore, the researchers extrapolated their monetary value from the values they had estimated for higher degrees of visibility improvement. The researchers’ final report did not explicitly state a willingness-to-pay value per U.S. household. Rather, the report indicated a total public value of $2.3 million—this equates to approximately $0.023 per household. A senior associate at Decision Focus, Incorporated, was unable to provide specific details on how the final calculations were made because of the time that had passed since the study was completed. Any uncertainty in willingness-to-pay values is magnified, in our view, when the results are used to project a nationwide value and applied to the entire period of time to be affected by the regulatory action. While neither group of researchers had a nationwide sample, both projected their results to the nation as a whole. EPA additionally projected its results to the entire time period the regulation would be in effect. Neither EPA’s nor the owners’ researchers had sampling strategies that would allow nationwide results. Faced with resource constraints, EPA sacrificed its ability to obtain nationally representative values by choosing to extrapolate from existing research. And while the plant’s owners planned to have nationwide results, time and resource constraints contributed to their not completing the planned study. The contingent valuation study from which EPA extrapolated surveyed a sample drawn from residents of five states: Arizona, California, Missouri, New York, and Virginia. These states were selected so that there would be variation in the distances between respondents’ residences and the national parks studied.
In addition, Arizona, California, and Virginia were selected because they were states with national parks being studied. For each of the selected states, a survey instrument was mailed to respondents whose names were selected from national databases drawn from drivers’ licenses, car and voter registrations, and other sources. Although different surveys were sent to the different states, with different questions about the different national parks, EPA extrapolated from the 710 responses that pertained to the Grand Canyon National Park. The EPA researchers agreed that their sample did not technically allow a reliable assessment of the U.S. population. However, they believed that they had compensated for the partial sample by adjustments they made to the willingness-to-pay estimates. These adjustments were intended to account for socioeconomic differences (e.g., household income, age, sex, and distance of residence from the parks in question) between their sample and the U.S. population. Nonetheless, the researchers also said that they were unable to calculate the expected level of error in the sample. The plant owners’ sample was drawn from households in two counties—San Diego and St. Louis. This sample, however, was for a pilot study that tested a survey instrument. With appropriate revisions, the survey was to be administered to a much larger sample from which national results could be drawn. The owners’ researchers selected the two counties to provide different settings. Then, to select households for interviews, the researchers first randomly selected blocks within the counties. Second, beginning with a random predesignated starting point and proceeding in a predetermined manner, they went from household to household until they had conducted five interviews with heads of households that met established age and gender quotas. The researchers set these quotas to ensure that men and young people were interviewed, since women and older people are easier for researchers to locate. In total, 202 persons were interviewed. The owners’ researchers recognized that their sample was not a national sample and planned to conduct a national sample. However, according to a senior associate at Decision Focus, by the time they completed the pilot, there was not enough time remaining to complete and summarize a national sample so that it could be used in EPA’s decision-making. In addition to time and resource constraints, according to an official of the Salt River Project—part owner and the operator of the plant—the owners did not authorize the remaining research because they did not see value in doing so. While both EPA and the owners projected their per-household monetary value to the nation as a whole, EPA also projected these results to the number of years the regulation would be in effect (the number of years the plant was expected to operate). EPA estimated that the monetary value of the visibility benefits would range from $90 million to $200 million in 2000 (measured in 1992 dollars). EPA estimated the present value of the monetary benefit stream, as of January 1992 (expressed in 1992 dollars and discounted using a 10-percent real rate), at $523 million to $970 million.
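The projection arithmetic described above is straightforward to sketch: a per-household value is scaled to a national annual benefit, and a multiyear stream of such benefits is discounted at a 10-percent real rate. In the example below, the household count, the per-household value, and the timing and constancy of the benefit stream are stand-in assumptions, so while the output happens to land within EPA’s reported ranges, it should not be read as reproducing EPA’s actual calculation.

```python
# Illustrative only: stand-in household count and benefit stream; EPA's
# actual year-by-year series is not reproduced in this report.
households = 100_000_000          # approximate number of U.S. households

def national_annual_benefit(wtp_per_household):
    return wtp_per_household * households

def present_value(stream, rate=0.10):
    """PV of (years_from_base, amount) pairs at a constant real rate."""
    return sum(amount / (1 + rate) ** t for t, amount in stream)

annual = national_annual_benefit(1.25)   # e.g., $1.25 per household, within the revised range
stream = [(year - 1992, annual) for year in range(1999, 2022)]  # hypothetical 23-year stream
print(f"annual benefit: ${annual / 1e6:.0f} million")
print(f"present value as of 1992: ${present_value(stream) / 1e6:.0f} million")
```

The sensitivity of the total to each assumption (household count, start year, discount rate, and stream length) is exactly why, as noted above, any uncertainty in the per-household values is magnified in the nationwide, multiyear figures.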
Major contributors to this report were Jonathan T. Bachman, Stephen M. Brown, Daniel J. Feehan, Sue E. Naiberk, Cheryl L. Pilatzke, Cynthia S. Rasmussen, Victor S. Rezendes, and Pamela K. Tumler.
Pursuant to a congressional request, GAO reviewed the Environmental Protection Agency's (EPA) decision to limit sulfur dioxide emissions from the Navajo Generating Station, focusing on: (1) the effect on emissions reductions and the associated costs that resulted from the negotiated agreement used by EPA in making its decision compared to its initial proposal; (2) the visibility improvements the agency estimated would result from the emissions controls and the means by which these improvements were determined; and (3) how contingent valuation was used to estimate the monetary value of visibility improvements. GAO noted that: (1) the negotiated agreement is expected to result in greater emissions reductions at less cost than EPA had initially proposed; (2) the agency initially proposed limiting sulfur dioxide emissions at the Navajo Generating Station by approximately 70 percent at an annual cost estimated between $91.9 million and $128.3 million; (3) the negotiated agreement is expected to increase emissions reductions to approximately 90 percent at an estimated annual cost of approximately $89.6 million; (4) the lower costs resulted from several factors, according to the plant operators; (5) according to a project engineer for the Salt River Project, with its compliance determined on an annual basis, the plant can operate its emission control equipment most days at a rate greater than that needed to cut emissions by approximately 90 percent to make up for those days on which emissions are not controlled because the equipment is not operating; (6) also, delaying the initial installation of the emission control equipment by almost 3 years, from January 1995 to November 1997, allows the project to be completed in a more cost-effective manner; (7) EPA estimated that reducing the sulfur dioxide emissions at the Navajo Generating Station by approximately 90 percent would improve winter seasonal average visibility at the Grand Canyon by approximately 7 percent--from about 124 miles to about 133 miles; (8) most of this improvement was estimated to result from improvements during certain winter weather conditions; (9) EPA initially estimated an approximately 14 percent improvement in the winter seasonal average visibility primarily on the basis of a National Park Service study of visibility in the vicinity of the Grand Canyon; (10) EPA revised this estimate to approximately 7 percent after considering the results of other analyses; (11) however, EPA noted that its revised estimate may be understated because it did not include visibility improvements: (a) below the rim of the Grand Canyon; (b) in seasons other than winter at the Grand Canyon; and (c) year round at other nearby national parks; (12) both EPA and the Navajo Generating Station's owners used contingent valuation to estimate the monetary value of visibility improvements; and (13) although relying on the same methodology, the studies were different and yielded widely different results.
Victory in the Cold War brought changes in the size and resources available to today's armed forces. A decline in DOD budgets has been a trend since the mid-1980s peak in defense spending. Since the collapse of the Soviet Union, the public and private businesses, departments, and facilities that work in the interests of U.S. national security have operated in a defense environment different from that of the past, and defense policy has changed accordingly. DOD is buying and developing fewer types of military systems and purchasing smaller quantities of the systems it does buy. Weapons purchased today have gained from considerable military and technological advances made over time. In constant dollars, DOD procurement outlays in fiscal year 1995 were 52 percent below their 1987 level, the highest procurement level since the Korean War. This has an effect on the defense industrial base (DIB)—industries that supply, manufacture, or assemble aircraft, ships, missiles, tanks, ammunition, weapons, and electronics and communications equipment for national defense purposes. In fiscal year 1995, DOD procurement outlays were $55.1 billion, and defense-related industry employment was approximately 2.3 million. As companies develop and implement strategies for survival in the new spending environment, the Congress and the executive branch have considered the balance between the market forces that influence the structure of the defense industrial base and the federal government's role in securing and meeting the nation's defense needs. For example, DOD's Bottom Up Review (BUR) was designed to define the nation's defense strategy, force structure, modernization, and infrastructure requirements in light of the end of the Cold War. Promoting a more efficient post-Cold War defense industrial base is a goal of initiatives to reform DOD's weapons acquisition process. While many of DOD's recent acquisition reform efforts were embodied in the Federal Acquisition Streamlining Act (FASA) of 1994, DOD has made other efforts to adapt to the post-Cold War period of smaller procurement budgets, a shrinking defense industry, and increased international competitiveness. In 1994, DOD set up groups to identify, coordinate, or implement process improvements to reduce "cost drivers" believed to cause increases in the price DOD pays for goods and services. DOD's initiatives to pursue acquisition reform aggressively include eliminating some military standards and requirements, adopting commercial practices, and using Integrated Product Teams (IPTs) to involve government and industry stakeholders continuously in program and business decisions. The Defense Reinvestment and Conversion Initiative, announced by the executive branch in March 1993 and authorized under the National Defense Authorization Act for fiscal year 1994, is a large-scale post-Cold War transition assistance program. The initiative included funding for (1) worker training and adjustment, (2) investments in hard-hit communities, (3) dual-use technology and commercial integration, and (4) conversion opportunities in new civilian technology investment. In fiscal year 1994, the Congress appropriated $2.5 billion for DOD's defense reinvestment and conversion program. As described above, a number of issues have been addressed through programs or legislation intended to assist the transition of the defense industrial base in the post-Cold War era. In your request, you asked for information on productivity and competition in the defense industrial base.
In this report, we describe the trends in available data on productivity and competition and the related issues of trends in defense industry employment, the status of major defense contractors in the post-Cold War period, and trends in defense budgets and outlays. We make use of existing statistical information and supplement these data with information collected from industry experts and defense contractors. This work draws on findings from studies, now just beginning to emerge, conducted or sponsored by DOD as well as by private research organizations or groups, that examine the industrial, economic, and national security implications of the post-Cold War drawdown. We present a broad historical overview of data about the defense industry to provide a context for the significant changes that the defense industry has faced in the post-Cold War period. As stated previously, in this report we describe (1) overall trends in productivity, competition, and other financial indicators in the defense industry over time and (2) the relationship between these trends and indicators of defense spending over time. To focus our review of these issues, we developed the following six key questions, which we answer in this report where the data allowed.
1. What are the trends in DOD's total, procurement, and RDT&E budgets?
2. What are the trends in the dollar amount of DOD procurement and RDT&E awards to defense contractors and subcontractors over time?
3. What are the trends in indicators of employment, productivity, and competition over time?
4. How are employment, productivity, and competition related to indicators of defense spending?
5. What are the trends in the financial indicators of major defense contractors over time?
6. What is the relationship between indicators of defense spending and indicators of the financial status of major defense contractors over time?
The industries in our analysis include U.S. manufacturers of items for major DOD procurement programs. DOD and other executive agencies have identified them as "defense-dominated" industries, or industries in which the output is largely purchased for defense purposes: aircraft, guided missiles, ammunition and ordnance, tanks, ships, and electronics and communications equipment. Where the industrial output of these manufacturing industries is not purchased by DOD, it may be purchased by commercial companies, other U.S. government agencies, or international companies. We designed a macro-level evaluation to describe overall trends and patterns and to provide a basis for the additional phases of the work that you requested. The highly aggregated nature of much of the existing data and information about defense industries also required, in part, that we adopt a macro-level approach. Because our focus was broad, we did not examine specific disparities, differences, or nuances in the data. The aggregate nature of the data did not permit us to offer definitive explanations for the trends these data reveal. We collected, integrated, and analyzed published and unpublished data covering the period 1975-95 from the executive agencies that maintain information on defense industries—DOD, the Department of Commerce, and DOL. This resulted in multiple data sources and multiple measures. We used those that were the most comprehensive with respect to that time period and the aspects of the defense industry that we focused on.
We interviewed individuals and reviewed studies at Commerce, DOD, and DOL as well as at private research and consulting organizations, Wall Street firms, and major defense contractors. (A list of the offices we contacted is in appendix I.) The measures and data that were available provide a basis for describing and illustrating trends and patterns. The information that was available has varying degrees of uncertainty and completeness. Appendix II details our methodology and study limitations and defines our terms and concepts. In order to understand the context for the post-Cold War trend of declining defense budgets, we examined trends in DOD budgets over the past 50 years. The recent downturn in defense budgets is the fourth in 50 years. The three prior funding drawdowns came at the ends of World War II, the Korean War, and the Vietnam War. This fourth one follows the peacetime defense buildup of the early 1980s. Figure 1 shows DOD's 1945-95 total, procurement, and RDT&E budgets. Average post-Cold War (1990-95) procurement outlays are 10 percent higher than average Cold War outlays (1947-89). DOD's yearly average procurement outlays were $69.3 billion during the Cold War; since the collapse of the Soviet Union, they have been $76.3 billion. Since 1990, average yearly RDT&E outlays have been $38.5 billion, compared to an average of $24.3 billion from 1947 to 1989. Because the defense industry is most concerned with DOD's procurement budget, which funds the purchase of weapon systems, we focus on broad trends in procurement budgets specifically. The greatest 1-year percentage decline in the procurement budget was the 80-percent drop in 1945, following World War II. The greatest increase was the 372-percent rise in 1951, during the Korean War buildup. These periods represent the most extreme cases of growth and decline. In post-World War II history, the period 1985-95 represents the longest consistent decline in the procurement budget. However, this period of decline includes fiscal year 1987, a year marked by the highest procurement outlays since the Korean War. Figure 2 shows the yearly percentage growth or decline in DOD's procurement budget over the past 50 years. Examining trends in procurement and RDT&E contract awards shows how DOD's spending is distributed across industry segments. These data show where DOD's procurement and RDT&E dollars have gone in the past. They also provide an indication of the industry segments that have experienced the steepest post-Cold War decline in DOD contract dollars. In the past 20 years, DOD has spent more in procuring aircraft, guided missiles, and electronics and communications equipment than in procuring other major hard goods for national defense. (See figure 3.) In particular, expenditures for aircraft exceeded all others during the period. DOD's 1975-94 prime contract awards for aircraft, missiles, and electronics and communications equipment consistently exceeded spending on other weapon systems. Figure 3 shows that aggregate procurement spending on aerospace products has been 65 percent greater since 1975 than the cumulative spending on ships, tanks, weapons, and ammunition. Contract awards for missiles, electronics and communications equipment, and especially aircraft peaked in the 1980s.
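The percentage comparisons above follow directly from the reported constant-dollar averages; a minimal sketch of the arithmetic in Python, using the outlay figures cited in this section:

    # Average yearly DOD procurement outlays, in billions of constant
    # fiscal year 1995 dollars, as reported above.
    cold_war_avg = 69.3       # 1947-89
    post_cold_war_avg = 76.3  # 1990-95

    pct_diff = (post_cold_war_avg - cold_war_avg) / cold_war_avg * 100
    print(f"Post-Cold War average is {pct_diff:.1f} percent higher")  # about 10 percent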
While their levels have since fallen, DOD's constant dollar spending for aircraft, missiles, and electronics and communications equipment, and for most other major hard goods, is the same or nearly the same as just prior to the peacetime defense buildup of the early 1980s. The change in post-Cold War procurement contract spending has not been constant or equal across procurement programs. While the average post-Cold War reductions in spending for aircraft in 1990-94 were the smallest, at 3.6 percent, reductions in spending for ammunition were the largest, at 18.7 percent. The post-Cold War average percentage changes in the dollar amounts of DOD's prime contract awards for procurement were as follows:
aircraft: -3.6 percent
ships: -8.7 percent
weapons: -9.0 percent
electronics and communications equipment: -10.2 percent
missiles: -11.7 percent
ammunition: -18.7 percent.
Like DOD's procurement spending, its expenditures in the aerospace industry have dominated its RDT&E contracts. In every year of the past 20, RDT&E investments for aircraft, missiles, and electronics and communications equipment varied, but they consistently surpassed RDT&E investments in weapons, ships, and ammunition (figure 4). The post-Cold War average percentage changes in the dollar amounts of DOD's RDT&E contract awards from 1990 to 1994 were as follows:
aircraft: +1.6 percent
electronics and communications equipment: -3.7 percent
missiles: -6.3 percent
weapons: -9.0 percent
ships: -18.3 percent
tanks: -18.3 percent
ammunition: -23.7 percent.
Post-Cold War RDT&E reductions in aerospace have been the smallest relative to other major weapon systems; spending for aircraft has even increased approximately 1.6 percent. Post-Cold War RDT&E reductions for ammunition have been the largest, at 23.7 percent. The only source of information available to describe trends in subcontract awards to defense contractors over time is DOD's records of participants in its subcontracting program (see appendix II). The participants can be small or small disadvantaged businesses or large businesses. For example, companies like Lockheed Martin and Boeing have received subcontractor awards under this program. DOD's published sources did not permit us to determine awards by weapon system or industrial segment, but we were able to observe that the trends in the dollar amounts awarded to subcontractors are similar to those for prime contractors. Subcontractor awards peaked in the 1980s and began a gradual decline in 1989. The average change in post-Cold War funding available through DOD's subcontractor program is -6.7 percent. A recent RAND report sponsored by the Office of the Secretary of Defense (OSD) indicates that in the aerospace industry, small suppliers to "large military aircraft programs" receive about 10 percent of the defense dollars that go to contractors. Therefore, in some cases, reductions in defense spending should be expected to affect small suppliers differently than large defense firms. Views that small defense subcontractors are disproportionately affected by defense spending reductions merit further evaluation, given the constraints in the macro-level information about defense subcontractors that we were able to obtain. Our ability to examine relationships between defense spending and employment, and to generate conclusions, is complicated by the fact that employment data are often derived from models or estimation procedures that have degrees of uncertainty.
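The average percentage changes listed above can be reproduced from a constant-dollar award series; a minimal sketch, assuming the reported averages are means of year-over-year changes (an assumption on our part) and using a hypothetical series:

    # Hypothetical constant-dollar contract awards, e.g., 1990-94, in billions.
    awards = [12.0, 11.4, 10.9, 10.1, 9.6]

    # Year-over-year percentage changes, then their mean.
    yearly_changes = [
        (curr - prev) / prev * 100 for prev, curr in zip(awards, awards[1:])
    ]
    avg_change = sum(yearly_changes) / len(yearly_changes)
    print(f"Average annual change: {avg_change:.1f} percent")  # about -5.4 percent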
Post-Cold War cutbacks in defense spending have been associated with declining employment in military force levels, federal defense-related civilian employment, and defense-related employment in private industry. On the one hand, DOD estimates show a 39-percent decrease in defense-related employment between 1989 and 1997—approximately 5 percent per year. DOL reports that private employment generated by defense spending fell by 600,000 jobs between 1987 and 1992 and projects at least an additional 1.2 million job losses by 1997. Between 1989 and 1994, McDonnell-Douglas Corporation reduced its total corporate staff by approximately 70,000 people. On the other hand, the Defense Conversion Commission reported to DOD that the concept of job loss can overstate the effect of the post-Cold War drawdown on employment because it does not account for the ability of the economy to absorb dislocated workers. The commission estimated that the drawdown will account for less than 2 percent of all unemployment between 1992 and 1999. In a report to the U.S. Senate Budget Committee, CBO found that cuts in defense spending, or in any type of federal spending, will temporarily reduce employment. However, it notes that defense cuts that are matched by increases in public-sector investment, or nondefense spending, can offset the short-term effects of spending reductions. CBO reports that overall growth in the U.S. economy is a greater factor in reemployment for displaced defense workers than what happens in the defense sector. CBO's reports, as well as other reports we reviewed, also indicate that the effect of reduced defense spending on employment varies by region of the country, with regions that are less dependent on defense spending generally affected to a lesser extent. We analyzed available indicators of defense sector employment and an indicator of DOD procurement outlays linked to those sectors over the period of our study to determine the strength of the relationship between the two (see appendix II for a discussion of our methods). We found a statistical relationship between the available indicators of employment levels and procurement outlays for the period 1975-91 that was not large and fell below values considered moderate in size (r = .27 to .36, depending upon the indicator used). (See appendix III.) Because the available indicators of defense sector employment and DOD spending are estimates, they are subject to possible error from the estimating procedures and to "operational" errors, that is, errors in the primary data collection, reporting, or coding procedures of the offices that collected the data. Moreover, the limitations of correlational analysis introduce uncertainty that does not permit definitive conclusions regarding the exact nature of the relationship between defense sector employment and defense spending. DOD, Commerce, and DOL maintain or collect some information related to productivity in defense industries, some of which overlaps and some of which is unique. All the information on defense industry productivity that we obtained from these agencies was based on economic models or methodologies that have some degree of uncertainty. From this information and data from others such as the Aerospace Industries Association (AIA), we observed the following trends. The value of production output in most defense-concentrated industries has risen while defense budgets, as well as subsequent contract spending for major hard goods, have fallen (see appendix III, figure III.2).
When AIA data on unit production are plotted against trends in DOD's aircraft procurement budget, there is a trend between 1969 and 1986 in which more aircraft procurement money is associated with the production of fewer aircraft. From about 1986 to 1993, the trend shows a relatively constant number of aircraft being produced while aircraft procurement budgets declined. In other segments of the aerospace industry, DOD's 1995 assessment of the helicopter industry projects that the unit cost for military helicopters will increase while the number of units produced will remain relatively flat through 2004. DOD expects to procure fewer, "more capable," higher-cost helicopters rather than larger quantities of lower-cost helicopters. Other DOD data on trends in ship and tank procurement indicate that DOD is purchasing fewer units at higher costs. One explanation for this trend is that the complexity and sophistication of weapons, and of related weapons manufacturing processes, have increased over time. We were unable to locate research that could address this issue systematically and comprehensively for the range of weapon systems within the scope of our work.
Long-Term Trends in DOD Contracting. Within the scope of this report, and where data were available, we studied longitudinal trends in competition. There is little consensus on how to measure competition. Consequently, we chose to base our analysis on the concept of competition embodied in the Competition in Contracting Act of 1984 and the Federal Acquisition Regulation. DOD collects a variety of information on contracting actions. The dollar value of the contracts and the solicitation procedures used are recorded in DOD's DD350 database. It provided us with trend data on where the defense dollar was being spent and in what solicitation category, such as "full and open competition" and "other than full and open competition." Hence, using the DD350 data, we measured one aspect of competition: the total dollars awarded in each solicitation category. The shortcoming of this database is that it does not fully capture the number of offers received in response to solicitations in each solicitation category, which could be another indicator of competition. The DD350 serves as a basis for internal reports and reports to other agencies and the Congress and contains the only available data on the dollar amounts of contract actions for full and open and other than full and open competition. The "other than full and open competition" category captures instances where DOD uses various authorities to limit competition, such as soliciting only one source when awarding follow-on contracts or when a "unique source" exists (see table III.1 for a complete list of authorities). Among all the legal authorities for using other than full and open competition, dollars awarded under the broad category "only one source" accounted for 80 percent of the total contract dollars between 1986 and 1994. Included in this broad category are "follow-on contracts" (17 percent of the total), awards to a "unique source" (37 percent of the total), and awards categorized as "only one source-other" (25 percent of the total). DOD's data on competition in contracting reveal that in the categories of major hard goods we examined, over the past 18 years, the money associated with major systems procurement has been greater for contracts awarded using other than competitive methods than for those awarded using competitive ones.
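Measuring competition as total dollars per solicitation category amounts to a grouped sum over contract actions; a minimal sketch, with hypothetical records standing in for DD350 data (the field names are illustrative, not DOD's):

    from collections import defaultdict

    # Hypothetical contract actions; actual DD350 records use DOD's own codes.
    actions = [
        {"program": "aircraft", "category": "full and open", "dollars": 4.1},
        {"program": "aircraft", "category": "other than full and open", "dollars": 16.8},
        {"program": "ships", "category": "full and open", "dollars": 3.9},
        {"program": "ships", "category": "other than full and open", "dollars": 7.5},
    ]

    # Sum dollars within each (program, solicitation category) pair.
    totals = defaultdict(float)
    for action in actions:
        totals[(action["program"], action["category"])] += action["dollars"]

    for (program, category), dollars in sorted(totals.items()):
        print(f"{program:10s} {category:26s} {dollars:5.1f}")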
We found this trend as an 18-year average (see figures 5 and 6) and in each individual year for most programs in the period. (See also appendix III, figures III.3 through III.16.) DOD's definitions of its competitive and other than competitive contracting procedures on the DD350, used as guides in our work, are shown in appendix III, table III.1. Figure 5 shows the DD350 competition data we were able to obtain for the period 1977-85, or "pre-CICA" (Competition in Contracting Act) data. Figure 6 shows "post-CICA" data, for 1986-94. We note that pre- and post-CICA data are based on different categories of required information that DOD collected concerning the use of competitive or other than competitive procedures to award procurement contracts. Differences between pre- and post-CICA data stem from the 1984 enactment of CICA. In our work for this request, we did not audit the pre- and post-CICA data derived from the DD350. Therefore, the full extent of the differences between pre- and post-CICA data and the accuracy of DOD's reported data for both time periods would require more evaluation. In general, figures 5 and 6 show similar findings, although the data presented are different measures of competition used in the pre- and post-CICA periods. The portion of average contract dollars awarded using noncompetitive methods ranged from 66 percent for ships to 80 percent for aircraft (figure 5). For the post-CICA period, the range for other than full and open competition was 58 percent for ships to 81 percent for ammunition (figure 6).
Post-Cold War Restructuring and Reform. The major defense contractors we spoke with indicated that in the post-Cold War drawdown, defense companies have been acting to improve production efficiency, reduce costs and overhead, streamline operations, and reorient themselves toward a more cost-conscious customer. One outcome of changes in the way defense firms have been doing business since the Cold War, with relevance for competition, is a reduction in the number of independent defense firms through company mergers and acquisitions or through companies leaving the defense business. Notable examples include the March 1995 merger of Lockheed and Martin Marietta, Lockheed Martin's acquisition of Loral's defense electronics and systems integration business, the planned Boeing-McDonnell-Douglas merger, Raytheon's purchase of Texas Instruments' defense unit, and Northrop-Grumman's acquisition of Westinghouse's defense electronics business. In other areas of the defense industry, the January 1994 agreement between FMC Corporation's Defense Systems Group and Harsco's BMY-Combat Systems Division to form United Defense Limited Partnership (UDLP) changed three major competitors in the light and medium armored vehicle market to two: UDLP and General Dynamics Land Systems. A goal of business restructuring in the post-Cold War environment is to enhance, or at least maintain, a competitive position in the marketplace. We did not evaluate the effect of the recent trend in mergers and acquisitions on competition. However, in its 1996 annual report, while supportive of consolidations, DOD concluded that "Consolidation carries the risk that DOD will no longer benefit from the competition that encourages defense suppliers to reduce costs, improve quality, and stimulate innovation." Moreover, in its assessment of the conventional ammunition segment, DOD concluded that a reduction in the number of suppliers has reduced competition.
The number of contractors will continue to decrease, according to DOD's published findings, officials we interviewed at Booz-Allen and Hamilton and TASC, and projections from officials at McDonnell-Douglas. They expect more mergers in some segments of the defense industry, such as helicopters and missiles, and expect some companies to keep the possibility of acquisition within their long-term strategies. Moreover, at least one noted defense industry expert has reported that barriers to entering the defense business—created by the need for large amounts of capital for preparing contract proposals and by the need to gain access to scientific and engineering talent and to specialized, expensive production equipment—will continue to lessen the likelihood that new defense companies will enter the market in the near future. This post-Cold War process of defense industry consolidation and restructuring may reduce some segments of the defense industry to one major provider. For example, one possible avenue DOD sees to achieve its stated goal of reducing costs for medium and heavy space launches is to consolidate the medium and heavy launcher booster families and "evolve" a new family of launch vehicles. DOD's procurement plan for this Evolved Expendable Launch Vehicle (EELV) is to have a single provider by 1998. While not taking a position on consolidation and mergers, the Defense Science Board's 1994 report to DOD on the antitrust aspects of defense industry consolidation states that reducing the number of firms capable of developing a suitable design for a new weapon system may lead to higher prices, poorer products, smaller advances in technology, and a reduction in the number, variety, or quality of the proposals that companies submit to DOD. The report further states that congressional findings, industry opinion, and a large body of literature lead to the conclusion that DOD's regulatory and auditing procedures cannot substitute for competition as a way of ensuring the best mix of price and quality. Within its current Defense Acquisition Reform vision, DOD has recently implemented several new acquisition reform programs intended to increase efficiency and value in weapons procurement and to reduce unnecessary costs. DOD's cost as an independent variable (CAIV) reform represents a move toward making cost a key driver in system design, in contrast to the Cold War era, in which the emphasis was on systems that could outperform or overwhelm Soviet threats. The fiscal year 1996 Defense Authorization Act simplifies the process for commercial item acquisition by exempting procurements of commercial items from cost or pricing data requirements. DOD created the Defense Standards Improvement Council to carry out policies mandated in June 1994 by the Secretary of Defense to develop performance-based solicitation requirements and expand the use of nongovernment standards and specifications. An assessment of the effect of recent acquisition reforms on DOD's weapons procurement process and the broader defense industrial base would supplement the information presented here. To date, however, an independent assessment of the effect of DOD acquisition reform initiatives or programs on the issues discussed in this report has not been conducted. We believe this is an important area for future evaluation, given the potential for reform initiatives to reduce or contain costs and facilitate efficiency improvements.
Some defense contractors among the top 100 receiving the largest dollar amount of DOD prime contract awards in 1994 have grown in the post-Cold War budget environment, while others have not. (We detail the financial indicators in appendix IV.) DOD finds that most of the defense firms it has assessed have been profitable in the drawdown. Assessments of a random sample of small California aerospace businesses that supply goods or services to large military aircraft programs show that between 1992 and 1995, 94 percent were still in business, while 3 percent had either merged or been acquired. Officials at the major defense contractors we visited, the defense industry experts we interviewed, and the annual reports from major defense contractors we reviewed indicate that, in order to survive and remain viable in the funding drawdown, the top companies have, among other things, been (1) attempting to gain market share and to be more competitive for future defense business through mergers and acquisitions; (2) reorganizing and restructuring internally, in ways that involve job losses and layoffs, and reconfiguring job duties; (3) reducing their supplier-subcontractor base; (4) engaging in team concepts or entering joint ventures in which several firms subcontract with one another; (5) expanding defense markets to broaden the international customer base and increase sales; or (6) selling the defense business segments that are not core business units or that do not represent niche markets, as well as exiting segments of the defense industry. For various reasons, defense manufacturers have not emphasized converting their products or capabilities to commercial ones. Officials at Lockheed Martin have noted that if defense businesses understand commercial markets, they may be able to produce competitive commercial products, but officials at Booz-Allen and Hamilton have emphasized that producing competitively for the commercial sector is different from producing for the defense sector. The production process and infrastructure that have been set up to serve DOD's customers are markedly different from those of commercial companies manufacturing competitive products for the average consumer. Further, some industry experts suggest that there are no commercial markets for converted military products. However, an official at a large defense firm noted that Rockwell International Corporation achieved success in establishing a commercial market for Global Positioning System (GPS) receivers. Rather than convert, some top defense firms have survived by investing in mergers and acquisitions and by reorganizing and downsizing their companies. The defense industry experts and major defense contractors we spoke with agreed that companies that choose to stay in the post-Cold War defense industry must remain viable and competitive. They indicated that while industry consolidation can help them do this, the heart of consolidation is the reduction of overcapacity. Overcapacity increases costs through excess, underutilized overhead. When fewer dollars are available, companies must reduce costs in order to remain competitive. DOD also views the elimination of excess capacity as a means of achieving some cost savings. Booz-Allen and Hamilton has pointed out that while mergers and acquisitions have the potential to produce cost savings, particularly administrative savings, the cost-savings benefits associated with consolidation are limited if excess production capacity is not reduced.
They note that reduction of excess product design capability, as well as general production capacity, should be addressed in consolidation decisions. Booz-Allen and Hamilton also notes that cost savings are minimized to the degree that merging companies or segments have dissimilar businesses. Similarly, internal company reorganization, teaming, and joint ventures may not result in any real savings if excess production capacity is not eliminated. Increasing foreign military sales might help spread out overhead costs normally charged to DOD, but only as long as production lines remain open for weapons to be purchased by international customers. Although DOD's industrial assessments have all claimed that consolidating the defense industry will produce cost savings, our past work, our review of research from DOD and the private sector, and our discussions with industry consultants and defense contractors all suggest that this assumption should continue to be studied, tested, and validated. There are efforts to study the costs and savings associated with specific defense business combinations. Section 818 of Public Law 103-337 requires DOD to provide the Congress with the projected amounts of costs and savings for defense contractor mergers or acquisitions when DOD is asked to reimburse the contractor for the costs associated with company restructuring. At the time we completed our work, DOD had certified restructuring payments under this provision for three business combinations: the United Defense Limited Partnership between FMC Corporation's Defense Systems Group and Harsco Corporation's BMY Combat Systems Division; Martin Marietta Corporation's purchase of multiple business entities of GE Aerospace; and Northrop Corporation's purchase of Grumman Corporation. Further, under section 818, GAO is required to report to the Congress on restructuring costs. At the time we completed our work, we had issued two reports under this provision. Aside from reimbursements for restructuring costs, section 818 does not provide for analysis and validation of the type of broad cost-savings claims that appear in some of DOD's published industrial assessment reports. Moreover, in both reports, we found that defense contractors' estimates of savings associated with business consolidation activity, submitted for official DOD review and certification, were greater than the estimates DOD could later verify. Finally, we have also reported that although contractors have been reducing overhead rates by consolidating facilities and by other means, they have been projecting future increases in overhead rates. One external reviewer and DOD officials who reviewed a draft of this report pointed to vertical integration in the defense industry, linked to recent consolidation activity, as an emerging issue of interest or concern. Vertical integration can occur in multiple ways. Vertical integration that occurs when major prime contractors acquire control of the key components that make up the systems they sell has recently received attention. Industrial concentration through the acquisition of lower-tier firms by prime contractors can create the opportunity for those contractors to freeze out of the market competitors that do not have access to these components. An external reviewer noted that vertical integration can allow prime contractors to shut out traditional second- and third-tier component suppliers who normally sell to the primes.
A Defense Science Board task force on vertical integration convened in September 1996, at the request of the Under Secretary of Defense for Acquisition and Technology. The task force is expected to issue a report in 1997. We note that the effect on smaller, lower-tier suppliers is considered by one industry leader to be a relevant issue in assessing vertical integration. In our work, we were limited in our ability to obtain comprehensive data about smaller subcontractors in the post-Cold War defense industry. However, we believe that the effect of defense industry consolidation can be fully understood only by reviewing the state of the smaller defense subcontractors in addition to that of the larger prime contractors. Given that small suppliers may typically concentrate on making one or a handful of products, compared to a broader mix among the primes, industry activity that limits the market for small suppliers may exert a disproportionate impact on them. We provided copies of a draft of this report to the Department of Defense. To obtain DOD's comments, we met with officials from the Offices of the Deputy Under Secretary of Defense for Industrial Affairs and Installations; the Under Secretary of Defense, Comptroller; the Secretary of the Air Force, Acquisition Research and Engineering; and the Assistant Secretary of the Navy, Research, Development and Acquisition. Further, we conducted follow-up work on DOD's comments with officials from the Office of Program Analysis and Evaluation; the Directorate of Defense Procurement; and the Office of the Assistant Secretary of the Army, Research, Development, and Acquisition. Officials told us that DOD routinely plans, executes, and reviews these matters at much lower levels of detail and, from those levels, determines whether a particular issue may be indicative of a broader problem. DOD officials stated that there were differences between the level and type of analysis we used to depict the data trends and the level and type of information they readily have at hand to manage and evaluate the agency's programs, which hampered their ability to provide a complete and timely review. The scope of the work we report here is consistent with the terms of the congressional request. Our ability to present the data trends was greatly challenged by the fact that neither DOD nor the other executive agencies maintain in a single office or location the information required to address the issues raised by the congressional request. It was necessary for us to obtain data and information from multiple executive agencies and to adopt methodologies based on the existing or commonly used practices of executive agency offices and other knowledgeable groups so that we could furnish and present the data. DOD officials did not disagree with the data sources we used. However, where they identified additional data sources relevant to the issues discussed in the report, or had questions that we could resolve concerning the information presented, appropriate changes were incorporated in the text. DOD officials indicated that their Office of Program Analysis and Evaluation (PA&E) compiled reports that would have been useful in determining the disbursement of procurement dollars across industry, although we did not use them. We determined that the data referred to were produced under the Defense Economic Impact Modeling System.
During our earlier data collection work, we determined that data from this source were insufficient in scope relative to other survey-based data on DOD procurement outlays collected by DOD's Washington Headquarters Services. DOD noted that the pre- and post-CICA DD350 data we report are based on different measures that DOD collected about the use of competitive procedures in DOD procurement contracting. The pre-CICA data we present (1977-85) are those DOD collected consistent with the reporting requirements and data elements relevant to tracking competitive contracting procedures within that period (see appendix III, table III.1). Similarly, DOD's available post-CICA data (1986-94) are those consistent with and relevant to tracking DOD's results under the current laws and regulations governing competitive procurement procedures (see table III.1). We believe it is relevant and informative to present the data elements that are consistent with, and representative of, the laws and reporting requirements for tracking competition that were in place in each time period. Major contributors to this report are listed in appendix V. If you have any questions concerning this report or need additional information, please call me at (202) 512-3092. The six key questions and the data we used to answer them are outlined in this appendix. Because our approach was at a macro level, we used as a general rule the data sources that were the most comprehensive with respect to 1975-95 and the defense industries we examined (aircraft, guided missiles, tanks, shipbuilding, ammunition and ordnance, and electronics and communications equipment). 1. What are the trends in DOD's total, procurement, and RDT&E budgets? We obtained our information on DOD's budgets from DOD's Office of the Comptroller and from DOD's Future Years Defense Plan (FYDP). We present the budget figures in terms of either total obligational authority or outlays, depending upon availability. We used outlays when they could be made available to us in a timely manner. They generally represent cash payments. "Total obligational authority" is a financial term that DOD uses to express the value of the direct defense program for a fiscal year. We transformed all FYDP budget figures from current dollars to constant-year dollars to correct for inflation, using 1995 as the base year. We used DOD deflators in adjusting current-year dollars to constant dollars. Where DOD's Office of the Comptroller sources reported constant dollar (fiscal year 1995) budget figures, we used them. Budget figures from 1945 to 1995 reflect both peacetime and wartime spending. DOD's Office of the Comptroller could provide the incremental costs (that is, outlays) associated only with the Vietnam War and the Desert Shield and Desert Storm conflicts. The aggregate incremental costs for Vietnam were $110.6 billion from 1965 to 1976, including the transition period. The aggregate incremental costs for Desert Shield and Desert Storm were $1.9 billion from 1990, projected to 1998. That they appear to have been considerably less than those for Vietnam may be partly because the Persian Gulf war was much shorter but also because DOD received offset payments for it from foreign nations that totaled at least $48.4 billion.
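A minimal sketch in Python of the current-to-constant-dollar conversion described above; the deflator values shown are hypothetical stand-ins for DOD's published deflators, which are indexed here so that fiscal year 1995 equals 1.000:

    # Hypothetical DOD deflators, indexed to fiscal year 1995 = 1.000.
    deflators = {1985: 0.780, 1990: 0.890, 1995: 1.000}

    def to_constant_1995(amount_current, year):
        """Deflate a current-dollar amount to constant fiscal year 1995 dollars."""
        return amount_current / deflators[year]

    # 80.0 billion current 1985 dollars is about 102.6 billion 1995 dollars.
    print(round(to_constant_1995(80.0, 1985), 1))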
2. What are the trends in the dollar amount of DOD procurement and RDT&E awards to defense contractors and subcontractors over time? We used data from publicly available reports provided by DOD's Directorate for Information, Operations, and Reports, Washington Headquarters Services (WHS), on the dollar amounts of obligations for prime contract awards and RDT&E awards for each category of major hard goods that DOD purchased. (See table I.1.) Table I.1 defines these categories as follows.
Aircraft: Complete aircraft, including helicopters; airframe assemblies and spares; aircraft engines and parts, propellers and hubs, instruments and parts, and jet engines and parts used without major modification on guided missiles; electrical equipment; accessories, including gun turrets, bomb racks and releases, rocket launchers, fuel tanks, droppable aircraft tanks, tires and tubes, control wires, and servo and other control mechanisms; special jigs, dies, and fixtures for fabricating only a specific model; maintenance tools peculiar to the aircraft and to the engine; ground handling equipment; assist takeoff other than droppable units; mobile training units; flight simulators.
Missiles and space systems: All missile and space system parts and related equipment procured from prime contractors; GFE electronic equipment; special jigs, dies, and fixtures; booster cases; ground handling and launching equipment; target drones.
Ships: Construction of vessels of all types, including assault boats and tracked amphibious vehicles such as LVTs; ship parts; ship armor not procured as weapons; shipborne deperming and degaussing equipment; aircraft catapults and arresting gear; floating cranes, floating drydocks, bridge erection boats, and production equipment procured as part of and mounted on floating equipment; special jigs, dies, and fixtures; total cost of services, civilian labor, and ship parts used in conversion, repair, overhaul, and modernization.
Tanks and other combat vehicles: Tanks and self-propelled gun motor carriages; other combat vehicles; combat vehicle parts; special jigs, dies, and fixtures; modification, private or government.
Noncombat vehicles: Trucks, ambulances, passenger cars, buses, motorcycles, and other motorized vehicles, including wheeled amphibious vehicles; power-driven decontaminating trucks; trailers and semi-trailers; truck tractors; repair, maintenance, and other special-purpose noncombat vehicles; bicycles; prime-contractor-furnished repair, rebuild, production, and service equipment; special jigs, dies, and fixtures; other accessories and parts; modification, private or public.
Weapons: Small arms, automatic weapons, mortars, artillery, guns, rocket and grenade launchers, and pyrotechnic projectors, including those mounted on vehicles, ships, and aircraft; flame throwers; smoke generators, land; torpedo tubes; harpoon protection nets and depth-charge protectors; wholly optical, electrical, or mechanical fire control equipment, including binoculars, bomb sights, other optical equipment, stop watches, and fire control mounts; nonelectronic portions of electronic fire control equipment; special jigs, dies, and fixtures; deperming and degaussing equipment.
Ammunition: Rockets, bombs, mines, grenades, torpedoes, depth charges, and other ammunition and demolition material and pyrotechnics; ATO units (droppable only) and fuel; rocket and guided-missile fuel; machine-gun links; ammunition parts; chemicals used in bombs, flame throwers, smoke generators, and ammunition; special jigs, dies, and fixtures.
WHS collects information on DOD prime contract and RDT&E awards (contract obligations) from Department of Defense Form 350 (DD350), "Individual Contract Action Report." The DD350 form is used to collect data on contract statistics within DOD.
The data gathered by means of the DD350 are used for reporting the size and distribution of DOD contracting actions; the types of contracts used; the numbers and amounts of contracts placed with categories of contractors, such as small, small disadvantaged, and women-owned small business concerns; the extent competed; and other essential facts about contract actions. Prior to 1982, the DD350 was completed only on contracts greater than $10,000. Since 1982, it has been completed for contract actions greater than $25,000. The data reported on the DD350 may be subject to operational errors in reporting, collecting, or coding the data for entry into databases or other electronic formats. We did not assess possible operational errors or other errors in the reporting procedures followed by WHS. WHS publishes information on awards to subcontractors from the information it receives from participants in DOD's mandated subcontracting program. This information is collected on Standard Form (SF) 295. The 1978 Amendments to the Small Business Investment Act of 1958 (15 U.S.C. 637(d) (1994)) require business firms that have received a contract in excess of $500,000, or a contract in excess of $1 million for construction, to establish a small business and small-disadvantaged business subcontracting program. The nature of DOD's reporting procedures makes it possible to determine the aggregate amounts of awards to subcontractors but not what the awards are made for. These data are not classified by procurement program or weapon system in the published sources that aggregate the data. Because DOD subcontractor awards go to both large and small businesses, this information cannot be used to make generalizations about a given "tier" of the defense industry. In addition, as stated above, we did not assess operational errors or other possible errors in the data collection and reporting procedures followed by WHS. 3. What are the trends in indicators of employment, productivity, and competition over time? In developing methods to address these issues, we interviewed and consulted with knowledgeable experts on the defense industry from the private and federal sectors, as well as defense contractors, on trends in employment, productivity, and competition. Given the scope of our work, the most comprehensive employment data were available from the Annual Survey of Manufactures series published by the Bureau of the Census. The macro-level quantitative data we used were indicators of productivity and data relevant to the evaluation of competition. For productivity, they included the value of production output in defense-concentrated industries from DOL's Bureau of Labor Statistics (BLS). From DOD offices and AIA records, we obtained limited information on units produced for some defense sectors. For competition, they included the dollars spent on procurement contracts awarded using competitive and other than competitive procedures, as identified on the DD350 form and retrieved from DOD's DD350 database. We describe these measures in detail below. The employment and productivity data were defined according to separate defense-concentrated industry groups—clusters of one or more manufacturing industries identified by four-digit Standard Industrial Classification codes. DOD, Commerce, and DOL, as well as the private research firms that study trends in the defense industry, refer to them as "defense-dependent" or "dominant" industries because a large proportion of their output is purchased for defense purposes.
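In code terms, a defense-concentrated industry group is simply a mapping from a group name to one or more four-digit SIC codes; a minimal sketch, in which the codes shown are commonly cited examples and the authoritative cluster definitions remain those used by DOD, Commerce, and DOL:

    # Defense-concentrated industry groups as clusters of four-digit SIC codes.
    # Codes are commonly cited examples, not the agencies' official definitions.
    industry_groups = {
        "aircraft": [3721],         # aircraft
        "guided missiles": [3761],  # guided missiles and space vehicles
        "shipbuilding": [3731],     # ship building and repairing
        "tanks": [3795],            # tanks and tank components
        "ammunition": [3483],       # ammunition, except for small arms
    }

    def group_for(sic_code):
        """Return the defense-concentrated group containing a SIC code, if any."""
        for group, codes in industry_groups.items():
            if sic_code in codes:
                return group
        return None

    print(group_for(3761))  # guided missiles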
For example, in 1985, the shipbuilding, ammunition (except small arms ammunition), ordnance (not elsewhere classified), and aircraft and missile engines industries produced 75 percent or more of their output for defense. Nondefense-related output produced by these industries may be purchased by commercial companies, other U.S. government offices, or international companies. When we conducted our work for this report, aircraft, guided missiles, ammunition and ordnance, tanks, electronics and communications equipment, and shipbuilding and repairing were the principal defense-concentrated industry groups identified by defense industry researchers in federal agencies and private research organizations. The value of production output in defense-concentrated industries is measured by productivity indexes that we obtained from BLS' Office of Productivity and Technology. The index BLS provided—the constant-dollar value of production output per hour—is derived by dividing an index of the value of production (shipments, revenue, or sales) in each of the manufacturing industries by an index of aggregate employee hours. This is a standard measure of productivity used in BLS' program of productivity measurement and technology studies. The limited data on procurement or production rates we used came from reports prepared by AIA and by DOD's Office of Economic Security and PA&E. Extant data from DOD's DD350 database gave us information about the competitive nature of procurement contracts awarded for major weapon systems or components. We retrieved and analyzed data from the blocks of information on the DD350 that specifically indicated the extent of competitive procedures used to award contracts in the pre- and post-CICA time periods (see table III.1). These data provide an indication of the processes DOD uses (that is, competitive or noncompetitive) in awarding weapon procurement contracts to defense contractors. We did not determine the degree to which these processes are reliable indicators of competition or noncompetition within the defense industry. 4. How are employment, productivity, and competition related to indicators of defense spending? The available quantitative data permitted us to provide a limited response to this question. We developed methods that made use of existing information, and we supplemented the quantitative data with information we collected from the experts we spoke to and our review of the existing literature. Our quantitative method for examining the relationship between indicators of defense spending and productivity over time involved generating correlation coefficients between available measures of defense budgets and the BLS productivity indexes. For example, we correlated the productivity indexes for the tank industry with DOD's budgets for tank procurement for the years available. We used the same approach in examining the relationship between indicators of defense spending and defense-related employment: correlating the available measures of total employment in the defense-concentrated industrial sectors from Census with the total dollar amount of contracts awarded for procuring major hard goods (see table I.1 for the categories of major hard goods) over the period of our study (data were available only for the period 1975-91). We also conducted the analysis using DOD budgets (FYDP, TOA) as an indicator of DOD spending linked to the defense-concentrated industrial sectors.
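The two computations just described, deriving a productivity index by dividing an output index by an hours index and then correlating a series with budgets, can be sketched as follows; all series are hypothetical:

    from math import sqrt

    def pearson_r(x, y):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
        sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
        return cov / (sd_x * sd_y)

    # Hypothetical yearly series for one industry.
    output_index = [100, 104, 109, 113, 118]  # value of production
    hours_index = [100, 99, 97, 96, 94]       # aggregate employee hours
    productivity = [o / h for o, h in zip(output_index, hours_index)]

    budgets = [96.0, 93.5, 90.1, 88.2, 85.0]  # constant-dollar, hypothetical
    print(round(pearson_r(productivity, budgets), 2))  # near -1: budgets fall as productivity rises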
An important issue in selecting appropriate measures of employment in defense-concentrated industrial sectors and an indicator of spending (that is, outlays) linked to those sectors was ensuring that the measures were independent of one another. For example, in the course of our work we discovered that the data on defense-related industry employment reported in DOD's series of reports on national defense budget estimates (also known as the "green book") are not independent of the data on procurement outlays also reported in that series. The lack of independence between the two data sets calls into question the validity, or accuracy, of any correlational analysis done using these data and, of course, of any resultant correlation coefficient. Given the available data, we were unable to develop a comparable quantitative method for addressing the relationship between competition and levels of defense spending over time. Our interviews on these relationships with defense contractors and defense industry experts supplemented the quantitative information on defense industry competition that we were able to obtain. 5. What are the trends in the financial indicators of major defense contractors over time? Considerable variability characterizes the methods used to determine appropriate indicators of financial viability. For example, defense industry analysts at the Center for Strategic and Budgetary Assessments indicate that there are at least 12 ways to conduct financial assessments of the defense industrial base. We interviewed Wall Street business analysts, reviewed the procedures DOD recommends for conducting financial assessments, and spoke with defense contractors and industry experts. They agreed that financial viability is best assessed with multiple indicators. We used sales and cash flow because they are conventional indicators and because information on them could easily be retrieved from Standard and Poor's COMPUSTAT database. Other measures or variables from company income statements that can be used to analyze financial viability include gross income, operating income, and net income. Our sample of defense companies included those among the top 100 that received the largest dollar amount of DOD prime contract awards in 1994. So that most of the defense industries would be represented, we included companies that have business units in one or more of the defense industry segments. 6. What is the relationship between indicators of defense spending and indicators of the financial status of major defense contractors over time? Our focus was predominantly on trends in the financial status of companies in the last several years of the recent defense spending reduction. We supplemented the information on corporate sales and cash flow from Standard and Poor's database with reviews of DOD's assessments of the financial state of major defense companies since the end of the Cold War. We also incorporated into our review the perspectives of Wall Street experts and defense contractors. When we collected our information, data for all years and industries in our study were not available; our depiction of trends in some years and industries may therefore be incomplete. Existing data sources do not collect or specifically identify comprehensive data that distinguish DOD's subcontractors from large defense contractors.
Therefore, unless we have indicated otherwise, we could not define the data by the size of a business or its position in the defense industry "hierarchy." Unless noted otherwise, findings generated with these data may reflect potential error introduced by estimation or modeling procedures or by the data collection and reporting procedures of the offices that provided the original data used in our work. In this section, we present the results of the correlational analysis conducted on the available measures of employment in defense-concentrated industrial sectors, obtained from Census, and defense spending linked to those sectors, obtained from DOD's records of contract awards for major hard goods procurement. Correlational analysis provides one indication of how two or more variables are related. The possible range of a correlation is -1 to +1. A correlation of zero means that two variables have no linear association. A negative correlation means that large values of one variable are associated with small values of the other. A variable correlated with itself returns a correlation of 1. A correlation coefficient provides an indication of the strength and direction of a linear relationship. In this case, the observed correlation coefficient allows us to determine the strength of the relationship between indicators of defense industry employment and defense spending over a specific time period. The procedures used to generate a correlation coefficient cannot, by themselves, determine whether changes in one variable cause changes in another; more analysis is required to reach such conclusions. Declining defense-related employment since the post-Cold War spending reduction began has been described above. We statistically compared total employment in the defense-concentrated industrial sectors for which data were available (aircraft, ammunition and ordnance, shipbuilding, electronics and communications equipment, and tank manufacturing) to an indicator of defense spending linked to those sectors (the total amount of contract awards for major hard goods procurement) for the years data were available (1975-91). The observed correlation, r = .27, indicates that the relationship is not large and is less than values considered moderate in size. In addition to the limits of correlational analysis stated above, other factors limit the ability to generate definite determinations or generalizations about the relationship between defense spending and defense industry employment. At a minimum, they include the absence of fully comprehensive data on DOD spending specifically attributed to the "defense industrial base" and the use of estimation or modeling procedures to generate defense industry employment data. Census documents provided additional data about employment trends in defense-concentrated industrial sectors from the Annual Survey of Manufactures, which includes the numbers of all employees as well as of production workers in defense-concentrated industries. From these data, we calculated the ratio of nonproduction employees to production workers. Figure III.1 shows the trends in these ratios for 1975-92. Production workers have consistently been fewer than nonproduction workers in the guided missile industry: in all years, the ratios of nonproduction to production employees are consistently greater than 1. Moreover, the ratios of nonproduction to production workers in the guided missile industry are considerably higher than in all other industries.
In more recent years, the ratios have increased in the missile, aircraft, tank, and ammunition manufacturing industries, indicating that the split between production and nonproduction workers is widening. In 1993, TASC reported that the defense sector employed a high proportion of engineers and technicians and relatively few production workers. Officials whom we interviewed at Lockheed Martin also indicated that there are no major defense companies in the manufacturing business anymore. To provide an indication of the relationship between trends in BLS' productivity indexes (value of production) and trends in DOD's budgets, we compared them statistically through a correlational analysis. Figure III.2 shows the correlation between BLS indexes of productivity and trends in DOD's procurement budgets for five industries, including electronics and communications equipment. During 1975-86, budgets increased along with the value of production (the correlation coefficients are all positive). For more recent years for which data were available (1987-91), the value of production continued to increase but defense budgets did not (the correlation coefficients are all negative). The shipbuilding and repairing industry data differ from those of the other industry groups. The strength of the relationship between budgets and productivity is weaker, and the direction of the relationship in recent years is positive. There could be any number of reasons for this, ranging from disparities in the data to unique aspects of shipbuilding. However, the nature of the data we were provided and the statistical technique we applied do not permit us to specify explanations. We simply note that there is some apparent difference. There are multiple ways of defining, conceptualizing, and measuring competition. The data that were available on competition permit a limited discussion and presentation of information about this issue. From data reported on the DD350, we were able to develop methods for addressing the extent to which competitive and noncompetitive procedures were used in major systems procurement. From the available information, we determined the total dollar amounts associated with these processes for procurement of the major hard goods listed in table I.1 for the time period and scope covered in our work. Among other data elements, the DD350 provides data on the processes (competitive or noncompetitive) that DOD has used in awarding procurement contracts for weapons. Because DOD is the primary, and in some cases only, buyer of weapons produced by U.S. defense contractors, the processes and patterns it uses in purchasing goods and services are relevant to understanding the potential effect on the business practices of defense firms and the broader defense industrial base. However, because competition is a multifaceted concept and we did not determine the extent to which the DD350 measures of competition are reliable or valid, this information should be considered an indicator of DOD's use of competitive or other than competitive processes. DOD's pre- and post-CICA definitions, as reported on the DD350 and used as guides in our work, are shown in table III.1; the definitions and categories differ because of the 1985 enactment of CICA (pre-CICA data are available for 1977-85, post-CICA data for 1986-94). The DOD data show that, on average, in 1977-94, more money was associated with major systems procurements that were awarded using DOD's other than competitive procedures than with those awarded competitively (see figures 5 and 6).
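As a concrete illustration of how such totals can be derived from award-level records, the sketch below aggregates contract dollars by fiscal year and award process. The record layout, field names, and values are hypothetical stand-ins for DD350-style data, which contain many more data elements.

    from collections import defaultdict

    # Hypothetical DD350-style records: fiscal year, procurement program,
    # whether the award used competitive procedures, and dollars (millions).
    records = [
        {"fy": 1990, "program": "aircraft", "competitive": False, "dollars": 310.0},
        {"fy": 1990, "program": "aircraft", "competitive": True,  "dollars": 120.0},
        {"fy": 1991, "program": "missiles", "competitive": False, "dollars":  95.0},
        {"fy": 1991, "program": "missiles", "competitive": True,  "dollars":  60.0},
    ]

    totals = defaultdict(float)   # (fiscal year, competitive?) -> total dollars
    for rec in records:
        totals[(rec["fy"], rec["competitive"])] += rec["dollars"]

    for (fy, competitive), amount in sorted(totals.items()):
        label = "competitive" if competitive else "other than competitive"
        print(f"FY{fy}  {label:<22} ${amount:,.1f}M")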
In our analysis, we found not only greater average dollar amounts associated with other than competitive procurement procedures but also more money spent on other than competitive contracts for electronics and communications equipment, ammunition, weapons, and aircraft in every year of the 18-year period. With exceptions in a few years, we found the same trend for procurement contracts for missiles, tanks, and ships. These trends are detailed in figures III.3 through III.16 for each procurement program and separately for the pre- and post-CICA time periods. In recent years, the amount of money associated with DOD competitive and other than competitive contracts has declined for most procurement programs. At the same time, the gap between money awarded on competitive and other than competitive contracts is getting smaller. The only exception is contract dollars DOD spent on aircraft procurement, as shown in figure III.10. The split between dollars spent for competitive and other than competitive aircraft procurement has actually increased in recent years, such that increasingly greater amounts of money are associated with aircraft procurement contracts that DOD awards with procedures it defines as other than competitive. In addition to data trends on the processes DOD uses to award procurement contracts, another indicator relevant to understanding competition in the defense industry is the number of businesses available to enter into competition. In the post-Cold War period, the number of independent defense contracting businesses has declined, through business combinations or through exits from the defense business. From a broader historical view, figure III.17 shows that there are clearly fewer contractors today than 50 years ago, at the end of World War II. There is some consensus that the top defense firms have come through the post-Cold War funding reduction well and that they remain profitable. The Wall Street analysts whom we spoke with indicated that the forecasts for defense companies are not as bad as the companies perceive. Analysts at Lehman Brothers indicated that companies are more pessimistic than necessary about future growth, given DOD's plans to increase defense spending in the outyears. It is important to note, however, that many companies have gone through difficult reorganizations and employee losses as they have made the transition to decreased defense spending. Still, companies have not all been equally affected, and some variability in the financial indicators of the top companies suggests that the period of reduction has been less painful for some companies than for others. We selected the financial indicators to examine—sales and cash flow—after our discussions with Wall Street defense business analysts, defense contractors, and experts from the private sector and after reviewing DOD's work in this area. The companies that we report data for are those among the top 100 that received the largest dollar volume of DOD prime contract awards in 1994. Figures IV.1 and IV.2 show multiyear trends for five top defense contractors whose corporate sales and cash flow fell, while figures IV.3 and IV.4 show multiyear trends for four top defense contractors whose corporate sales and cash flow rose or remained stable. The sales and cash flow figures shown are the amounts reported for that year.
Sales and cash flow may appear to be better for some companies than for others for many reasons. A detailed assessment of this issue was beyond the scope of our work. However, possible explanations include how diversified a company is; the range and number of defense segments a company has (for example, General Dynamics is involved in both shipbuilding and tank manufacturing); transactions associated with business combination or divestiture activity (for example, at Lockheed Martin, Northrop Grumman, and General Dynamics, as well as acquisitions by Raytheon, Loral, and FMC); and other profitable business segments companies have with other federal agencies, commercial clients, and foreign customers. Some of the largest changes shown in the figures reflect merger and acquisition activity, such as the growth in sales and cash flow for Lockheed Martin and the declining sales and cash flow for General Dynamics. Other noteworthy contributions to the work were made by Winslow Wheeler, who contributed to the project's early direction, and Robert Copeland, who contributed to the study design.
Pursuant to a congressional request, GAO reviewed productivity and competition in the defense industrial base since the end of the Cold War, focusing on: (1) overall trends in productivity, competition, and other financial indicators in the defense industry over time, where possible; and (2) the relationship between these trends and indicators of defense spending over time, where possible. GAO found that: (1) the size and nature of the defense industrial base is critically shaped by the amount and emphasis of U.S. defense outlays; (2) with regard to trends in actual expenditures in segments of the defense industrial base, after adjustments for inflation, recent spending on procurement and RDT&E contract awards is similar to spending just prior to the peacetime defense buildup of the early 1980s; (3) aside from outlays, there are other differences in today's industrial base compared to past periods; (4) Department of Defense (DOD) and Department of Labor (DOL) data on productivity in defense-concentrated industries, and other studies on productivity, indicate that the value of output has increased over time while the quantity of output has decreased; (5) the business environment for the defense industry has also changed over the years; (6) recent defense contractor mergers and acquisitions are seen as a trend that will perpetuate constraints on the number and nature of businesses that may be willing and able to compete for business with DOD; (7) this smaller set of contractors is operating in an environment where DOD tends to award more money on weapon procurement contracts using other than full and open competition; (8) little is known about how the ongoing reconfiguration of the defense industrial base will affect or be affected by these trends in DOD weapon procurement processes; (9) defense industry employment is another key factor affected by changes in the industrial base; (10) the loss of jobs related to the reduction in defense budgets is widely documented, although estimates and projections vary; (11) GAO found a correlation, or statistical relationship, between an indicator of employment in defense-concentrated industrial sectors and an indicator of procurement outlays in those sectors for the period 1975-91 that is not large and is less than values considered moderate in size; (12) market forces and expectations about future trends in DOD budgets have facilitated the restructuring of the defense industrial base; (13) DOD's industrial assessments indicate that companies have been profitable since the funding drawdown and that its needs can be met in the segments it has assessed; (14) as part of its current "Defense Acquisition Reform vision" and under the Federal Acquisition Streamlining Act, DOD has recently launched and piloted several new acquisition reform programs intended to achieve greater efficiency and value in weapons procurement and to reduce unnecessary costs; and (15) although these efforts are aimed at addressing critical and relevant issues for the defense industrial base, it is too early to tell what their full effects will be.
In a 2001 report, we estimated that by the end of fiscal year 2006 about 31 percent of the 24 CFO agencies' employees working in 1998, or 493,000 people, would be eligible to retire and that about half of the eligible employees (236,000 people, the equivalent of 15 percent of the 1998 workforce) would actually retire. We included the SES in our analysis but did not separately analyze or break out data for the SES. In 2000, we reported on SES retirement eligibility and pointed out that because individuals normally do not enter the SES until well into their careers, SES retirement eligibility is much higher than for the workforce in general. Our analysis showed that 71 percent of the almost 6,000 career SES members employed as of October 1, 1998, would reach regular retirement eligibility by the end of fiscal year 2005. We concluded that the retirement eligibility trends of the SES point to the importance of agencies placing appropriate emphasis and attention on SES succession planning because SES retirements will result in a loss of leadership continuity, institutional knowledge, and expertise among the government's top career managers. The importance that we place on workforce planning, including planning related to employee retirement, is illustrated by our designation of strategic human capital management as a governmentwide high-risk area that needs urgent attention to ensure that the federal government functions economically, efficiently, and effectively. The Civil Service Reform Act of 1978, which established the SES, states, among other things, that the policy of the federal government is to ensure equal employment opportunity in the workforce. It is generally recognized that a diverse SES corps can be an organizational strength that contributes to achieving results. In fact, we consider diversity so important that our model of strategic human capital management for federal agencies identifies it as one of the eight critical success factors. The demographics of the public served by the federal government are changing, and diversity has evolved from public policy to a business need. SES losses over the next several years present both a challenge for the federal government in filling the vacant positions and an opportunity to affect, through selections to the SES, the diversity of the corps. Because of the wave of retirements and normal attrition for other reasons, the federal government will have the challenge and opportunity to replace over half of its SES corps during fiscal years 2001 through 2007. Our simulation estimates that almost 3,400 of the 6,100 career SES members as of October 2000 will have left the service by October 2007. While a large portion of the GS-15s and GS-14s who represent the primary pool of replacements will also have left by October 2007, substantial numbers of minorities and women will be among the potential SES candidates in that pool. However, if current SES appointment trends continue, the proportion of the SES represented by minorities will remain essentially unchanged. Table 1 presents, by racial, ethnic, and gender group, the results of our simulation of SES attrition and our projection of SES appointments using current trends. The simulation estimates that 56 percent of the SES members who held positions at the start of fiscal year 2001 will leave service during the ensuing 7 years.
The table also shows that the racial/ethnic profile of those SES members who will remain in the service throughout the 7-year period will be about the same as it was for all SES members in October 2000. This is because minorities will be leaving at essentially the same rate overall as white members. Thus, any change in minority representation will be the result of new appointments to the SES. However, as the last columns of table 1 show, if current appointment trends continue, the result of replacing over half of the SES will be a corps whose racial and ethnic profile is virtually the same as it was before. The outlook regarding gender diversity is somewhat different: the percentage represented by white women is estimated to increase by 4 percentage points, and that of minority women only minimally, by 0.5 percentage point. The proportion representing minority men is estimated to be virtually unchanged, increasing only 0.2 percentage point, while white men's proportion will decrease by 5 percentage points. To ascertain what the racial, ethnic, and gender profile of the candidate pool for SES replacements will look like, we performed the same simulations and projections for GS-15s and GS-14s as we did for the SES. Over 80 percent of career SES appointments of federal employees come from the ranks of GS-15s. Similarly, those promoted to GS-15 are GS-14s over 90 percent of the time. Table 2 presents the results for GS-15s and table 3 for GS-14s. The results for both are similar to those for the SES, but a somewhat lower proportion will leave because GS-15s and GS-14s are generally younger and have somewhat different propensities to leave than SES members. Almost half of the GS-15s (24,499, or 47 percent) and about a third of GS-14s (28,419, or 34 percent) will have left government service by October 2007, according to our simulation. Minority representation among those GS-15s who remain will be about the same as it was in fiscal year 2001, indicating that whites and minorities will leave at about the same rates. However, the proportion of the remaining GS-14s represented by minorities will increase somewhat (by 1.5 percentage points), and the proportion of both grades represented by white and minority women will also increase. Moreover, if current promotion trends to GS-15 and GS-14 continue, marginal gains by almost all of the racial and ethnic groups would result. Our simulation shows that significant numbers of minority GS-15s (4,673) and GS-14s (10,567) will be employed throughout fiscal years 2000 through 2007, and our projection of promotions also shows substantial numbers of minorities at the GS-15 (8,957) and GS-14 (15,672) levels. These numbers indicate that significant numbers of minority candidates for appointment to the SES should be available. With respect to gender, the percentage of white women at GS-15 is projected to increase by 2.6 percentage points and at GS-14 by 1.0 percentage point. The proportions of minority women will increase by 0.9 percentage point for GS-15s and 0.6 percentage point for GS-14s, while those for minority men will increase 0.7 percentage point for GS-15s and 0.5 percentage point for GS-14s. White men will represent 4.2 percentage points less of GS-15s and 2.1 percentage points less of GS-14s. The results of our simulation of SES attrition and our projection of appointments to the SES over the October 1, 2000, through September 30, 2007, period show variation across the 24 CFO agencies, as illustrated in table 4.
However, as with the governmentwide numbers discussed in the previous section, most agencies are projected to increase the proportion of women in the SES, particularly white women, and to decrease the proportion of white men, while the proportion represented by minorities tends to change relatively little. Our estimates of SES attrition at individual agencies by racial, ethnic, and gender group are likely to be less precise than our overall SES estimates because of the smaller numbers involved. Nevertheless, the agency-specific numbers should be indicative of what agency profiles will look like on October 1, 2007, if current appointment trends continue. The racial, ethnic, and gender profiles of the career SES at the 24 CFO agencies varied significantly on October 1, 2000. The representation of women ranged from 13.7 percent to 36.1 percent, with half of the agencies having 27 percent or fewer women. Minority representation varied even more, ranging from 3.1 percent to 35.6 percent, with half of the agencies having less than 15 percent minorities in the SES. Detailed data on each CFO agency in the same format as tables 1, 2, and 3 are included in appendix II. Our simulation results also varied for the proportion of SES members who will leave service by October 1, 2007, but most of the CFO agencies are estimated to lose at least half of their SES corps. The effect on representation of minorities and women in the residual SES also varies but exhibits little change at most agencies for minorities from the October 1, 2000, profile. Only 3 agencies exhibited increases in minority representation of more than 1 percentage point. Increases for women were once again higher, with only 1 agency having an increase of less than 3 percentage points. Most of the changes for women were accounted for by white women. Our projection of what the SES would look like if current appointment trends continued through October 1, 2007, also showed variation, with 12 agencies having increased minority representation and 10 having less. While projected changes for women are often appreciable, with 16 agencies having gains of 4 percentage points or more and no decreases, projected minority representation changes in the SES at most of the CFO agencies are small, exceeding a 2 percentage point increase at only 6 agencies, with 10 agencies having decreases. The diversity picture for GS-15s and GS-14s is somewhat better than that for the SES at most agencies. The main differences from the SES are that a smaller proportion of the GS-15 and GS-14 population is estimated to leave government service and that projected representation of minorities tends to be somewhat greater for those grades than for the SES. Even after considering estimated attrition, agencies tend to have substantial numbers of minorities and women in the SES replacement pool, and projected promotions to GS-15 and GS-14 increase those numbers. As mentioned above, appendix II presents detailed information on GS-15s and GS-14s at the CFO agencies. Again, our estimates for the GS-15 and GS-14 populations at individual agencies are likely to be less precise than our governmentwide figures because of the smaller numbers involved but should be indicative of what agency profiles will look like in October 2007. OPM has recently reaffirmed its commitment to diversity in the career SES and has provided guidance to federal departments and agencies on maintaining and increasing workforce diversity.
The four federal agencies we visited had implemented, or were in the process of implementing, many if not all of the steps recommended by OPM in its guidance. OPM, EEOC, and the four federal agencies we visited all said that our analysis was an accurate reflection of the likely future composition of the career SES if current patterns of selection and attrition continue. They all said that more diversity was needed in the SES and that, based on our estimates, more efforts would need to be taken if diversity is to increase. In an April 2002 memorandum to federal departments and agencies, the Director of OPM reaffirmed OPM's commitment to diversity in the SES. About 2 years before, in June 2000, OPM provided comprehensive guidance to federal departments and agencies for building and maintaining a diverse workforce. OPM recommended the following: incorporate diversity program activities and objectives into agency workforce planning and executive succession planning; incorporate diversity into recruitment planning and activities, and use tools and techniques that are more likely to discover and attract a more diverse field of candidates (e.g., visits to majority-minority campuses, partnerships with minority organizations, and advertisement in specialty media); continually monitor the agency workforce profile and the numbers of women and minorities participating in agency development programs; and build accountability for hiring, retaining, and developing a diverse, high-quality workforce into the performance management system for managers and supervisors. We visited the departments of Energy, the Interior, and Veterans Affairs (VA) and the Social Security Administration (SSA), each of which has had efforts in these OPM-recommended areas under way, often for a number of years. According to agency officials, three of the four agencies have diversity goals for their career SES. Officials at all four agencies told us that they have in place, or are putting in place, agencywide human capital planning and executive succession management that includes diversity as an element in planning. All four agencies have programs for entry-level minority recruiting and for leadership development, which they believe will lead to an increased minority presence in leadership and executive ranks sometime in the future. Officials at all four agencies said that building a diverse workforce was an element in their current performance evaluation for agency executives. All four agencies either have in place or are putting in place human capital information systems that will be able to generate, periodically or on request, reports to management officials on the diversity of the current workforce at any level and on the ethnic and gender composition of recent agency hires. (See app. III for details on the responses of OPM and the four agencies concerning workforce diversity in general and SES diversity in particular.)

Response to GAO's Analysis

OPM officials, after reviewing our analysis of present SES diversity and projections of future SES composition, said that women and minorities continued to be underrepresented in the federal executive corps. They said that it would be unsatisfactory if the racial, ethnic, and gender composition of the career SES in 2007 were as we projected. EEOC expressed concern about the trends suggested by our analyses to the extent that they may point to the presence of arbitrary barriers that limit qualified members of any group from advancing into the SES.
Energy, Interior, VA, and SSA said that our analysis of their current and future career SES diversity was reasonable. All of the agencies agreed that improvements needed to be made in their current SES diversity. All of them also said that the composition of the career SES that we projected if present selection trends continued would not be acceptable. Moreover, all four agreed that they would need to undertake additional efforts beyond those currently used if diversity is, in fact, to be enhanced. We asked OPM, EEOC, and the four agencies we visited for comments on a draft of this report. The Director of OPM's comments are reprinted in appendix IV. Also, the comments of EEOC's Acting Director of Communications and Legislative Affairs are reprinted in appendix V, those from the Commissioner of SSA in appendix VI, those from the Secretary of Veterans Affairs in appendix VII, and those from the Director, Office of Management, Budget and Evaluation, Department of Energy, in appendix VIII. The Department of the Interior said that it had no comments on the draft report. OPM said that it concurs with our findings and welcomes the attention this report will bring to a critical opportunity facing the federal workforce and federal hiring officials. The Director said that increasing diversity in the executive ranks continues to be a top priority for OPM and that the agency has been proactive in its efforts to help federal agencies obtain and retain a diverse workforce, particularly in the senior ranks. She also said that talk is not enough and only results matter, and that OPM itself in the past year has expanded its efforts to attract senior managers from government and the private sector and changed its SES performance standards to make recruitment of qualified minorities a priority. The comments cited several efforts OPM had under way to promote diversity, such as leading the Interagency Task Force on Hispanic Employment and reaching out to the next generation of public servants—college and university undergraduates—as they begin to choose career paths by such actions as hosting a reception in January 2003 for students from Historically Black Colleges and Universities to introduce them to and explain the hiring process for the federal government. EEOC said that the projected large losses in the SES ranks present the government with both a challenge and an opportunity to further strengthen the SES through employment practices that will ensure that the SES corps is staffed with the best and brightest talent available regardless of race, ethnicity, gender, or disability. EEOC went on to say that in the years ahead, federal agencies will need to continue their vigilance in ensuring a level playing field for all federal workers and should explore proactive strategies, such as succession planning, SES development, and mentoring programs for midlevel employees, to ensure a diverse group of highly qualified candidates for SES positions. EEOC listed a number of initiatives it said it had begun recently to help agencies identify and remove barriers to free and open competition in the federal workplace. However, most of these initiatives were related to the equal employment opportunity (EEO) complaint process and EEO data reporting and only tangentially to SES diversity. SSA said that it agrees with the report's findings and conclusions, including the value of all the OPM-recommended actions we cite in the report. SSA said that it has been implementing such actions since late 1998.
SSA also commented that it agrees with all of the information we present about SSA's efforts to increase SES diversity. Energy said that more needs to be done to enhance SES diversity, particularly in light of the retirement and other losses anticipated over the next 5 years. Energy said that the attrition has important implications for government management and emphasizes the need for good succession planning as well as racial, ethnic, and gender diversity in the SES corps. Energy reiterated the efforts it has under way, discussed earlier in this report and in appendix III, to enhance SES diversity in the future and said that it is committed to enhancing diversity. VA commented that it generally agreed with our observations. VA said that it is undertaking efforts to increase diversity within its SES ranks. EEOC, Energy, and VA also made technical comments, which we incorporated where appropriate. Unless you announce its contents earlier, we will make no further distribution of this report until 30 days after its date. At that time, we will send copies to the Director of OPM, the Chair of EEOC, the heads of the 24 CFO agencies covered by the report, and other interested committees and members of Congress. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have questions, please call me on (202) 512-6806 or contact Thomas G. Dowdal, Assistant Director, on (202) 512-6588 or at [email protected]. Major contributors to this report are listed in appendix IX. Our objectives were to (1) identify the effect of estimated employment separations on the racial, ethnic, and gender diversity among the career Senior Executive Service (SES), GS-15s, and GS-14s in the 24 Chief Financial Officer (CFO) agencies and governmentwide, (2) determine the effect of estimated appointments to refill these vacancies on diversity, and (3) obtain from the Office of Personnel Management (OPM), the Equal Employment Opportunity Commission (EEOC), and four selected agencies their observations on our estimates and on SES diversity during this time of change. To determine the effect of estimated employment separations and appointments to fill these vacancies on racial, ethnic, and gender diversity, we analyzed personnel data from OPM's Central Personnel Data File (CPDF) to determine past trends and used statistical simulation to estimate future trends. We analyzed separation data for fiscal years 1996 through 2000 (the most recent data available at the time we started our analyses) for each of the three grade levels. We counted as separations voluntary retirements, other retirements, resignations, deaths, and terminations for poor performance or conduct. For separations, we calculated each employee's years of service by finding the difference between the service computation date (we used the 15th of each month as the day) and the date of actual separation. Similarly, we calculated age at separation by finding the difference between the date of birth (we used the 15th of each month as the day) and the date of separation. We calculated the age and years of service for employees at the end of the fiscal year similarly but used September 30 as the end date.
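A minimal sketch of this date arithmetic appears below, using the 15th-of-the-month convention described above. The employee dates are invented for illustration.

    from datetime import date

    def years_between(start: date, end: date) -> float:
        """Elapsed years between two dates, expressed as a decimal."""
        return (end - start).days / 365.25

    # Hypothetical employee: day of month fixed at the 15th, per the convention.
    service_computation_date = date(1972, 6, 15)
    date_of_birth = date(1948, 3, 15)
    fiscal_year_end = date(2000, 9, 30)   # end of fiscal year 2000

    print(f"years of service: {years_between(service_computation_date, fiscal_year_end):.1f}")
    print(f"age:              {years_between(date_of_birth, fiscal_year_end):.1f}")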
For each 2-year interval in the years of service and age of those who separated from service during the period 1996 through 2000, we calculated the probability of leaving by dividing the number of workers who separated by the number of employees in the workforce with a similar combination of years of service and age who were on board at the end of the fiscal year preceding the fiscal year of separations. We put all staff members who were 67 years and older into a single group. We similarly grouped all employees with 32 or more years of service. Using regression, we then modeled the rate of separation as a function of years of service, age, agency type (civilian or defense), race (white or nonwhite), and gender. We developed an equation that estimates the rate of separation for any combination of age, years of service, agency type, race, and gender. We applied a statistical simulation technique to each of the 83,153 GS-14, 51,826 GS-15, and 6,110 SES workforce members on board as of September 30, 2000 (the end of fiscal year 2000). Each employee's age and years of service, as well as race, ethnicity, gender, and agency type, were used as input into the simulation. An employee was considered to have "separated" if the predicted rate of separation (from zero to 1.0) was greater than a simulation-generated random number from zero to 1; if the predicted value was less than the generated random number, the employee was deemed not to have separated. This process was repeated for each employee. The process was then continued for those employees who were not estimated as having separated in fiscal year 2001, but with each employee now being 1 year older and having 1 more year of service. As before, the employee's new age and new years of service were used as input into the model, and a predicted rate of separation was contrasted with a new generated random number to determine whether the employee was considered as separated in fiscal year 2002. A separation decision was made for each of the remaining employees, and each employee was either counted as having separated in fiscal year 2002 or 1 year was again added to both age and years of service. This process was repeated seven times, once for each year from fiscal year 2001 through fiscal year 2007. The total number of separations by grade level, agency type, race, ethnicity, and gender was calculated across all 7 years. To determine how many employees remained after the separations, we subtracted the number separated for each combination of grade level, race, ethnicity, and gender from the total number of staff in the agency at that grade level. To determine what percentage of this remaining workforce was in each of the race, ethnicity, and gender groups, we divided the number remaining for each race, ethnicity, and gender by the total number of staff remaining at each agency. To estimate the race and gender profile of expected appointments in fiscal years 2001 through 2007, we analyzed, by agency, career appointment trends for the SES, GS-15, and GS-14 grade levels by race and gender for fiscal years 1995 through 2000. We included conversions, appointments (new hires), and promotions into each grade level. We combined the data for each fiscal year into a single 6-year total for each grade level. We determined what percentage each equal employment opportunity (EEO) group (race and ethnicity by gender; for example, African-American females) constituted of these past accessions for each level.
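Before turning to the accession projections, the sketch below makes the separation simulation just described concrete. It is a simplified illustration under stated assumptions: predict_rate() is a hypothetical stand-in for the fitted regression equation (the actual model also used race, gender, and agency type as predictors), and the employee records are invented.

    import random

    def predict_rate(age: int, yos: int) -> float:
        # Stand-in for the regression equation; returns a separation rate 0-1.
        # The actual model also included race, gender, and agency type.
        rate = 0.02 + 0.005 * max(0, age - 50) + 0.004 * max(0, yos - 20)
        return min(1.0, rate)

    # Hypothetical employees on board at the end of fiscal year 2000.
    employees = [{"age": 52, "yos": 24}, {"age": 45, "yos": 18}, {"age": 58, "yos": 30}]

    separations = 0
    for emp in employees:
        for year in range(2001, 2008):      # fiscal years 2001 through 2007
            if predict_rate(emp["age"], emp["yos"]) > random.random():
                separations += 1            # separated this year; stop simulating
                break
            emp["age"] += 1                 # still on board: 1 year older,
            emp["yos"] += 1                 # with 1 more year of service

    print(f"simulated separations, FY2001-FY2007: {separations}")

In the actual analysis, this entire 7-year pass was repeated 100 times, which is what allowed confidence intervals to be calculated around the reported estimates.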
We assumed that the number of separations estimated for fiscal years 2001 through 2007 would all be refilled through accessions. Therefore, the number of estimated accessions equaled the number of estimated separations. To estimate how many accessions would occur for each EEO group during fiscal years 2001 through 2007, we multiplied an agency's predicted total separations by the EEO group's percentage of the agency's past accessions. For example, if the Department of Commerce was expected to have 100 separations for a particular grade level and African-American males were 6 percent of Commerce's past accessions at that grade level, then we estimated that Commerce's accessions to fill these 100 vacancies would include six African-American males (100 separations times 0.06). These calculations were done for the period from fiscal years 2001 through 2007 as a whole, not year by year. To assess the effect that accessions to refill the positions vacated by separations had on the EEO profile, we added these expected career accessions to the career staff members we expected to remain after separations in each agency at each grade level at the end of fiscal year 2007. We then calculated the percentage for each racial and ethnic group by gender in each grade level within each agency. These calculations were done for each of the 100 iterations of separation predictions for each race, ethnicity, gender, agency, and grade level. Because these replacement calculations were done for each of the 100 iterations, we determined confidence intervals around all estimates reported. In general, estimates for EEO groups at specific agencies are likely to be less precise, with wider confidence intervals, than the governmentwide ones. We used regression analysis to develop the statistical model that most closely fits actual rates of separation over fiscal years 1996 through 2000, using five variables: (1) age at the end of each year for fiscal years 2001 through 2007, (2) years of service at the end of each year for fiscal years 2001 through 2007, (3) race and ethnicity, (4) gender, and (5) whether employed by a Defense agency or a non-Defense agency. The objective in developing the statistical model was to minimize the squared differences between the actual rates of separation and the predicted rates of separation. The squared correlation coefficient, which is bounded between zero and one, is a useful numerical measure of the strength, or predictive power, of such a model; a perfect fit would yield a squared correlation coefficient of 1.00. In our model, we achieved squared correlation coefficients of 0.73 for the SES and 0.75 for both the GS-14s and GS-15s. Thus, we were able to capture and predict about three-fourths of the variability in the rates of separation for the 5 years of separation data. One limitation, therefore, is that our model does not predict the actual rates of separation with 100 percent accuracy, although it is uncommon in real-world applications to find as high a squared correlation as our model achieved when dealing with behavioral data such as separation from federal government employment. The accuracy of our projections is also limited by the assumptions we made. Because the model is based on past trends, the estimates of future separations and appointments assume that factors affecting past trends will continue into the future. If one or more assumptions are incorrect, then the projections would change.
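To make the replacement projection concrete, the sketch below works through the Commerce illustration described above. Only the 6 percent figure comes from the report's example; the other group shares are invented placeholders so the proportions sum to one.

    # Allocate an agency's estimated separations to EEO groups in proportion
    # to each group's share of the agency's past accessions.
    past_accession_share = {
        "African-American men": 0.06,   # share from the report's example
        "white women":          0.25,   # invented placeholder share
        "all other groups":     0.69,   # invented placeholder share
    }
    estimated_separations = 100         # predicted separations at one grade level

    projected_accessions = {
        group: estimated_separations * share
        for group, share in past_accession_share.items()
    }
    print(projected_accessions)   # African-American men: 6.0, as in the example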
The model does not account for possible changes in the future, such as substantial pay increases, major changes in personnel policies, substantial changes in the number of employees, or large government reorganizations, such as the creation of the Department of Homeland Security. For example, our past work has shown that a major increase in SES pay reduces the rate of retirements (and thus separations) in the first 3 years after the pay hike, followed by an increase in subsequent years. Many nonwork factors can influence an employee's decision to separate from the government. Factors such as an individual's health or children's ages and college status may affect the decision of when to separate. Data on these factors were not available to be included in the statistical model. Other nonwork factors, such as future stock market performance, were also not included. Because we did not include transfers from one agency to another as separations, the estimated separations for individual agencies are probably not as precise as separations governmentwide. Similarly, because we did not include promotions from GS-14 to GS-15 or appointments to the SES from GS-14 or GS-15, separations for GS-14s and GS-15s are probably underestimated. We asked OPM and EEOC to comment on our estimates and their implications for diversity in the SES. We also asked OPM and EEOC to provide information on any efforts they had planned or under way to address diversity, considering the magnitude of estimated SES losses. In addition, we visited four federal agencies—two with relatively high proportions of women and/or racial and ethnic minorities in the SES and two with relatively low proportions—and sought the same information from them as we had from OPM. Our work was performed from October 2001 through September 2002 in accordance with generally accepted government auditing standards. We presented to each of the four agencies we examined—the departments of Energy, the Interior, and Veterans Affairs (VA) and the Social Security Administration (SSA)—the results of our simulation of Senior Executive Service (SES) losses and projections of the SES racial, ethnic, and gender profiles at each agency if current appointment trends continue. We also presented our governmentwide simulation results and projections to the Office of Personnel Management (OPM) and the Equal Employment Opportunity Commission (EEOC). SSA and the three departments said our analysis of their career SES, GS-15, and GS-14 losses and future diversity was reasonable. All of the agencies agreed that improvements needed to be made in their current SES diversity in at least some minority and gender groups. All four also acknowledged that the composition of the career SES that we projected if present selection trends continued would not be acceptable and discussed efforts they had made and were planning to take to promote diversity in general and in the career SES in particular. These efforts have significant elements in common. Because almost all career SES selections come from within the agency, with relatively few selected from other agencies, agency officials generally agreed that the most effective way to enhance SES diversity is to increase the diversity of the GS-15 and GS-14 feeder pool. OPM officials also said that if our projections were the actual result of SES selections through fiscal year 2007, that result would be unacceptable from a diversity standpoint.
The reactions of these six agencies to the data and what they are doing and plan to do to address diversity in the career SES are summarized below. SSA's goal is that its workforce at all levels, including the SES, should be at least as diverse as the national civilian labor force. To achieve this end, SSA officials said that the agency would need to improve its representation of Asian Americans and Native Americans; our analysis indicates that an increase in Hispanic representation would also be needed. SSA recently reinstituted an SES candidate development program and also has separate leadership development programs for GS-13 and GS-14 and for GS-9 through GS-12. Thus, most employees above entry level can apply for management and leadership development programs. SSA officials said that it was agency policy to include women and minorities in the screening of applicants and final selections for these programs. SSA has made an effort to have line management "buy in" to diversity by making the case that diversity is not only a good thing in itself but also enhances the agency's ability to perform its mission. Executives in SSA are held accountable for diversity as an element in their performance contracts. SSA's Office of Human Resources provides the Commissioner with a monthly summary of the ethnic and gender composition of each component of the agency by grade level, and a similar summary of the hiring done during the previous month. The Commissioner reviews these summaries with the deputy commissioners each month, which enables the Commissioner to demonstrate to senior managers the agency's commitment to diversity and hold them accountable with their peers for the diversity in their units. SSA does regular entry-level recruiting at historically black colleges and Hispanic institutions and also has co-op agreements with Native American tribal colleges. SSA uses the Outstanding Scholar Program to recruit minorities and women and makes use of authority granted by OPM to use bilingual registers for hiring. SSA has internal advisory councils for women, minority groups, and persons with disabilities. These councils, which are chartered by the SSA Commissioner and composed of volunteers, exist at national headquarters and at the SSA regional offices. They provide input and advice to SSA national and regional management on diversity issues and also join in SSA recruiting efforts where appropriate. Interior uses the relevant civilian labor force as a basis for looking at diversity at all levels but had not yet prepared a comparison labor force for its career SES; Interior planned to do so in the near future. A department official acknowledged that the current diversity of the career SES in Interior is unsatisfactory and that future diversity cannot be enhanced without additional efforts. The official noted that diversity at the entry level and midlevel has improved and that, with the assistance of the candidate development program that it has had for some time, this improvement will eventually be reflected at the SES level. Interior officials noted that recruitment, workforce information, and workforce planning have until very recently been left up to the components within Interior instead of being handled at the departmental level. Officials of the U.S. Geological Survey, Office of Surface Mining, and Bureau of Land Management told us that they have had ongoing efforts to recruit minorities at educational institutions and career fairs.
The Geological Survey has developed a job applicant database that allows for tracking of the job selection process through all of its stages; thus, the diversity of the applicant pool through successive screenings can be noted. Other parts of Interior do not have this type of process, although at least one other component uses the Geological Survey's process. Interior management has recently acted to shift responsibility for recruitment and workforce planning (and thus for diversity) from components to the departmental level. The Deputy Assistant Secretary for Human Resources is developing a national recruitment initiative, under which Interior components will collaborate in recruiting and diversity will be included as one factor in recruiting. The Deputy Assistant Secretary is also preparing a human capital plan for submission to OPM, which will associate workforce planning with the departmental mission and will target diversity at the midlevel and senior level. A senior official at Energy said that more progress is needed in making and keeping the department's career SES diverse. The official noted that there has been a recent significant increase in diversity at the GS-14 and GS-15 levels, which could be reflected in future SES selections. Energy uses the civilian labor force as a basis for judging diversity and also compares itself with other federal agencies. Energy hosted a Human Capital Summit in 2001, which resulted in a renewed commitment to executive succession planning. After this summit, Energy restarted an SES candidate development program, which had been inactive since 1994, with its Human Resource Management office coordinating with its Office of Civil Rights to develop a diversity recruitment strategy for the program. Energy had an intern program to bring in recruits for technical positions at the GS-7/9 level and, according to a senior official, 50 to 55 percent of the interns in that program have been minorities, and most have stayed long enough to reach the GS-12/13 level. A new career intern program recently began, and it is too early to report its results. Pursuant to Executive Order 13171, dealing with Hispanic federal employment, Energy has set up an internship program aimed specifically at Hispanics. Energy is also establishing a formal mentoring program, under which GS-13 through GS-15 staff members can benefit from guidance from SES executives. Senior Energy officials noted that Energy conducts regular, periodic diversity analyses of the workforce and may extend this analysis to job applicants in the future. Building diversity is one of the key leadership attributes in the annual performance review for Energy executives. Energy officials noted that it is especially difficult to recruit qualified minorities for scientific and technical positions, especially considering the competition for such candidates from the private sector and other agencies, such as the National Aeronautics and Space Administration. Energy uses authorities such as recruitment and retention bonuses and relocation allowances to help in minority recruitment. Energy could also use dual compensation waivers for this purpose but does not often use this option at present because, in the opinion of a senior Energy official, it is complicated to implement. A senior official said that Energy is considering applying for general waivers for certain occupational categories. In addition, Energy said that it uses executive search firms to increase the cadre of minority candidates.
VA also said that our analysis was reasonable and that more diversity was needed and greater efforts would be required if diversity is to increase. However, VA does not have an agreed-upon standard, such as the civilian labor force used by some agencies, by which to evaluate diversity in the career SES. Senior VA officials told us that, like Interior, VA has until recently left issues such as recruitment and leadership development to agency components. Officials of the Veterans Benefits Administration (VBA) and Veterans Health Administration (VHA) said that they were concerned about diversity in their management ranks and had stepped up efforts to recruit minorities at lower levels and to prepare them for leadership. An official of the National Cemetery Administration said that it had made similar efforts but had run into difficulties with finding minority candidates with civil service status and getting them on final selection lists. VA said that it had recently submitted to the Office of Management and Budget a restructuring plan under which, among other things, it will conduct an evaluation of its leadership development programs and develop a national recruitment and marketing plan. According to agency officials, both VHA and VBA have instituted leadership development programs, with VBA having had an SES candidate development program for some time. VA announced an SES candidate development program in October 2001, and training for the initial group of participants began in November 2002. VA also recently began to develop a plan for workforce management and succession planning. In addition, VA's Office of Diversity Management and Equal Employment Opportunity, at the request of the VA Assistant Secretary for Human Resources and Management, is providing diversity information on VA's current composition and monthly hiring. While diversity has been an element in performance evaluation for VA executives, this information will for the first time allow VA to fairly evaluate executives on diversity performance, according to a VA official. To gain a broader perspective on SES diversity issues, we met with OPM officials to get their reaction to our work. The officials agreed that our methodology was reasonable. They said that women and minorities continued to be underrepresented in the federal executive corps. OPM's strategy for increasing executive diversity is to encourage agencies to enhance diversity at the entry level and midlevel, identify individuals with leadership ability early in their careers, and provide experience and learning opportunities to prepare them for senior-level positions. OPM cited the following actions as among the major steps it has taken to address diversity in the SES:

- Creating an Interagency Task Force on Hispanic Employment to focus on the continued underrepresentation of Hispanics in the federal workforce. The Director of OPM chairs the task force.

- Fostering the establishment and growth of agency candidate development programs, which train selected GS-14 and GS-15 employees in the skills necessary for success in the career SES.

- Issuing the first annual Report to the President on Hispanic Employment in the Federal Government, concerning the state of Hispanic employment.

- Meeting with leaders from 21 different Hispanic organizations to discuss barriers to Hispanic recruitment and retention as well as to enlist their support in recruitment. The organizations have goals and missions related to five different sectors: Hispanic education, federal employment, national advocacy, the private sector, and professional organizations. The Director has held two meetings with the organizations and issued guidance to federal agencies about the benefits of utilizing the organizations' expertise.

- Co-chairing the Asian American and Pacific Islanders (AAPI) Joint Task Force with EEOC. The task force issued its Report on AAPI Federal Employment and Glass Ceiling Issues.

- Hosting, in partnership with the Department of Labor, the first Asian Pacific American Federal Career Advancement Summit in May 2002.

- Conducting several workshops on diversity issues and on preparing for the SES.

- Meeting with the founders of the new employee group, the African American Federal Executive Association, which formed in 2002. One result of this meeting was an OPM initiative called the Executive Diversity Roundtable, which will be a venue for discussions focused solely on increasing and leveraging diversity in the executive ranks.

- Replacing "Cultural Awareness" with "Leveraging Diversity" as an SES leadership competency. A panel of public and private sector experts worked with OPM to revise the title and definition of the SES leadership competency that deals with diversity. The revised competency embodies the values of building, managing, and maintaining a diverse workforce; is results oriented; and stresses accountability.

- Compiling best practices that agencies can use to develop strategies to improve the representation of minorities and women in the federal workforce and including them in the fiscal year 2001 annual Federal Equal Opportunity Recruitment Program (FEORP) Report to Congress.

- Issuing the first Semi-Annual Statistical Report to the President on Hispanic Employment in Federal Agencies. The report compared Hispanic hiring in fiscal year 1995 with that of fiscal year 2001; it contained information on hiring activity both governmentwide and by individual agency and provided information about federal agency utilization of available hiring tools.

- Issuing a guide to federal agencies, Building and Maintaining a High Quality, Diverse Workforce.

- Conducting sessions on diversity issues as part of 1-week seminars offered to federal managers by OPM's Management Development Centers.

- Conducting, during fiscal year 2001, two workshops for agencies about federal equal employment opportunity regulations and federal agency roles and reporting requirements.

- Launching a new disability Web site in June 2002, a one-stop source of information for managers, applicants, and human resources professionals that is designed to be both comprehensive and user-friendly.

- Issuing a model plan on the employment of adults with disabilities.

- Regularly presenting workshops on diversity issues at numerous nationwide conferences of organizations with compatible goals, including organizations representing the interests of African Americans, Hispanics, women, Asians and Pacific Islanders, and persons with disabilities.

- Regularly providing consultation services and technical assistance to individual agencies regarding their questions, plans, and activities on diversity issues.

In addition, upon request, OPM provided workshops to organizations that expressed interest in developing future leaders. These workshops focused on the application process and on understanding the leadership competencies necessary for SES membership.
These services were provided to about 10 federal agencies during fiscal year 2001. For EEOC's reaction, see the "Agency Comments" section of the letter and appendix V, where its comments are reprinted. In addition to the individual named above, the following individuals made significant contributions to this report: Walter E. Reed, Jr., Steven J. Berke, Mitchell B. Karpman, and Gregory H. Wilmoth.
The federal government faces large losses in its Senior Executive Service (SES), primarily through retirement but also because of other normal attrition. This presents the government with substantial challenges in assuring an able management cadre and also provides opportunities to affect the composition of the SES. GAO estimated the number of SES members who would actually leave service through fiscal year 2007 and reviewed the implications of the estimated losses for diversity, as defined by race, ethnicity, and gender. Specifically, GAO estimated by race, ethnicity, and gender the number of members of the career SES who will leave government service from October 1, 2000, through September 30, 2007, and what the profile of the SES will be if appointment trends do not change. GAO made the same estimates for the pool of GS-15s and GS-14s, from whose ranks the vast majority of replacements for departing SES members come, to ascertain the likely composition of that pool. More than half of the 6,100 career SES members employed on October 1, 2000, will have left service by October 1, 2007. If current SES appointment trends continue, the only significant changes in diversity will be an increase in the number of white women and an essentially equal decrease in white men. Estimated losses of about 46 percent of GS-15s and 34 percent of GS-14s will create turnover in the replacement pool that gives agencies the opportunity to select minority members for the SES. Estimates for 24 large agencies showed substantial variation both in the proportion of SES members leaving and in the effect on agencies' racial, ethnic, and gender profiles: 10 agencies showed decreases in minority representation and 12 showed increases. The 6 agencies GAO visited recognize that the SES needs to be more diverse than GAO's projections estimate and have efforts under way to address SES diversity. They also recognize that more will have to be done than in the past if diversity is to be enhanced.
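The fixed-trend projection described here (subtracting estimated departures from the current profile and refilling vacancies according to recent appointment shares) can be sketched in a few lines of code. This is a minimal illustration only: the 6,100 total comes from the report, but the group breakdown, departure rates, and appointment shares below are hypothetical placeholders, not GAO's data.

```python
# Illustrative sketch of a fixed-trend SES projection: remove estimated
# departures, then refill the vacancies using recent appointment shares.
# Group breakdown, departure rates, and shares are hypothetical.

current = {"white men": 3500, "white women": 1350,
           "minority men": 800, "minority women": 450}   # sums to 6,100
departure_rate = {"white men": 0.60, "white women": 0.50,
                  "minority men": 0.55, "minority women": 0.50}
appointment_share = {"white men": 0.53, "white women": 0.28,
                     "minority men": 0.12, "minority women": 0.07}

departures = {g: round(n * departure_rate[g]) for g, n in current.items()}
vacancies = sum(departures.values())  # overall SES size held constant

projected = {g: current[g] - departures[g] + round(vacancies * appointment_share[g])
             for g in current}

total = sum(projected.values())
for g, n in projected.items():
    print(f"{g}: {current[g]} -> {n} ({n / total:.1%})")
```

With placeholder inputs like these, the mechanics reproduce the report's qualitative pattern: white women up, white men down by a similar amount, and minority representation roughly flat, which is what "if appointment trends do not change" implies.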
Public education evolved from primarily single-gender (boys') education to primarily coeducation before the turn of the 20th century. In colonial America, formal public education was primarily available to boys; girls were typically educated informally and in the home. Gradually, girls began to be integrated into the public elementary or "common" schools and, by the middle of the 19th century, almost as many girls as boys were attending these schools. Most of the common schools were small and located in rural areas where the economy of educating boys and girls together may have played a part in the coeducational model. Coeducational schools also thrived, however, in urban areas where population density made separate schools a more practical alternative. During the 1800s, the desirability of coeducation in secondary schools was debated, and opponents cited the need to protect girls both from danger to their health and from boys. In addition, considerable discussion centered on the appropriate curriculum, including differences in abilities and learning styles of boys and girls and whether they should learn the same subjects in school. By 1890, coeducation was clearly the most common model for public schools; in a survey of 628 U.S. school superintendents, only 41 reported having single-gender schools. Reviewing the findings of this survey, the U.S. Office of Education and the National Education Association's Committee on the Education of Girls concluded at that time that the debate over the preferability of coeducation had been settled. Nevertheless, some single-gender schools existed. In 1972, nondiscrimination legislation was passed to protect students from discrimination in education on the basis of gender. Title IX of the Education Amendments of 1972 prohibits school districts from discriminating against students on the basis of sex and sets legal limits on single-gender public education. In addition, several court cases in recent years have challenged single-gender public education under the Fourteenth Amendment of the U.S. Constitution. In the last 2 years, at least three bills with single-gender education components were introduced in the Senate. In 1994, the Senate passed the Danforth Amendment to the Improving America's Schools Act of 1994. The amendment would have allowed a limited number of single-gender classrooms as demonstration projects; however, the demonstration projects were eliminated from the bill in conference. On May 15, 1995, Senator Kay Bailey Hutchison introduced S. 829, a bill to provide limited waivers from Title IX and other statutes to permit single-gender classes to enable researchers to collect data on the effectiveness of such classes for low-income, educationally disadvantaged children. It was referred to the Committee on Labor and Human Resources. On September 6, 1995, Senator Dan Coats introduced S. 1205, the Mentor Schools Act. The purposes of the proposed bill are to (1) award grants to local education agencies for establishing same-gender schools for low-income students; (2) determine whether same-gender schools make a difference in the educational achievement and opportunities of low-income, educationally disadvantaged individuals; (3) improve academic achievement and persistence in school; and (4) involve parents in the educational options and choices of their children. The bill authorizes an appropriation of $300 million for fiscal year 1996 and additional sums as necessary for 1997 to 2000 to carry out the act.
As of May 1996, this bill was in committee. Educators and other experts with whom we spoke view single-gender programs as a way to address (1) high dropout rates, low academic achievement, and other problems faced by many urban males—particularly minorities—and (2) girls' low academic performance in advanced mathematics and science; general lack of confidence, competence, and leadership skills; and narrow views of potential careers. The distraction that boys and girls may cause each other when in the same classrooms further contributes to problems for coeducational settings. The concept of classrooms and schools that provide students with male role models and cultural and social awareness enjoys popularity among many educators who see such settings as opportunities to combat high dropout rates, low academic achievement, and other problems faced by many urban males—particularly minorities. These programs typically provide mentoring, tutoring, field trips, and other personal and academic enrichment activities. They emphasize self-esteem building and responsibility to the community. Recent research on the academic achievement of young girls suggests that they defer to boys in coeducational classrooms, are called on less than boys, and are less likely than boys to study advanced mathematics and science. Some educators believe that single-gender settings can improve girls' academic performance and attitude toward these subjects. Such settings typically emphasize enhancing confidence, competence, and leadership skills as well as expanding girls' views of potential careers. Finally, some educators report that single-gender settings reduce the distraction that boys and girls create for each other. They believe all-boy or all-girl classes provide calmer classrooms with lower risk for educational failure. The middle school years are the most distracting for students, according to some educators. Many educators are convinced of the value of single-gender settings for urban minority males. Several program officials we spoke with reported improved test scores, better attendance, or improved behavior among students in single-gender settings. Although public school single-gender programs have not been rigorously researched, some studies of minority students in private single-gender schools suggest academic gains for both boys and girls. The most commonly cited studies are those by Cornelius Riordan of Providence College, who showed that African American and Hispanic students of both sexes do better in single-gender schools on all tests and score nearly a year above their counterparts in coeducational schools. In a more recent study of single-gender schools in four countries (Belgium, New Zealand, Thailand, and Japan), however, Riordan concluded that single-gender schools do not have uniform and consistent effects and their effects are conditional. That is, single-gender schools are most effective when they are atypical: "The more that these schools remain rare and special, the more effective they will be for those minority of students who select them." Moreover, he points out that the most important factor contributing to the observed gains may be the parents' and students' making what he calls a "proacademic choice," not the single-gender setting. Officials we spoke with from all-girl programs were enthusiastic about the girls' performance.
As evidence of success, they cited increased competence and confidence, development of leadership qualities, and better focus on academics than among girls in coeducational classes. Some recent studies have focused on gender bias against girls, comparing the problems girls face in coeducational settings with the experiences of girls in single-gender settings. For example, a 1992 report by the Wellesley College Center for Research on Women for the American Association of University Women analyzed more than 1,200 research studies on girls and boys in public schools. It found, among other things, that girls receive significantly less teacher attention than boys, that the gender gap in science has not declined and may be increasing, and that many standardized tests contain elements of sex bias. In addition, the work of Myra and David Sadker explores and documents the gender bias girls face in coeducational classrooms and its adverse effects on their academic and career aspirations and self-esteem. Also in 1992, the Department of Education's Office of Educational Research and Improvement convened a group of researchers and practitioners to share their views and findings about single-gender education. The conferees reviewed and discussed various research studies and agreed that some studies support the assertion that single-gender schools may provide benefits. They also noted, however, that not all single-gender schools are equal in providing a productive learning environment and that many factors contributing to the success of effective single-gender schools are fundamental to effective schools regardless of their gender policy: a small student body, a strong emphasis on academics, and commitment to the school's mission and values. Although single-gender settings may help avoid gender bias and the distractions of coeducational classrooms, some experts question whether they are the best remedy. They acknowledge the urgent problems single-gender programs are meant to solve; however, they also express concerns about the risk of a separate and unequal allocation of education resources and the reinforcement of stereotypes that certain groups are low achievers and need extra help. Some experts caution that a program focusing on providing special services to urban minority males may not acknowledge that urban minority females share some of the same social and academic problems. Some experts who are not proponents of single-gender education as a strategy noted that research has not conclusively identified single-gender education as the desired solution to gender bias in coeducational settings. Some believe that successful strategies used in single-gender settings—smaller classes and more individual attention—can be just as effective in coeducational settings. They believe teacher training in diversity and equity can also contribute to a bias-free coeducational classroom. Finally, some experts caution that separating the sexes should not be viewed as a simple solution to complex problems and that program goals, content, and desired outcomes must be carefully scrutinized. Whatever the effectiveness and desirability of single-gender programs, single-gender public elementary and secondary education is limited by law. Restricting enrollment in a public school program to either gender may discriminate on the basis of gender and, thus, be contrary to Title IX of the Education Amendments of 1972. It may also violate the equal protection clauses of the U.S. Constitution and state constitutions.
Title IX prohibits discrimination on the basis of gender in educational programs receiving federal financial assistance. Although Title IX does not govern admissions practices at the elementary and secondary school level except for vocational schools, it does require that school districts provide comparable facilities, courses, and services to boys and girls. Thus, Title IX does not preclude a school district from having single-gender schools. Title IX, as implemented by the Department of Education's regulation, however, generally prohibits single-gender classrooms in coeducational schools. The regulation has some exceptions; for example, single-gender classes are permitted for portions of physical education classes when students are playing contact sports or portions of classes on human sexuality. It may also be possible for a school to have single-gender classrooms as a remedy for past discrimination or as a form of affirmative action under certain specific conditions. (See app. I for a complete list of exceptions.) Officials at the Department of Education's Office for Civil Rights (OCR), which enforces Title IX, state they have had relatively few complaints or requests for guidance on either single-gender schools or single-gender classrooms in the last 10 years. In each instance in which a complaint about a single-gender program has been filed with OCR, the school district and OCR have resolved the matter. Single-gender public elementary and secondary schools may violate Title IX if the school districts do not provide comparable facilities, courses, and services to both boys and girls. OCR has investigated complaints against two allegedly single-gender public schools but concluded that neither of them violated Title IX. In 1992, OCR investigated complaints in Philadelphia and Baltimore alleging that the school districts maintained single-gender public high schools for girls only—Baltimore's Western High School and Philadelphia High School for Girls. OCR examined whether the school districts were excluding anyone on the basis of gender from the districts' schools. School officials of both schools stated that they did not deny admission to boys and the schools were open to both boys and girls. Philadelphia High School for Girls traces its inception to the Model School, which opened as a coeducational teacher's training school in 1818 and became the Girls' Normal School in 1848. A school district official reported that the school offers an academic enrichment curriculum and the usual extracurricular activities such as sports and music. Currently, about 1,500 girls attend the school, which includes grades 9 through 12. According to school officials, the school draws students from all over the city, and—reflecting school district demographics—about 44 percent of the students are from families below the poverty line. They told us that the school targets for admission students with high academic performance and good attendance and that about 98 percent of its graduates attend college. In 1992, the school was one of nine magnet high schools in the city. During OCR's investigation, district officials stated that all students are encouraged to apply to these magnet schools and are provided with booklets that describe the high school programs. OCR found that district officials had no policy of excluding males from this school, so the district had not violated Title IX. Baltimore's Western High School was founded in 1844 to provide girls an opportunity to receive an education beyond the elementary level.
School officials told us that the school became college preparatory in the 1960s; about 96 percent of the current graduates go to college. From the beginning it has offered a liberal arts curriculum, as it does today; it also provides typical after-school programs such as sports and clubs. Western is 1 of 10 citywide high schools in Baltimore and draws qualified students from the entire city. To be accepted for admission, students must have a B average, and, to remain at the school, they must maintain a C average. Total enrollment in grades 9 through 12 is about 1,250. Students at the school come from about 30 national and ethnic groups, and about 80 percent are African American. During a review of Western's policies and curriculum, OCR found that other citywide high schools also offered programs to both sexes similar to those offered at Western. District officials stated that the booklets the guidance staff distribute have no language indicating that Western is for girls only and that applications are evaluated on merit and ranked in order without regard to sex. OCR found that the district did not exclude male students from applying to or attending Western and was therefore in compliance with Title IX. Typical requests for guidance and complaints brought against school districts involving single-gender classrooms, which are generally prohibited under the Title IX regulation, include single-gender physical education classes, segregated technology classes, single-gender math classes for math-phobic girls, and single-gender mentoring clubs. Complaints were resolved in a variety of ways. Complaints against single-gender physical education classes are among the most common. OCR states that schools often segregate the sexes, unaware that in most cases this is not permissible under the Title IX regulation, although the regulation does permit separation by gender during portions of classes in which students are playing contact sports. These complaints are generally resolved by changing the physical education classes to coeducational classes. Merely adding coeducational classes while maintaining single-gender classes does not resolve the violation. Schools must discontinue segregating their physical education classes on the basis of gender to comply with the Title IX regulation. Another type of complaint OCR has received alleges single-gender mathematics classes for girls. For example, in Ventura, California, the school district piloted a program to see if math sections composed primarily of girls who were math phobic or otherwise reluctant math students could, with support from adults, increase the girls' enrollment in higher-level math courses. Some boys also fit this profile and were enrolled in the pilot classes. In response to a complaint filed with OCR, the district modified its procedures for counseling, registering, and recruiting students for the pilot math classes to reflect academic need rather than gender. The classes are therefore described as providing a supportive environment for students who are math phobic or doubtful about their ability to succeed in challenging mathematics courses; all students regardless of gender who fit these categories can be targeted and encouraged to enroll. The Connecticut Department of Education also sought OCR's guidance on a new introductory technology course to be offered in two formats, an all-girl class and a mixed-gender class.
After discussion with OCR, Connecticut revised the format so that it had a "regular class and a second class targeted for female students but accessible to all students regardless of sex." Both classes were to be open to all students, and OCR noted that the revised proposal did not appear to raise concerns of discrimination under Title IX. Concern for at-risk males has led some school districts to experiment with a separate educational program for minority males. One such school in Brooklyn, New York, operated a separate third grade class for at-risk minority students, which was alleged to separate students by race and gender. OCR's investigation did not support the allegation of race segregation, since the school was 100-percent minority; however, it did find that students were separated on the basis of gender. Regarding separation by gender, the New York public school system agreed that if it decided to have a special program for at-risk students, it would submit criteria to OCR for placing at-risk students in a gender-neutral manner, document the reason each student was chosen for the class, and keep a record of the gender of each student in the class. According to OCR, the program was a 2-year pilot program and was not renewed. Another program, which targeted young African American males with no male role models at home, was the object of a request for OCR guidance from Dade County Public Schools in Florida. Dade County wanted to evaluate the effect of having a gender- and race-segregated class with a male teacher for young African American males in kindergarten and first grade. OCR found that such division by race would violate Title VI of the Civil Rights Act of 1964 and such division by gender would violate Title IX. OCR determined that the proposal to assign students on the basis of gender, even though voluntary on the part of the boys who would participate, is not allowed under the Title IX regulation and does not fit into the rationale for the stated exceptions to the regulation. Mentoring is another area in which OCR has received a complaint. The complaint alleged that Prince George's County Public Schools in Maryland sponsored single-gender mentoring clubs for boys. Upon investigation, OCR found that the district operated a multimillion-dollar program of single-gender clubs for boys and a significantly smaller program, with only 31 clubs, for girls. At least one club for boys was operating at each of the county's 176 schools, and, at some of the schools, the district also funded community-based clubs for boys only. The need for mentoring activities through single-gender clubs was articulated by the district in a report on African American male achievement recommending that the district strengthen its efforts to provide students with mentors and experiences that forge ties between academics and the work world. OCR noted that single-gender clubs would comport with Title IX in meeting affirmative action standards only if (1) those who have experienced conditions resulting in a limited opportunity to participate in the district's programs due to their gender are the targeted beneficiaries, (2) less discriminatory alternatives have been considered and rejected, and (3) the evidence demonstrates that comparable gender-neutral means could not be reasonably expected to produce the results desired.
OCR found that despite the laudable goals of the district's program, it did not appear that the means to achieve those goals had been tailored to comply with the Title IX regulation. In response, the district opened all district-sponsored programs, clubs, and activities to all qualified students regardless of gender (excluding such usual Title IX exemptions as football and other contact sports). The district also agreed to ensure that female students are informed of and are welcomed into the district's formerly all-male mentoring programs and male students are informed of and are welcomed into the district's formerly all-female mentoring programs. Single-gender public education could also be challenged under the Fourteenth Amendment to the U.S. Constitution. The equal protection clause of the Fourteenth Amendment declares that a state may not deny anyone within its jurisdiction the equal protection of the laws. The U.S. Supreme Court has not yet ruled on the constitutionality of single-gender elementary or secondary schools. Several cases, however, such as Mississippi University for Women v. Hogan, Vorchheimer v. School District of Philadelphia, and Garrett v. Board of Education of School District of Detroit, may provide guidance for policy decisions being made on single-gender schools. The U.S. Supreme Court addressed the issue of a single-gender college in Hogan in 1982. A male student sought admission to a state-supported professional nursing program at Mississippi University for Women (MUW). He was denied admission solely on the basis of gender because MUW has been limited to women since it was created by Mississippi statute in 1884. Hogan claimed that the admissions policy violated the equal protection clause of the Fourteenth Amendment. The Supreme Court agreed with Hogan in a five-to-four decision. In its analysis, the Court defined the standard applied in this case: a state needs to show an "exceedingly persuasive justification" for classifying individuals on the basis of gender. That burden can be met only by showing that the classification serves "important governmental objectives" and that the discriminatory means employed are "substantially related" to achieving those objectives. Under Hogan, this test must be applied free of fixed notions about the roles and abilities of males and females. In applying this standard to the facts, the Court found unpersuasive the state's argument that its single-sex admissions policy compensated for discrimination against women. Mississippi had not shown that women lacked opportunities to obtain nursing training when the school opened its doors or that women were deprived of such opportunities when Hogan sought admission. The Court found that Mississippi's policy of excluding males from admission, rather than compensating for discriminatory barriers faced by women, tended to perpetuate the stereotyped view of nursing as an exclusively women's profession. The policy also failed because the state did not show that the gender-based classification was substantially and directly related to its proposed compensatory objective. The issue of single-gender public high schools came up in the Vorchheimer case in 1976, which was decided before Hogan and therefore did not use the same analytical framework. A female high school student was denied admission to an all-male academic high school in Philadelphia solely because of her sex.
The Philadelphia School District at that time operated two single-gender academic high schools, Central High School and Philadelphia High School for Girls. The court found that both schools had excellent academic reputations. Enrollment in either school was voluntary. The district also provided "comprehensive" coed high schools that included courses required for college admission and advanced placement courses. The U.S. Court of Appeals for the Third Circuit found that Girls and Central were academically and functionally equivalent and, consequently, the admission requirements based on gender classification did not offend the equal protection clause of the Fourteenth Amendment. The court reasoned that gender should not be treated the same as race under the equal protection clause because, although no fundamental difference exists between races, differences between boys and girls do exist that may, in limited circumstances, justify disparity in law. It also noted that the primary aim of any school system must be to furnish an education of as high a quality as feasible. "Thus, given the objective of a quality education and a controverted, but respected theory that adolescents may study more effectively in single-sex schools, the policy of the school board here does bear a substantial relationship" to providing high-quality education. Because Vorchheimer predates the Supreme Court's 1982 decision in Hogan, the Supreme Court referred to the Vorchheimer case in Hogan only to show how the issue it was deciding differed. The Supreme Court stated, "We are not faced with the question of whether States can provide 'separate but equal' undergraduate institutions for males and females," as was the case in Vorchheimer. The Supreme Court may answer the "separate but equal" question for colleges in a pending case, United States v. Virginia. Virginia was found by the U.S. Court of Appeals for the Fourth Circuit to be violating the equal protection clause of the U.S. Constitution in operating Virginia Military Institute (VMI) as a male-only military college and not providing a similar single-gender educational environment for women. The court of appeals gave Virginia the option of admitting women to VMI, discontinuing its support of VMI, or establishing a parallel program for women. Virginia established a parallel program—the Virginia Women's Institute for Leadership at Mary Baldwin College. In reviewing this remedy, the court of appeals found that the distinctions between the VMI program and the Mary Baldwin program are justifiable because of gender differences but that the programs were otherwise comparable in substance. The following issues are on appeal before the U.S. Supreme Court: (1) whether a state that provides a rigorous military-style public education program for men can remedy the unconstitutional denial of the same opportunity to women by offering them a different type of single-gender educational program and (2) whether coeducation is the required remedy in this case. Finally, a district court decision may also help guide school districts. In Garrett, the Detroit School District sought to establish three male academies in 1991 to serve approximately 250 boys from preschool through fifth grade, with grades six to eight phased in over the following few years. The academies were to offer special programs, including an Afrocentric curriculum, mentors, Saturday classes, individualized counseling, and uniforms.
The plaintiffs contended that these special programs did not require a uniquely male atmosphere to succeed and that they addressed issues females face, too. Moreover, the academies did not target only at-risk boys but boys from all achievement levels. The case came to the court on a motion for a preliminary injunction. In such cases, the courts do not render a final decision, but they will grant an injunction forbidding a party from engaging in a certain activity if they find, among other things, that the plaintiffs would be likely to succeed at trial and would suffer irreparable injury if the injunction were not granted. The court applied the standard used in Hogan; it found that both the U.S. Constitution and the Michigan Constitution prohibit the exclusion of an individual from a publicly funded school because of his or her gender unless the school district can show that the gender-based classification serves important governmental objectives and that the discriminatory means employed are substantially related to achieving those objectives. The court noted that no evidence existed that the education system was failing urban males because females attend schools with males. The preliminary injunction was granted, and the case never came to trial. The parties agreed to expand the academies to include girls and to have comparable male-focused and female-focused classes and activities. In addition to the equal protection clause of the U.S. Constitution, some state constitutions have similar equal protection provisions or equal rights amendments that have been interpreted by their courts as more rigorous or restrictive than the federal equal protection clause. Thus, even if a particular example of a single-gender education program is acceptable under federal law, it may still be challenged under state law. Most single-gender education programs we identified were classroom rather than schoolwide programs. Several of the programs we examined, including those described below, have not been reviewed by OCR, and these programs may not be in compliance with Title IX. The five single-gender programs discussed in this section were operating at the time of our study. Following are descriptions of these programs, based primarily on information we obtained from interviews with program officials. In September 1995, a large urban middle school (grades seven and eight) in a northeastern city established an all-boy academy within the school. The academy is one of three magnet programs at the school. The school's enrollment is about 1,000 students, of whom about 99 percent are minority. The academy, a 2-year program for seventh and eighth graders, is voluntary, with an enrollment of 57 seventh graders. The school plans to recruit a new class of seventh graders to begin in September 1996. The academy has four teachers, and the boys travel among these teachers' classrooms. The objective of the program is to help the boys become responsible, successful people and to build self-esteem through academic success. The standard middle school curriculum is taught with an emphasis on individual growth, academic success, social responsibility, and good citizenship. Special curriculum components include a mentoring program in which boys are counseled on subjects such as careers, gangs, family issues, and academics. In addition, the curriculum emphasizes culture, history, society, and technology. The school is planning to initiate an all-girl program in September 1996 or 1997.
In autumn 1995, a teacher in a suburban elementary school established an after-school math and science program for fifth and sixth grade girls. The program is intended to encourage girls to study these subjects and to build self-confidence in their abilities. It is one of several after-school programs offered by the school, although the others—such as basketball, chess, and computers—are for both boys and girls. The girls meet every Thursday afternoon for an hour to learn about science-related matters, such as optical illusions and the metric system, and to participate in activities that enhance their enjoyment of math and science, such as building tetrahedrons and playing math strategy games. The program founder told us the program has been filled to capacity, and she plans to continue it next year if the school district funds the late-running school bus that allows the children to attend an after-school program. In 1989, the principal of an urban coeducational elementary school decided to try single-gender classes in grades one and three. She subsequently expanded the program to include grades one through five. The program goal is to improve academic achievement for all the children and to identify best practices to encourage the boys to find alternatives to violence and to be supportive of each other. All students study the standard curriculum and this year have received special instruction in character issues such as honesty, trust, and conflict resolution. After-school activities include mentoring by high school and college students as well as by local business and professional people. In addition, representatives from the U.S. Armed Forces visit the school weekly to tutor and discuss careers. The school operates year-round and offers summer courses aimed at building self-esteem and promoting career awareness in such areas as hotel management, nursing, and real estate. The principal of an urban middle school launched a single-gender program 3 years ago to address both academic and social issues affecting her students—especially African American boys with serious learning problems. In each of the three grades (six through eight), the school has all-girl classes, all-boy classes, and coed classes. Parents may choose whichever setting they prefer. About 650 students attend the school, and about 99 percent receive free or reduced-price lunch. All students in the school study an Afrocentric curriculum that was in place before the single-gender classes. The school has extracurricular mentoring programs for boys and girls and about 30 other after-school activities, including karate, chess, and tutoring. The boys in the school serve as mentors for the second grade boys at a nearby elementary school. The single-gender program will be discontinued after this school year to comply with the state administrative code. In 1992, the principal of this urban junior high inaugurated single-gender homerooms for some students to provide a place where they could talk more openly about issues important to them and where teachers could provide crisis intervention when necessary. She believed it worked so well that the next year she made all classes in the school (grades seven through nine) single gender. Her primary goal was to promote the students' academic success and also to minimize the distractions of rowdiness or inappropriate behavior among the students.
She believes that because the students face danger in their inner-city neighborhood, the school must be a safe haven, and she likes to consider it a second family for the children. All students are taught the standard junior high curriculum, and social skills and responsibility are emphasized. The school has a Saturday program in which students from a nearby college tutor both boys and girls. Officials we talked to in schools that have experimented recently with single-gender education said that such programs have resulted in observable qualitative differences in the behavior of children in single-gender environments; however, conclusive quantitative research on the effectiveness of such public school programs is not available. Opponents maintain that targeted problems can be effectively addressed in coeducational settings without subjecting students to discrimination on the basis of gender and that the effectiveness of single-gender programs is questionable. Proponents believe, nevertheless, that single-gender programs ought to be available as tools for improving the academic and social performance of school children. Some single-gender programs, however, are subject to legal impediments. In commenting on a draft of this report, the Assistant Secretary for Civil Rights in the Department of Education made several suggestions on the report's purpose, research on single-gender education, and issues involving legal standards. As the Assistant Secretary correctly observed, our study was not intended to be an exhaustive research effort but was intended to identify the major issues in single-gender education and cite some examples. We did, however, add some additional references that may be useful to researchers. Regarding legal standards, the Assistant Secretary asked that we further clarify and explain the applicable legal principles. We have done so in the final report. The Assistant Secretary also provided technical comments on specific statements and facts included in our draft report, and, where appropriate, we used the information to clarify our report. If you have any questions about this report, please contact me at (202) 512-7014 or Eleanor L. Johnson, Assistant Director, at (202) 512-7209. This report was prepared by Susan Lawless, Evaluator-in-Charge, and Susan Poling, Assistant General Counsel. Title IX of the Education Amendments of 1972 generally states that no person shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any education program or activity receiving federal financial assistance (20 U.S.C. 1681 (1990)). The implementing regulation is found in part 106 of title 34 of the Code of Federal Regulations. The Title IX regulation permits nonvocational, single-gender elementary and secondary schools, as long as comparable facilities, courses, and services are made available to students of both genders (34 C.F.R. 106.35(b)). The Title IX regulation generally prohibits single-sex classrooms in coeducational schools. It states that a "recipient shall not provide any course or otherwise carry out any of its education program or activity separately on the basis of sex...." (34 C.F.R. 106.34). Following are some exceptions to this regulation: contact sports offered in physical education classes (34 C.F.R. 106.34(c)); chorus, when based on vocal requirements or quality (34 C.F.R. 106.34(f)); portions of classes dealing with human sexuality (34 C.F.R. 106.34(e)).
Separate classes may also be provided for pregnant students, but must be voluntary (34 C.F.R. 106.40(b)(3)). If the Assistant Secretary for Civil Rights finds discrimination on the basis of sex, a recipient may be required by the Assistant Secretary to take remedial action necessary to overcome the effects of the discrimination (34 C.F.R. 106.3(a)). In the absence of a finding of discrimination by the Assistant Secretary for Civil Rights, a recipient may take affirmative action to overcome the effects of conditions that have limited participation by gender (34 C.F.R. 106.3(b)). Regarding affirmative action, in particular, the classifications that result in single-gender classes must be directly related to the reasons for the institution of the single-gender classes. This means that (1) the beneficiaries of the single-gender classes or programs must have had limited opportunities to participate in a school's programs or activities due to their sex, (2) less restrictive or segregative alternatives that may have accomplished the goals of the single-gender classes or programs must have been considered and rejected, and (3) there must be evidence that comparable sex-neutral means could not be reasonably expected to produce the results sought through the single-gender classrooms or programs.
Pursuant to a congressional request, GAO reviewed the major educational and legal issues involved with public single-gender education. GAO found that: (1) single-gender educational programs are thought to reduce dropout rates and improve overall academic performance among urban males and academic achievement in mathematics and science among females; (2) single-gender settings are believed to reduce the distraction boys and girls create for each other, particularly during the middle school years; (3) some studies of minority students in private single-gender schools have suggested that both boys and girls improve academically in such settings; (4) the effectiveness of single-gender programs may be due more to students' and parents' motivation and commitment and to small student populations than to the single-gender settings themselves; (5) some experts fear that single-gender educational programs will lead to unequal resource allocations and reinforcement of stereotypes; (6) some believe that training teachers in diversity and equity, creating smaller classes, and providing more individual attention would be just as effective in coeducational settings; (7) some public schools have terminated or modified their single-gender programs because of federal and state limitations on single-gender educational programs; and (8) the Department of Education has received relatively few complaints regarding single-gender educational settings.
In October 2001, an American Media Incorporated employee died from inhalation anthrax. In the same month, letters laced with spores of Bacillus anthracis, the bacterium that causes anthrax, were sent through the mail to Senators Thomas Daschle and Patrick Leahy. The response to the incident in the American Media Incorporated building in Florida in September 2001 led to the identification of mail as the potential source of contamination; eventually, it led to the sampling of the postal facilities. The agencies began sampling on October 12, 2001, in Florida and stopped on April 21, 2002, when the Wallingford, Connecticut, facility was sampled for the last time. The letters led to the first cases of anthrax disease related to bioterrorism in the United States. In all, 22 individuals contracted anthrax in Connecticut, Florida, New Jersey, and New York, as well as in Washington, D.C., and 5 died. The federal agencies involved in the response in the postal facilities have different responsibilities. CDC and state and local health departments primarily provided public health advice and assistance to USPS. CDC has primary responsibility for national surveillance of specific diseases, including anthrax; it also conducts epidemiologic investigations to determine, among other things, the source of the disease, and it participates in environmental sample collection and analysis activities. The FBI is responsible for criminal investigations involving interstate commerce and the mail and crimes committed on federal property. EPA is the nation's lead agency for responding to a release of hazardous substances into the environment and subsequent decontamination. On October 8, 2001, the President created the Office of Homeland Security to develop and coordinate a comprehensive national strategy for dealing with domestic terrorist threats or attacks. The office, which had limited involvement in the 2001 response, was superseded when the Homeland Security Act of 2002 transferred many of its functions to DHS, which became operational in 2003. DHS was created by combining many previously separate agencies and is assigned a lead role in coordinating the efforts of federal agencies that respond to acts of terrorism in the United States. The federal agencies primarily used a targeted strategy—they collected samples from specific areas considered more likely to be contaminated, based on the agencies' technical judgments. Such judgments can be effective in some situations—for example, in determining whether a facility is contaminated when information on the source of potential contamination is definitive. However, in the case of a negative finding, when the source of potential contamination is not definitive, the basic question—Is this building contaminated?—will remain unanswered. CDC and USPS officials said that they used a targeted strategy for several reasons, including limitations on how many samples could be collected and analyzed. They also said that in 2001, they lacked the data from empirical research to develop an initial sampling strategy that incorporated probability sampling. We disagree with this interpretation: probability sampling is statistically based and does not depend solely on empirical criteria regarding the details of possible contamination. The situation in 2001 was unique, and the agencies were not fully prepared to deal with environmental contamination.
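One reason a probability-based plan does not have to wait for complete empirical data is that a usable sample size can be computed from a conservative assumption about collection efficiency. The minimal sketch below shows the standard calculation such a plan could start from; the 5 percent prevalence and 25 percent recovery-efficiency figures are illustrative assumptions, not measured values from the 2001 response.

```python
import math

def required_samples(prevalence: float, recovery_efficiency: float,
                     confidence: float = 0.95) -> int:
    """Number of randomly placed samples needed so that, if at least
    `prevalence` of the sampling locations are contaminated, at least one
    sample tests positive with probability `confidence`. A sample taken at
    a contaminated location is assumed to test positive with probability
    `recovery_efficiency` (an illustrative, conservative figure)."""
    p_positive = prevalence * recovery_efficiency  # chance one sample is positive
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_positive))

# Conservative assumptions: 5% of locations contaminated, swabs recover
# spores only 25% of the time -> about 239 samples for 95% confidence.
print(required_samples(prevalence=0.05, recovery_efficiency=0.25))
```

The point of such a calculation is not the particular numbers but that, under stated assumptions, an all-negative result from the computed number of samples supports a quantified statement about how much contamination can be ruled out.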
In the future, if the agencies decide to use a targeted rather than a probability sampling strategy, they must recognize that they could lose a number of days if the targeted sampling produces negative test results. In that case, additional samples would have to be collected and analyzed, resulting in the loss of critical time for public health interventions. This was so at the Wallingford postal facility in the fall of 2001, when about 3 weeks elapsed between the first sampling and the results of the fourth round of testing, which were positive. Furthermore, about 5 months elapsed between the first sampling event and the time anthrax was found in the Wallingford facility's high-bay area. Therefore, strategies that include probability sampling need to be developed in order to provide statistical confidence in negative results. Further, even if information on all the performance characteristics of the methods is not yet available, a probability sampling strategy could be developed from assumptions about the efficiency of some of the methods; where precise data are not available, a conservative, approximate number could be used. This would give agencies and the public greater confidence in negative test results than was associated with the sampling strategy used in 2001. CDC, EPA, and USPS, the federal agencies involved in sampling the postal facilities in 2001 to detect anthrax, undertook several activities. These included developing a sampling strategy and then collecting samples, using a variety of methods, and transporting, extracting, and analyzing those samples. Neither these activities nor the overall process was validated for anthrax testing. Consequently, the agencies had limited information for reliably choosing one method over another and lacked information on the detection limit to use when evaluating negative results. Federal agencies used different methods for collecting samples. While USPS generally used dry swabs to collect samples (the least effective method), CDC and EPA used multiple methods—dry swabs, premoistened swabs, wet wipes, and a high-efficiency particulate air (HEPA) vacuum—in various combinations or alone. However, none of the agencies' collection methods had been evaluated for anthrax detection in environmental samples. In the absence of empirical research, agencies had no information available for reliably choosing one method over another and no information on the limits of detection to use when evaluating negative results. The majority of the samples collected from the postal facilities tested negative. In all, federal agencies collected about 10,000 samples during initial testing. Of the 9,807 samples that the agencies collected, more than 98 percent, or 9,648, were negative; a little more than 1 percent, or 159, were positive. In all, 286 facilities were tested for anthrax contamination. Of these, Brentwood, Trenton, and Morgan were primary facilities; that is, these 3 facilities processed the original letters containing the anthrax. The results of the CDC, EPA, and USPS testing in the 286 postal facilities were largely negative; only 23 facilities tested positive. For 2 of these 23 facilities, test results were negative at first but positive on a subsequent testing. However, in 1 of these facilities—the Wallingford, Connecticut, facility—it was not until the fourth testing that positive results were obtained.
Testing results differed between the primary facilities and Wallingford. In the 3 primary facilities, results were positive each time a facility was tested, with the important exception of the two quick tests in Brentwood. In Wallingford, considered less likely to be contaminated, results were positive only on the fourth sampling. These results underscore the importance of retesting and cast doubt on the efficiency of the judgmental sampling strategy. Of the 263 facilities that tested negative, only 9 were sampled more than once. A facility in West Trenton tested negative even though an employee had contracted cutaneous anthrax; the facility was tested twice by the FBI and once by CDC, and a total of 57 samples were collected, all with negative results. Final, or confirmed, results will be negative if contamination is not present in a facility. However, a result can be erroneously negative for several other reasons, such as (1) the sampling method was not efficient enough, (2) samples were not collected from places where contamination was present, (3) not enough samples were collected, (4) not enough spores were recovered from the sample material, or (5) analysis of the sample extract was not sensitive enough to detect anthrax spores that were present. The agencies that sampled postal facilities in 2001—USPS, CDC, and EPA—did not use validated sample collection and analysis methods to perform their tests. According to these agencies, validated methods were not available at that time. The agencies conducted several interdependent activities: developing a sampling strategy and then collecting, transporting, and analyzing the samples to detect anthrax. Neither these activities nor the overall process had been validated for anthrax testing. Validation is a formal, empirical process in which an authority determines and certifies the performance characteristics of a given method. Investments are therefore needed to validate these methods, as well as the overall anthrax detection process. Validating the overall process, as well as the individual activities, is important because operational and health-related decisions are made on the basis of the testing results that the process generates. CDC and USPS officials said that they used targeted sampling; that is, they collected samples from specific areas considered—based on the agencies' technical judgments—more likely to be contaminated. Such judgments can be effective in some situations, for example, in determining the source of contamination in a disease outbreak investigation, provided results are positive. However, if the results are negative, the basic question—Is this building contaminated?—cannot be answered with statistical confidence. When the level of contamination is extremely high and dispersed in a facility, the method of sampling (for example, wipes versus swabs) is not as critical if the purpose is to find some contaminant. However, at lower levels, a way of interpreting the significance of negative results is needed, and this requirement underscores the importance of validated methods and of statistically based sampling strategies that are likely to find contamination at low levels. Probability-based sampling does allow conclusions, at specific levels of confidence, about testing results.
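The same arithmetic used to plan a probability-based sample can be run in reverse to show what a set of all-negative results does and does not rule out. The sketch below assumes randomly placed samples (which the 2001 targeted strategy did not use) and an illustrative 25 percent recovery efficiency; under those assumptions, even a set of 57 negative samples, the number collected at West Trenton, would rule out only contamination prevalence above roughly 20 percent at 95 percent confidence.

```python
def prevalence_upper_bound(n_negative: int, recovery_efficiency: float,
                           confidence: float = 0.95) -> float:
    """Highest contamination prevalence still consistent, at the given
    confidence level, with all `n_negative` randomly placed samples testing
    negative, when a sample at a contaminated location tests positive with
    probability `recovery_efficiency` (an illustrative assumption)."""
    alpha = 1 - confidence  # allowed chance of an all-negative run despite contamination
    return (1 - alpha ** (1 / n_negative)) / recovery_efficiency

# 57 all-negative samples, assumed 25% recovery efficiency -> about 20%.
print(f"{prevalence_upper_bound(57, 0.25):.0%}")
```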
Using a probability-based sampling strategy, together with validated methods for detecting contamination, would provide a known level of confidence with which to interpret any negative results. This would allow agencies to be more definitive in determining necessary actions. Figure 1 shows how lack of validation could affect individual activities—including the sampling strategy—as well as the results generated by the overall process. The lack of validated methods for assessing contamination in postal facilities impeded the agencies in responding to the incidents. Its significance was exemplified at the one postal facility where negative preliminary results were obtained by field-based methods of analysis whose limitations appear not to have been well understood by some agencies. Negative results do not necessarily mean a facility is free from contamination. As we reported, results can be negative if (1) samples were not collected from places where anthrax was present, (2) the detection limit of the method was greater than the actual contamination level, (3) not enough spores were recovered from the sample material, (4) analysis of the sample extract did not detect spores, or (5) anthrax was not present in the facility. In addition, while the 2001 events involved anthrax, many other biothreat agents exist. Differences in their characteristics mean different solutions. Accordingly, efforts to develop sampling strategies and to validate methods should address requirements specific to those threat agents as well. However, since addressing other agents would consume resources and time, all these efforts should be prioritized in a long-term strategy. The several agencies that dealt with the anthrax attacks generally worked well together, but we have identified areas that would have benefited from one agency's taking the lead in coordinating the response. Given the mission of DHS and its responsibilities, it appears that DHS is now well positioned to take a lead role in promoting and coordinating the activities of the various agencies that have technical expertise related to environmental testing. In addition, it is important that all participating agencies recognize and support DHS in that role and that they have an effective structure for participating in identifying and addressing the appropriate issues. Given the lack of validated methods for detecting anthrax contamination in facilities, we recommended in our 2005 report that the Secretary of Homeland Security develop a coordinated approach to (1) improve the overall process for detecting anthrax and (2) increase confidence in negative test results generated by that process. This approach would include working with agencies to ensure that appropriate validation studies of the overall process of sampling activities, including the methods, are conducted. Specifically, we recommended that the Secretary
1. take a lead role in promoting and coordinating the activities of the various agencies that have the technical expertise related to environmental testing;
2. ensure that a definition of validation is developed and agreed on;
3. guarantee that the overall process of sampling activities, including methods, is validated so that performance characteristics, including limitations, are clearly understood and results can be correctly interpreted;
see that appropriate investments are made in empirical studies to develop probability-based sampling strategies that take into account the complexities of indoor environments; 5. ensure that appropriate, prioritized investments are made for all 6. make sure that agency policies, procedures, and guidelines reflect the results of such efforts. When we issued our report, CDC, DHS, and USPS agreed with our conclusion—that methods for detecting anthrax contamination in facilities were not validated—and with the thrust of our recommendations—calling for a coordinated, systematic effort to validate the methods to be used for such testing. But they (1) disagreed with or expressed concern about our conclusions or the recommendation dealing with targeted versus probability sampling, (2) emphasized that validated testing methods for anthrax were not available in 2001 and that federal and state organizations did the best they could under the circumstances, and (3) identified factors or issues that need to be considered in validating testing methods. After we issued our 2005 report, it became evident that there was uncertainty over which agency would take the lead role in improving the overall process for detecting anthrax and how studies were to be funded. For example, DHS stated that while it has overall responsibility for coordinating the federal response during future biological attacks, EPA had the “primary responsibility for establishing the strategies, guidelines, and plans for the recovery from a biological attack” and HHS had the lead role for any related public health response and guidelines. DHS also stated that it coordinated regularly with EPA’s National Homeland Research Center to exchange information on research needs and to discuss priorities and gaps for a wide range of security-related research areas. DHS stated that it would coordinate with EPA to ensure that appropriate investments were made to explore improved sampling. However, it is unclear to us how DHS would ensure that appropriate prioritized investments are made for all biothreat agents and how such priorities and gaps would be addressed. On the basis of these uncertainties, we recommended in our May 9, 2006, testimony that DHS’s approach to validating the overall process should start with a strategic plan that includes a road map outlining how individual agencies efforts would lead to the validation of the individual activities as well as the overall process, noting that such a plan would assist DHS in monitoring progress and measuring agency performance toward improving the detection of anthrax and other prioritized threat agents. On May 19, 2006, DHS officials stated that while they concurred with the recommendations from our report and accepted the overall responsibility to ensure the methods will be validated, they stated that “there are legal limitations in DHS authority to direct the activities of other agencies.” They said that while they take a lead role in coordinating the meetings and in bringing people from different agencies together, they cannot guarantee that the overall process of sampling will be validated because different agencies have responsibility for different aspects of validation, and DHS’s control over other agencies actions and budgets is ultimately limited. They stated that DHS cannot ensure and guarantee that validation studies would be done, since this is a shared responsibility among different agencies. 
Also, since validation would require a sustained effort over a long period, DHS noted that it could not mandate the commitment of other agencies' funds, over which it has no control. DHS officials told us in July 2006 that they recognize that DHS is the principal agency responsible for coordinating the federal response and that they would make a good faith effort toward developing, by the end of calendar year 2006, a strategy for validation studies and a road map outlining how individual agencies' efforts would lead to the validation of the overall sampling process. On March 27, 2007, DHS told us that it had developed a working draft of the strategic plan and the road map by December 2006 but that it could not share these with us because they were not final.

Until responsibility is accepted for ensuring that sampling activities will be validated, the fate of the validation process will remain uncertain. Without validation, if another anthrax attack were to occur tomorrow, federal civilian agencies would not be able to conclude with any given level of statistical confidence, in cases of negative results, that a building is free of contamination.

Mr. Chairman, this concludes my prepared remarks. I would be happy to respond to any questions that you or other members of the subcommittee may have at this time.

For further information regarding this statement, please contact Keith Rhodes at (202) 512-6412, or [email protected], or Sushil K. Sharma, Ph.D., Dr.PH, at (202) 512-3460, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. William Carrigg, Barbara Chapman, Crystal Jones, Penny Pickett, and Elaine Vaurio made key contributions to this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In September and October 2001, letters laced with Bacillus anthracis were sent through the mail to two U.S. senators and members of the media. Postal facilities in New Jersey, Washington, D.C., and elsewhere became heavily contaminated. The anthrax incidents highlighted major gaps in civilian preparedness to detect anthrax contamination in buildings. GAO was asked to describe and assess federal agencies' activities to detect anthrax in postal facilities, to assess the results of the agencies' testing, and to assess whether the agencies' detection activities were validated.

Federal agencies conducted several sampling activities, including developing a sampling strategy and collecting, transporting, extracting, and analyzing samples. They primarily collected samples from specific areas, such as mail processing areas, using their judgment about where anthrax would most likely be found; that is, they used targeted sampling. The agencies did not use probability sampling, which would have allowed them to determine, with some defined level of confidence, whether a building is contaminated when all results are negative. The results of the agencies' testing in 286 postal facilities were largely negative: no anthrax was detected. However, because the agencies did not use validated sample collection and analytical methods, there can be little confidence in negative results. With a validated process, agencies and the public could be reasonably confident that test results generated by that process would be reliable.

The Department of Homeland Security (DHS) is the principal agency responsible for coordinating the federal response. Thus, in its 2005 report, GAO recommended that the Secretary of Homeland Security develop a coordinated approach to improve the overall process for detecting anthrax and increase confidence in negative test results generated by that process. DHS stated that while it has overall responsibility for coordinating the federal response during future biological attacks, other agencies have the lead responsibility for validation. Therefore, uncertainty over which agency would take the lead role (that is, who is in charge) in improving the overall process for detecting anthrax, including validation of the methods, continued after GAO issued its report.

On the basis of these uncertainties, GAO recommended in its May 9, 2006, testimony that DHS's approach to validating the overall process start with a strategic plan that would include a road map outlining how individual agencies' efforts would lead to the validation of the individual activities as well as the overall process, noting that such a plan would assist DHS in monitoring progress and measuring agency performance toward improving the detection of anthrax and other prioritized threat agents. While DHS generally agreed with these recommendations, it stated that it cannot ensure that validation studies would be done, since "there are legal limitations in DHS authority to direct the activities of other agencies." Also, since validation would require a sustained effort over a long period, DHS noted that it could not mandate the commitment of other agencies' funds, over which it has no control. Until responsibility is accepted for ensuring that sampling activities will be validated, the fate of the validation process will remain uncertain.
Without validation, if another anthrax attack were to occur tomorrow, federal civilian agencies would not be able to conclude with any given level of statistical confidence, in cases of negative results, that a building is free of contamination.
Pursuant to the Federal Property and Administrative Services Act of 1949, the General Services Administration (GSA) was created to manage and acquire federal government space and administrative and operating supplies in order to eliminate duplicative functions within government and to establish a professional resource that would maximize the government's effectiveness in obtaining supplies and services. Today, GSA's Federal Supply Service is responsible for supplying and procuring goods and services through three major programs: the special order, stock, and schedules programs. In the special order program, agencies order items from GSA; GSA places the agencies' orders with vendors; and the vendors deliver the items to the agencies. In the stock program, GSA orders items from vendors, who deliver the items to GSA's warehouses; agencies order the items from GSA and receive them from the warehouses. In the schedules program, agencies place orders directly with vendors holding GSA contracts, who deliver the items directly to the agencies.

The Federal Acquisition Streamlining Act of 1994 (FASA) revised and streamlined the procurement laws of the federal government. Section 1555 of FASA (40 U.S.C. 481(b)(2)) gives GSA the authority to establish a cooperative purchasing program through which state, local, and Indian tribal governments, as well as the government of Puerto Rico, could use GSA's federal supply schedules program to purchase needed goods and services. Under section 1555, eligible governments, upon their request, could purchase items directly from supply schedule vendors under the same terms and conditions that GSA has established for federal agency purchases. The conference report on FASA indicated that individual supply schedule vendors would not have to make the products or services on the supply schedules available to nonfederal users, such as state and local governments, unless the terms of the schedule contract so provided. FASA explicitly precludes GSA from authorizing any state, local, Indian tribal, or Puerto Rican government to order existing stock or inventory from federally owned and operated, or federally owned and contractor operated, supply depots, warehouses, or similar facilities. Thus, FASA excludes these governments from purchasing goods and services through GSA's federal stock program.

The federal supply schedules program is one of GSA's largest programs for providing goods and services to federal agencies. In fiscal year 1996, GSA's sales through the schedules program accounted for about 72 percent, or about $4.8 billion, of the approximately $6.6 billion in agency purchases through GSA's schedules, stock, and special order programs. As shown in figure 1.1, fiscal year 1996 stock program sales of about $579 million accounted for only about 9 percent of GSA's sales, while special order program sales of about $1.3 billion accounted for about 19 percent.

Products from the supply schedules program are available on single-award schedules, multiple-award schedules, and new introductory product schedules, depending on the commodity. Single-award schedules consist of contracts with one vendor for the delivery of a particular product or service to a specified geographic area. Prospective vendors compete for the GSA contract to provide the product or service to government agencies, normally at the lowest price.
Multiple-award schedules consist of contracts awarded to more than one vendor for comparable (but not necessarily identical) commercial supplies or services for delivery within the same geographic area. New introductory product schedules provide the means for new or improved products to enter the federal supply system. Once a vendor's product is accepted for inclusion on a new introductory product schedule, if sufficient demand for that item is generated after a 3-year period, the item is to be transferred to one of GSA's other supply programs.

GSA's Federal Supply Service negotiates and awards contracts for products and services available through the majority of federal supply schedules. The Service issues solicitations, receives offers from prospective vendors, negotiates with them on product and service prices as well as terms and conditions of sale, and awards the contracts. The contracts are indefinite-delivery contracts that give vendors the right to sell goods and services to the government during the period that the contract is in effect; contracts commonly remain in effect for more than 1 year. Federal agencies order products and services directly from a vendor and pay the vendor directly.

In fiscal year 1996, there were 146 schedules. GSA has responsibility for managing 133 schedules, and it has given the Department of Veterans Affairs (VA) responsibility for managing 13 schedules: the schedule for pharmaceuticals and 12 schedules for medical equipment, devices, and supplies and certain food items, such as cookies and cereals. Fiscal year 1996 sales through VA's schedules totaled about $1.9 billion.

A large number of vendors negotiate contracts with the Federal Supply Service or VA in order to provide products to federal agencies. Vendors include businesses that manufacture products as well as dealers or distributors that sell and service products. In fiscal year 1996, GSA had about 5,300 contracts with vendors that supply goods or services through its single-award or multiple-award schedules, while VA had 1,257 contracts. About 74 percent of these contracts were with small businesses. (See app. I for a listing of the 146 schedules as well as sales made through the schedules to large and small vendors.)

The supply schedules program provides several advantages to both federal agencies and vendors. For example, agencies have the option of ordering small quantities of commonly used goods and services without using the traditional procurement process. Also, agencies know that GSA is responsible for ensuring that all procurement regulations have been followed in awarding the schedules contracts and making items available. For example, multiple-award schedules conform to the requirements of the Competition in Contracting Act and are competitive in that participation has been open to all responsible sources. In addition, prices negotiated by the Federal Supply Service and the vendors are to be based on each vendor's best discounts for certain categories of customers and on sales information for top-selling items within product or service groups. Vendors also benefit because their commercial products are exposed to a large number of potential customers. Also, vendors expend less effort to sell products to federal agencies when their items are available through the schedules program because of the reduced paperwork.
For example, a business would not have to prepare a separate offer in response to solicitations from every federal agency it wants to supply.

Since 1994, GSA has taken several actions intended to make it easier for federal agencies to obtain commercial goods and services through its supply schedules program. For example, GSA has simplified ordering procedures to reduce the amount of paperwork involved. In addition, agencies have the option of placing orders of $2,500 or less with any schedule vendor of their choice. Also, when placing orders of more than $2,500, agencies are no longer required to fully justify a decision not to purchase an item at the lowest price; instead, agencies are to review at least three price lists or consider other alternatives on the schedules. To further simplify ordering through the schedules program, GSA is in the process of deploying an electronic ordering system for customer access to the full range of GSA supplies and services. GSA plans to have this system, which is to be available through the Internet, fully operational by the end of fiscal year 1997.

GSA is also making the use of the supply schedules program optional for all executive branch agencies and is eliminating mandatory use provisions in its contracts. In addition, GSA is requesting that vendors be as expeditious as possible and identify items that can be delivered faster than normal and expedited delivery times; vendors are also requested to identify items that can be delivered overnight or within 2 days. Maximum order limitations are also being removed, and GSA has developed new procedures allowing vendors to accept "any size" order. In addition, customers are encouraged to request price decreases from vendors before placing orders exceeding a certain size. Also, vendors are allowed to offer individual agencies price reductions without passing these reductions on to all other federal agencies.

The federal supply programs were initially for use primarily by federal agencies and the District of Columbia. However, since 1949, Congress has authorized a variety of other entities to use GSA's procurement services, including the federal supply schedules. For example, the Foreign Assistance Act of 1961 provides that the President may authorize certain countries, international organizations, the American Red Cross, and voluntary nonprofit relief agencies to use GSA's sources of supply. Many Indian tribal governments also have been authorized to make purchases from GSA under the Indian Self-Determination and Education Assistance Act of 1975. In 1978, Gallaudet College, Howard University, and certain other charitable institutions or nonprofit organizations, as well as firefighting organizations cooperating with the Forest Service, were authorized to make purchases through GSA. In 1992, Congress provided the governments of American Samoa, Guam, the Northern Mariana Islands, the Trust Territory of the Pacific Islands, and the Virgin Islands the authority to make purchases through GSA. In 1993, Congress authorized law enforcement agencies involved in counter-drug activities to make purchases through GSA.

The 1993 report of the National Performance Review (NPR) recommended that state and local governments, grantees, and certain nonprofit agencies be allowed to use federal supply sources. In addition, NPR recommended that federal agencies be allowed to enter into cooperative agreements to share state and local government supply sources.
The basis for the recommendation was the belief that consolidated government procurement tends to maximize the economic advantage of volume buying, with lower costs to the taxpayer. The concept of cooperative purchasing was not unique to NPR. Cooperative purchasing has existed in varying forms since at least the 1930s, when various governments started joining forces to make intergovernmental cooperative purchases. In addition to the tangible benefit of cost savings, members of such cooperative purchasing groups cite other benefits, including the exchange of procurement information.

NPR's report noted that even though federal agencies, the District of Columbia, and some other organizations were authorized by law to use federal supply sources, state and local governments generally were not. The report concluded that allowing governments to enter into agreements to use one another's contracts would reduce administrative staffs and costs and that all levels of government would be able to negotiate better prices as a result of the increased volume of sales under the contracts.

A cooperative purchasing program that would allow state, local, Puerto Rican, or Indian tribal governments to use the federal supply schedules was enacted as section 1555 of FASA, which amended the Federal Property and Administrative Services Act. The section provided GSA with considerable discretion over the way the program is to operate and the specific federal supply schedules it may authorize these governments to use. The section also allowed GSA to charge state, local, Indian tribal, or Puerto Rican governments a fee for any administrative costs it incurs by allowing these governments to use the schedules. FASA stipulated, however, that these governments are not authorized to use GSA's stock program. At the time the provision was being considered by Congress, little debate occurred over any possible adverse effects of allowing state and local governments to use GSA's schedules program.

On April 7, 1995, GSA published a Federal Register notice that presented, and requested comments on, its proposed implementation plan for section 1555. As proposed, GSA planned to make the schedules available to the authorized governments upon their request unless the GSA contracting officers responsible for specific schedules determined that it would not be appropriate to do so. For example, schedules would not be made available to nonfederal users if doing so would raise the prices that federal agencies pay for items on those schedules. Under GSA's proposal, individual schedule vendors would be able to elect whether or not to make the products or services they sell through the schedules available to authorized nonfederal users; GSA officials said that vendors electing to do so could accomplish this through modifications to their existing contracts. GSA planned that these nonfederal users would place orders directly with supply schedule vendors. As authorized by FASA, GSA also planned to charge the governments an administrative fee for the use of the schedules, as GSA converts the supply schedules program from a federally appropriated program to an operation funded by fees charged for services. The administrative fee was to be included in the vendors' prices for each schedule item. Vendors, in turn, would transfer the fees collected to either GSA or VA.
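To make the fee mechanics concrete, the sketch below shows one straightforward way an administrative fee embedded in schedule prices could operate. The 1 percent rate and the dollar figures are hypothetical illustrations of ours, not figures from GSA's plan.

    FEE_RATE = 0.01  # hypothetical administrative fee rate

    def schedule_price(base_price: float) -> float:
        # Vendor's schedule price with the administrative fee built in,
        # so the nonfederal customer sees a single price.
        return base_price * (1.0 + FEE_RATE)

    def fee_to_remit(collected_price: float) -> float:
        # Portion of the collected price the vendor transfers to GSA or VA.
        return collected_price * FEE_RATE / (1.0 + FEE_RATE)

    price = schedule_price(1000.00)
    print(round(price, 2), round(fee_to_remit(price), 2))  # 1010.0 10.0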
GSA does not envision that the supply schedules program, or the items available through it, would change significantly as a result of the cooperative purchasing program. In its April 1995 Federal Register notice, GSA cautioned that schedule contracts would be established only to meet the needs of federal agencies, and nonfederal users would be authorized to use the schedule contracts only to the extent that they had a need for the same items or services. GSA officials subsequently told us that GSA would determine, on a case-by-case basis, which schedules should be available to nonfederal users, taking into consideration the potential effect that opening up a schedule may have on the federal government. According to these officials, if allowing state or local governments the option of using a schedule could result in increased prices to federal agencies, GSA would not make the schedule available to nonfederal users.

In its Federal Register notice, GSA announced that it had determined that two schedules, one for drugs and pharmaceutical products and one for medical equipment and supplies (in vitro diagnostic substances, reagents, test kits, and sets), should not be made available to nonfederal users because doing so would not be in the interest of the federal government. GSA based its determination on VA's recommendation that these schedules not be made available because of unique statutory requirements imposed by the Veterans Health Care Act of 1992, which, according to GSA's Federal Register notice, would result in increased prices for products on these two schedules. The potential effects of opening the pharmaceutical schedule on drug prices will be discussed in a separate GAO report.

Following enactment of FASA, concerns emerged from several industries that, because of their market structure or other factors, they would be subject to adverse effects, such as lost sales, from cooperative purchasing. The Clinger-Cohen Act of 1996 suspended GSA's authority to implement the cooperative purchasing provision of FASA. The 1996 act also mandates that we report on the implementation and effects of cooperative purchasing and submit our report to both GSA and Congress within 1 year of enactment. The 1996 act further requires GSA to submit comments to Congress on our report within 30 days. GSA's authority to implement the cooperative purchasing program under section 1555 of FASA is suspended by the 1996 act until 18 months after the act's enactment or until 30 days after GSA's comments on our report are submitted to Congress, whichever is later.

The objectives of this report were to assess (1) the potential effects of cooperative purchasing on state and local governments, the government of the Commonwealth of Puerto Rico, Indian tribal governments, and federal agencies; (2) the potential effects of cooperative purchasing on industry, including small businesses and local dealers; and (3) GSA's plans to implement the cooperative purchasing program. The Clinger-Cohen Act of 1996 mandated that our report include assessments of the potential effect of the cooperative purchasing program on (1) state and local governments, the government of the Commonwealth of Puerto Rico, and Indian tribal governments and (2) industry, including small businesses and local dealers. The Conference Report accompanying the 1996 act further directed that we include an assessment of the effects on costs to federal agencies of state and local governments' use of the federal supply schedules.
To assess the potential effect of the cooperative purchasing program on state, local, and Indian tribal governments and on federal agencies, we collected and reviewed data describing the procurements and procurement methods that each level of government used. To assess the potential effect on state governments, we conducted a September 1996 nationwide survey of states and territories to obtain information on state laws or practices that would encourage or inhibit states' use of the federal cooperative purchasing program, the extent to which they would use the program, and for what purposes. Responses were obtained from 48 states and 2 territories. We did not attempt to verify the responses made by state officials or the reasons given for their responses about their potential use of the federal supply schedules program. (App. II provides the results of this survey.)

We also contacted associations that represent state and/or local governments, including the National Association of State Purchasing Officials, to obtain their members' views on the cooperative purchasing program and any relevant data these associations had on the program's potential effect. We obtained and reviewed available data from a nationwide survey conducted by the National Association of State Purchasing Officials in 1992 that asked whether the laws in the individual states would allow the use of the federal supply schedules; whether state purchasing officials expected to use the cooperative purchasing program; and what, if any, advantages and disadvantages these officials saw in the program. (App. III provides a listing of all associations whose views we obtained.) In addition, we reviewed comments made by state and local governments in response to GSA's April 1995 Federal Register notice.

To more fully understand the factors that may influence state and local governments' decisions on whether to make purchases through the federal cooperative purchasing program, we contacted 29 purchasing officials in California, Montana, New York, West Virginia, and Puerto Rico to obtain information on procurement practices. We selected these states with a view to obtaining diversity in geographic location, geographic size, and population. In addition to obtaining information from each state's and Puerto Rico's central purchasing offices, we selected 24 program agencies in the four states from which to obtain information. These agencies included each state's transportation department, a state university or university system, and an agency suggested by the state procurement agency. They also included three local government agencies, so that we could provide similar information on those local agencies' purchasing requirements and practices. We selected the program agencies to ensure a range of potential users of a cooperative purchasing program, and we selected the local government program agencies, in consultation with state purchasing officials, to include both large and small local government entities. Our selection was not designed to produce a statistically valid sample of state and local government agencies that would be eligible to participate in a cooperative purchasing program; the purpose was to supplement our other information and provide an indication of the factors that would influence state and local agencies' decisions on whether to use the federal cooperative purchasing program.
In addition, we asked state and local officials in these four case study states whether their procurement laws or policies would allow them to use the federal cooperative purchasing program and, if not, about the nature of the laws or policies that would prohibit or limit their use of the program. Although we did not attempt to determine whether the views of the state and local officials regarding these laws and policies were correct, we did review the laws to understand the basis for their positions. We also obtained their views on whether they wanted access to the federal supply schedules and the reasons for their views. Further, we contacted the Puerto Rican government's central purchasing office to obtain Puerto Rico's views on the cooperative purchasing program and information on its laws that may affect its use of the program.

To determine the extent to which state or local governments could or would be likely to use the program, we conducted case studies in the four states. We asked the 24 selected program agencies to provide procurement documentation (i.e., invitations for bids, contracts, purchase orders, invoices, etc.) used to make recent purchases. We asked that these purchases reflect items that the agencies were interested in purchasing through GSA because the items were (1) routinely purchased (i.e., high volume); (2) consumed a large portion of the procurement budget (i.e., high dollar volume); (3) difficult to procure; or (4) available through GSA's schedules program, and the state or local agency believed the GSA vendor might be a better source. We received procurement documentation from 16 of the 24 agencies. We did not determine why the agencies selected the purchases for which they provided us documentation.

We provided the procurement documentation to GSA, which had its contracting officers determine whether the same or comparable items were available through GSA's supply schedules program and, if so, how GSA's contract terms and conditions of sale, including price, compared with the terms and conditions of sale obtained by state and local agencies. We did not verify GSA's determinations. (App. IV presents the results of this comparison.) Although the items represented by the procurement documentation obtained from state and local agencies do not comprehensively represent the types of goods or services these agencies could or would purchase through the federal cooperative purchasing program, they do provide an indication of the experience state and local agencies may encounter when considering such purchases. Neither we nor GSA determined whether the quantities of items purchased by individual state or local agencies exceeded vendors' maximum order limits, and hence were potentially eligible for additional discounts from the vendors' list prices, or whether actual prices paid by federal agencies were less than schedule prices.

To better understand how state and local law enforcement agencies have used a similar program that has given them access to federal supply schedules to support state and local drug enforcement activities, we contacted state officials in seven states. We selected them either because they participated in GSA's pilot for this program or because of their geographic location. We asked these officials to describe their use of the program, including their experiences with the availability and prices of products on the federal supply schedules.
In addition, to obtain similar information, we contacted a purchasing official in the Virgin Islands, which already had access to the federal supply schedules, and representatives of several cooperative purchasing arrangements under which state or local governments have agreed to pool their purchases of certain products.

We also used the Input-Output Accounts for the U.S. economy, provided by the Department of Commerce's Bureau of Economic Analysis, for data on the types of goods and services that state and local governments purchase and to compare their purchases with nondefense federal purchases. The Input-Output Accounts show the relationship among all industries in the economy (including the various levels of government) and all the commodities that they produce and use. We used these data to indicate the pattern of industry purchases made by the federal, state, and local governments and to determine the extent to which these governments' patterns of purchases are similar or different. We also used these data to indicate the extent to which state and local governments purchase items from industries whose products might be available on federal supply schedules.

We used these national data at an aggregate level to get a general indication, rather than a precise measure, of the pattern of federal, state, and local purchases among industry groups. We did not use these data to provide a precise measure of the relationship between the various levels of government and the industries that might be affected. First, the Input-Output Accounts are organized along industry classifications that differ from those of GSA's supply schedules. Second, as the Bureau of Economic Analysis notes, the most recent data in the Input-Output Accounts are for 1987, and the patterns of purchases could have changed since that time. Our use of these data entails an assumption that there have not been major changes in interindustry relationships (including those between state and local governments and the industries that supply them). We believe this to be a reasonable assumption, given our use of the data for describing, in general terms, state and local purchases and comparing them with federal purchases.

To assess the potential effect of the cooperative purchasing program on Indian tribal governments, we discussed Indian tribal governments' use of the federal supply schedules with Bureau of Indian Affairs (BIA) officials in the Department of the Interior and with GSA officials. We also contacted three Indian tribal governments that have entered into agreements with the federal government to assume responsibility for programs that would otherwise be the responsibility of the federal government, to determine whether these tribal governments have used their existing authority to use GSA as a source of supplies and services. We selected these tribal governments on the basis of a BIA official's recommendation that, as large tribes, they were likely to be among the heaviest users of GSA's supply programs and thus the most knowledgeable about them. Although the tribal governments sampled do not represent all Indian tribal governments, they do provide an indication of the procurement procedures and practices of Indian tribal governments that have entered into such agreements.
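As an illustration of the aggregate comparison just described, the sketch below computes each level of government's spending pattern across industry groups from a small table. The paper and allied products figures are the 1987 Commerce figures cited later in this report; the other two rows are hypothetical placeholders, and the point is only the mechanics of the comparison.

    # Purchases in millions of dollars: (federal nondefense, state and local).
    # Paper figures are the 1987 Commerce figures cited later in this report;
    # the other rows are hypothetical placeholders.
    purchases = {
        "Paper and allied products": (243, 2300),
        "Industry group B (hypothetical)": (1500, 1200),
        "Industry group C (hypothetical)": (800, 3900),
    }

    fed_total = sum(fed for fed, _ in purchases.values())
    sl_total = sum(sl for _, sl in purchases.values())

    for group, (fed, sl) in purchases.items():
        # Express each purchaser's spending as a share of its own total
        # outlays, so the two patterns are comparable despite different sizes.
        print(f"{group}: federal {fed / fed_total:.1%}, state/local {sl / sl_total:.1%}")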
To assess the potential effect of the cooperative purchasing program on costs to the federal government, we obtained information from the Departments of Defense, Health and Human Services, the Interior, and Justice and from VA to determine whether they had conducted any assessments of the program and what effects they identified as likely. These departments were selected because they are among the largest users of GSA's schedules program. In addition, we obtained the views of GSA's Acquisition Management Center and VA on the effect that opening up the supply schedules would have on schedule vendors and on federal agencies purchasing through the schedules program.

To assess the potential effect of the cooperative purchasing program on industry, including small businesses and local dealers, we analyzed data from the Department of Commerce's Input-Output Accounts (discussed previously) to estimate the government share of total sales for industry groups. We used these data to provide an indication, rather than a precise measure, of the extent to which various broadly defined industry groups rely on sales to the federal, state, or local governments. We also obtained information from industry associations, including those that represent small business, to identify factors that may affect those industries; these associations included the American Small Business Association, the Environmental Industry Association, the Health Industry Manufacturers Association, and the National Retail Federation. (See app. III.)

In addition, we selected vendors from selected GSA schedules to obtain their views on the potential effect of allowing nonfederal agencies to purchase through the federal cooperative purchasing program. We selected schedules that GSA officials believed nonfederal governments would have high interest in; these included the computer schedules (including the telecommunications equipment schedule and the microcomputers schedule), the special industry machinery schedule (copying equipment, supplies, and services), and the furniture systems schedule. We also selected schedules and vendors in industries whose associations had informed GSA or us that they would or could be negatively affected should specific schedules be made available to nonfederal agencies. We also reviewed public comments GSA received from industry in response to its April 1995 Federal Register notice and contacted several businesses and trade associations that expressed concern over GSA's proposed plan for implementing the program; this group included dealers and distributors of heavy equipment. In addition, we contacted companies that supplied items to state and local governments, identified through the procurement documentation provided by state and local agencies (as described above), to obtain their views on how the program could affect their sales to state and local agencies. Even though the industry groups and companies contacted do not represent all industry groups or all companies, they do provide an indication of the effects that businesses expect from the federal cooperative purchasing program.
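The reliance estimate described at the start of this passage reduces to a simple ratio of government purchases to an industry group's total output. The sketch below shows the arithmetic, reusing the 1987 paper purchases from the previous example and a hypothetical total-output figure.

    # Government purchases as a share of an industry group's total sales.
    # Purchase figures are the 1987 Commerce figures cited in this report;
    # the total output figure is a hypothetical placeholder.
    gov_purchases = 243 + 2300   # federal nondefense + state and local, $M
    total_output = 120000        # hypothetical total industry output, $M

    print(f"Government share of sales: {gov_purchases / total_output:.1%}")  # 2.1%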
To assess GSA's plans for implementing the cooperative purchasing program, we held discussions with GSA's Deputy Associate Administrator, Office of Acquisition Policy; the Director, GSA's Acquisition Policy Division; the Assistant Commissioner, Federal Supply Service, Office of Acquisition; the Assistant Commissioner, Federal Supply Service; and the Director, Acquisition Management Center, Federal Supply Service; as well as the director of GSA's automotive center and the contracting officers for the selected schedules mentioned previously. In addition, we contacted contracting officers for several schedules, including those for which companies informed GSA that they would be negatively affected should specific schedules be made available to nonfederal agencies. We also talked with representatives of VA's National Acquisition Center, which has primary responsibility for the pharmaceutical and medical equipment, supplies, and devices schedules, including the division chiefs for 13 schedules.

We recognize that there are limits to our ability to predict the effects of opening the supply schedules on state and local governments or on industry. These limits stem in part from the unavailability of data. For example, except for VA's Pharmacy Prime Vendor programs, the various agencies generally do not have detailed expenditure data that readily indicate what goods and services are purchased and how, and we do not have access to nonfederal contractors' records. Even with these data, however, we would not be able to predict how state and local governments would choose to use the schedules, how industry would respond to any changes in state and local purchasing arrangements, or how contract terms would change.

We requested comments on a draft of this report from the Acting Administrator of GSA and the Secretary of the Department of Veterans Affairs; the Coalition for Government Procurement, which represents businesses supplying about 75 percent of federal purchases through the schedules program; and the National Association of State Purchasing Officials, which serves the purchasing administrators in the 50 states and U.S. territories. The Acting Administrator of GSA, VA's Deputy Assistant Secretary for Acquisition and Materiel Management, and the Chair and Co-chair of the National Association of State Purchasing Officials' Federal/State Relations Committee provided written comments, which are included as appendices V, VI, and VII of this report, respectively. The Executive Director and other representatives of the Coalition provided oral comments to us on January 9, 1997. Comments from these agencies and organizations are discussed at the end of chapters 2, 3, 4, and 5, as appropriate. We conducted our work from July to December 1996 in accordance with generally accepted government auditing standards.

Many state and local governments we contacted want access to the federal supply schedules because they perceive potential benefits from the use of cooperative purchasing. However, these potential benefits may be limited because of (1) state or local laws, ordinances, or policies that direct how or where state or local purchases can be made; (2) the unavailability of needed goods or services through the schedules program; (3) higher costs or unattractive sales conditions for goods or services through the program; and (4) the need for nonfederal governments to maintain the capacity to purchase items they do not buy from the schedules program.
The federal cooperative purchasing program is not likely to have a substantial effect on Indian tribal governments because many tribes already have access to GSA's federal supply schedules for many of their programs. Although GSA believes the cooperative purchasing program has the potential to result in lower schedule prices because of the increased sales that GSA vendors may be able to make through the program, the extent to which this will happen is unclear because of many factors, including those that may limit nonfederal government agencies' use of the program and uncertainty over how businesses will react. Given GSA's plan not to open schedules when adverse effects on federal agencies are anticipated, there appears to be little risk that federal agencies will be adversely affected if GSA effectively implements the program.

Most state and local governments we contacted indicated that they want the option of using the GSA supply schedules. State and local government officials we contacted said that such an option would provide several potential benefits, including the ability to obtain more competitive prices, a wider selection of goods and services, reduced purchasing turnaround times and administrative time and costs, and additional negotiating leverage with their traditional suppliers. The results of our nationwide survey of state purchasing officials, as well as discussions with 26 state, local, and Puerto Rican government purchasing officials, indicate that these state and local governments are generally in favor of having GSA supply schedules available for their use because of these perceived benefits. Similarly, in its January 1997 report, GSA found that state and local governments want access to the federal supply schedules because of perceived benefits.

Thirty-four of the 48 states and 2 territories that responded to our survey, including Puerto Rico, indicated that they would use the federal schedules program for making purchases. Even where the program would not be used for making purchases, respondents saw other uses: of the 50 respondents, 38 said they would use the schedules for price comparisons, 24 said they would use them for benchmarking, and 15 said they would use them to negotiate with vendors. It is important to note that many states may already have access to schedules information, including through the Internet, and may already be using the schedules for these three purposes.

In addition to our nationwide survey, we contacted purchasing officials in 29 agencies in California, Montana, New York, Puerto Rico, and West Virginia to obtain their views on the federal cooperative purchasing program. We also reviewed comments GSA received from state or local agencies in response to its Federal Register notice. All 26 of the agency purchasing officials who responded to our information requests said they favored having access to the federal supply schedules. Purchasing officials from seven of the agencies said that the supply schedules offer the potential for obtaining lower prices on popular items they purchase, such as computers, furniture, and office equipment. For example, the Assistant Director of Facilities for the West Virginia Office of Higher Education stated that the supply schedules would complement what state colleges and universities are already doing by providing an additional source of potentially lower prices that could result in better use of state funds.
The Business Service Officer for the California Highway Patrol said the agency would benefit because some of GSA's prices would be lower than those the agency can obtain and because knowing that the agency has access to GSA vendors would force its current contractors to be more competitive and possibly lower their prices. In comments on GSA's Federal Register notice, the Purchasing and Material Manager for the City of Chandler, Arizona, stated that the federal cooperative purchasing program would benefit cities such as Chandler, which cannot obtain prices as favorable as GSA's because of the smaller quantities they order.

In addition, several state and local government officials said that having the schedules available could provide a greater selection of items. For example, in comments to GSA, the Executive Director of the Lexington-Fayette Urban County Housing Authority in Kentucky said that it had a high interest in using the supply schedules to purchase commonly used goods and services because small purchases could be simplified and the choice of items increased because of the large number of GSA vendors. He said his agency could benefit from the wide range of items available on the schedules because virtually every GSA schedule other than medical, dental, or laboratory contained items the housing authority used on a regular basis. A purchasing official from the New York State Office of General Services said state agencies can benefit from using the schedules because they provide a greater choice of products, brand names, and sizes.

Several state and local government procurement officials also said that they could realize administrative savings of both time and money by ordering through the federal supply schedules. For example, procurement officials from Albany, New York, and Missoula, Montana, said that administrative functions and their associated costs could be reduced; these include the time and cost necessary to develop formal solicitation packages; the time and personnel costs to evaluate, negotiate, administer, and award contracts; and, in some instances, the inventory costs to stock items. The Director of Purchasing for Puerto Rico's territorial purchasing agency said that the agency would not have to spend as much time and money developing solicitations annually; according to the director, Puerto Rico currently awards over 120 competitively bid contracts with local vendors. In its comments on GSA's Federal Register notice, the city of Chandler, Arizona, estimated that in fiscal year 1995 it spent from $1,500 to $2,000 per contract to obtain bids for items that were also available through the schedules program; the city's Purchasing and Material Manager told us these items included computers, office supplies, janitorial supplies, and plumbing and electrical hardware.

Procurement officials from the New York State Office of General Services and the city of Albany, New York, said that procurement lead times could also be shortened for state and local governments because they would be able simply to place a delivery order against an existing supply schedules contract. An official from Louisiana State University commented that the university could eliminate about 8 weeks from the time it usually takes to receive and review bids on systems furniture for its Computing Services Building if it could use the GSA contract for systems furniture.
GSA asked the National Institute of Governmental Purchasing, an association of federal, state, and local government procurement officials, to survey its members to determine their interest in participating in the cooperative purchasing program. In its January 1997 summary of the survey results, GSA found that the majority of respondents would participate in the program if it became available. Of the 131 respondents, 111 indicated that they would participate, even though 31 indicated that local ordinances and laws might be a barrier. The reasons respondents most often cited for wishing to participate were better pricing and administrative ease. Some concerns, however, were also cited about legal restrictions, quality, and price, as well as the administrative complexity of using the federal supply schedules. The schedules cited as being of most interest to respondents included the computer, furniture, office equipment, office supplies, and signs schedules; each was cited by more than 30 respondents, although 15 respondents stated they were interested in all schedules. Overwhelmingly, respondents indicated a strong desire for some form of training on using the schedules, including videotape and Internet training. GSA noted that it has training videos available and would provide training programs.

U.S. Department of Commerce data suggest that state and local governments could benefit substantially from having access to the federal supply schedules, depending on the extent to which they use them. Commerce data for 1987 suggest that state and local governments collectively spend substantially more than the federal government does for several of the types of items that are available through the schedules. For example, the data show that state and local governments spent $2.3 billion for paper and allied products in 1987, compared with about $243 million in federal nondefense expenditures.

Although purchasing officials from most states and the local agencies we contacted want to have the option of using the federal supply schedules, several factors could significantly limit the benefits they cited. These factors include (1) state or local laws, ordinances, or policies that direct how or where state or local purchases can be made; (2) the unavailability of certain items or products through the federal supply schedules program; (3) the availability of lower prices or better terms and conditions on items obtained from other sources; and (4) the likelihood that these nonfederal governments would need to maintain the procurement capacity to continue using their other supply sources for items they do not purchase through the schedules.

State or local competitive bidding laws, ordinances, and policies; requirements to use state contracts; and preferences for special groups of vendors, such as local businesses, the disabled, or prisons, may direct how or where state or local purchases can be made. These laws, ordinances, and policies thus may limit the extent to which state or local agencies would be able, or would want, to use the federal supply schedules program. Because of this, the benefits cited by local procurement officials, such as the ability to obtain more competitive prices, a wider selection of goods, and reduced time and costs, may be smaller than expected.
In response to our survey of state purchasing officials, 28 of the 34 respondents who indicated that they would make purchases from the supply schedules said that some law, ordinance, or regulation would limit their use of the cooperative purchasing program. All four of the states we contacted had competitive bidding requirements for state agency procurements, and they generally mandated that state agencies use existing state contracts. All four states also had preference programs for particular vendor groups, such as local businesses, the disabled, and the prison industry. Although these types of requirements and preference programs would limit state and local use of the federal supply schedules, they could be changed in the future to allow greater use of the schedules by state and local governments.

According to the National Association of State Purchasing Officials, in an effort to obtain the lowest prices available, most state and local government procurement statutes, ordinances, and rules provide that procurements exceeding a specified dollar amount must be made through formal competition, with public notices, sealed bidding, and public bid opening. In its 1992 survey of states, the Association found that 46 states had statutes requiring the procurement of goods or services by competitive sealed bids. According to the survey, the dollar amount above which competitive solicitation was required varied widely among states, from $100 in one state to $50,000 in another; 17 states required competitive sealed bids for purchases exceeding $10,000, and 9 states required sealed bids for purchases exceeding $5,000.

The federal supply schedules program is considered competitive under the Competition in Contracting Act in that participation has been open to all responsible sources. Although some states have amended their statutes to exempt purchases made through the federal supply schedules program from the competitive bidding requirements of state law, this is not the case in all states. As of September 1996, more than half of the states reported still having restrictions that would limit their use of the federal supply schedules. In our survey, 27 of the 50 respondents indicated that state competitive bidding requirements would limit their states' use of the supply schedules program, and all four of the states included in our case studies said that they had bidding requirements that would limit their use of the program.

Because state competitive bidding statutes apply only to purchases that exceed specified thresholds, however, state and local governments might be able to use GSA's schedules program for purchases below these thresholds and for other limited purchases. For example, in its comments to GSA in response to the April 1995 Federal Register notice regarding GSA's plan to implement the cooperative purchasing program, Kentucky said that its state law requires state agencies to make aggregate purchases in excess of $5,000 through competitive sealed bids. Because of this requirement, Kentucky said that its agencies would be able to use the federal supply schedules only in instances where competitive bidding could not be used, such as when only one source of supply was available or when an agency requested a specific brand and no substitute was justifiable.
Similarly, Salt Lake City, Utah, commented that it could use the supply schedules only for small, sole-source, and emergency purchases. Comprehensive data are not readily available to estimate the share of state and local government purchases that must be made under competitive purchasing requirements. However, in its 1992 survey, the National Association of State Purchasing Officials estimated that 85 percent or more of state and local government expenditures resulted from competitive solicitation.

Another factor that could limit state or local governments' use of the federal cooperative purchasing program is a requirement to use statewide or local contracts. According to the National Association of State Purchasing Officials, all states and most local governments consolidate requirements and award contracts for the purchase of goods or services for multiple users to reduce the administrative costs associated with preparing and issuing solicitations on the same or similar items and with receiving, handling, and evaluating the responses. Although the use of these contracts may be optional for some state or local agencies, it may be mandatory for others. The 1992 National Association of State Purchasing Officials survey found that the extent to which states and local governments rely on statewide contracts varied; for example, state purchases through statewide contracts ranged from 5 percent to as much as 90 percent of total dollar volume.

Of the 50 respondents to our survey, 16 states indicated that they could use GSA's schedules program to procure items only if the items were not available through other state procurement arrangements, such as schedules. The four states included in our case studies also generally were required to use statewide contracts. For example, the New York State Office of General Services is the central procuring office for hundreds of New York state agencies. It annually awards about 2,100 contracts with an estimated purchasing value of $800 million. More than half of these contracts are also available for use by about 3,100 eligible nonstate agencies, including local governments, school districts, and fire districts; these agencies account for about 40 percent of the purchases made under the statewide contracts. According to the Purchasing Director, Office of General Services, New York state finance law requires state agencies to first consider using state contracts to acquire commodities. State contracts for services and technology are available for optional use, although they are developed specifically to address the needs of New York state agencies. In addition, the Purchasing Director explained that the Commissioner of the Office of General Services is authorized to approve state agencies' use of a contract let by the federal government; the Director said such approval would be the procedure used to enable a New York state agency to use a federal supply schedule made available under the cooperative purchasing program.

However, even though a state may require state agencies to use statewide contracts, exceptions may exist when agencies can demonstrate that they can obtain items elsewhere at a lower cost. In addition, the mandatory use of statewide contracts may not always apply.
The Purchasing Director for the West Virginia Procurement Division said that even though state agencies are generally required to use state contracts, if a state agency can document that it can procure goods or services at a lower price elsewhere, the Procurement Division will, upon request, grant a written waiver allowing the agency to do so. The Assistant City Manager for Charleston, West Virginia, said that the city makes its purchases using whatever methods or procedures will result in the lowest price, which could include using a state contract. The Chief of Purchasing for the Raleigh County Board of Education in West Virginia said that the board uses a combination of purchasing methods, including statewide contracts, its own contracts, and spot purchases. According to this official, the driving factor in determining which procurement method is used is obtaining the lowest price.

State and local laws, ordinances, or policies that provide for contracts to be awarded on the basis of factors other than best price or best conditions of sale could also limit the potential benefits of the cooperative purchasing program. These include laws that direct contracts to local businesses or to certain groups, such as prisons; preferences that favor local businesses over other businesses; and commitments to use cooperative contracts. The National Association of State Purchasing Officials found in its 1992 survey that 15 states had laws mandating a preference for in-state vendors, and an additional 16 states had laws favoring products produced in-state. In addition, it found that 45 states that award contracts to manufacturers required that sales and services be rendered through local dealers. Of the four states we contacted in our review, two had in-state vendor preference statutes. According to the Business Manager for the West Virginia Department of Transportation's Division of Highways, West Virginia's vendor preference law provides in-state vendors with up to a 5 percent price advantage over out-of-state vendors. In addition, if at least 75 percent of an out-of-state vendor's workforce is located within the state, the vendor is given a 2.5 percent price advantage. Similarly, according to the Chief of Procurement, Department of Administration, Montana's vendor preference law provides in-state vendors with a 3 percent price advantage over out-of-state vendors. She explained that the Montana statute also provides vendors a 5 percent price advantage for products produced in Montana. (The sketch following this section illustrates how such a percentage preference can affect a bid evaluation.)

Some states have laws that direct purchases to certain groups, such as the disabled or the prison system. Three of the four states included in our case studies had such preferences. For example, New York's priority system for making purchases requires state agencies to first determine whether an item or service is available from one of the state's established preferred sources, including Corcraft, New York State Department of Correctional Services, Division of Industries; the Industries for the Blind of New York State, Inc.; the New York State Industries for the Disabled; and the New York State Office of Mental Health. State law requires that purchases be made from one of the preferred sources when needed goods or services meeting the form, function, and utility requirements of the agency are available from those sources.
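To make the preference arithmetic concrete, the following minimal sketch shows one common way a percentage price advantage might be applied, by discounting the favored vendor's bid for evaluation purposes only. The mechanics and the bid amounts shown are illustrative assumptions; the statutes cited above do not specify evaluation details.

```python
# Illustrative sketch of a percentage vendor preference in bid evaluation.
# Assumption for illustration: the favored bid is discounted for evaluation
# only, and the contract is still paid at the actual bid price.

def evaluated_price(bid, preference_pct):
    """Discount a favored vendor's bid by the preference percentage."""
    return bid * (1 - preference_pct / 100)

in_state_bid = 10_000.00     # hypothetical bid qualifying for a 5 percent preference
out_of_state_bid = 9_700.00  # hypothetical bid with no preference

# Evaluated in-state price is $9,500, so the in-state vendor wins even
# though the government pays $300 more than the lowest actual bid.
if evaluated_price(in_state_bid, 5.0) <= out_of_state_bid:
    print(f"Award to in-state vendor at ${in_state_bid:,.2f}")
else:
    print(f"Award to out-of-state vendor at ${out_of_state_bid:,.2f}")
```

As the example suggests, a preference can cause an agency to pay more than the lowest offered price, which is one way such laws could limit the benefits of access to the schedules.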
Preferred-source requirements are not unique to New York: state officials told us that state agencies in California and West Virginia that purchase goods made by the state prison industry must attempt to purchase those goods from that industry before going to another source.

Even where no state law directs that purchases be made from certain groups, regardless of whether the price is competitive, state and local governments may prefer purchasing products and services from in-state or local vendors. Their reasons could include a need for customer support services and/or a desire to support the local economy. Of the 26 state and local agencies that provided information on their procurement practices, three state and four local agencies said that a need for customer support services or the desire to support the local economy affected their procurement decisions. At the state level, the West Virginia Department of Transportation's Business Manager said that when developing requests for bids, the Department assigns point values to such things as vendor warranty, local vendor servicing, and local availability of spare and repair parts, as well as to the bid price. An official of the University of California said that its campus system prefers to patronize local businesses in the communities where campuses are located because doing so helps support the local economy. At the local level, the Finance Director for the city of Missoula, Montana, said that all equipment the city purchases is from local sources because the city cannot afford to send equipment out of Missoula for repairs. In addition, the Purchasing Coordinator for the city of Elmira, New York, said that although contracts are awarded strictly on the basis of price, contract minimum requirements may stipulate that the vendor arrange for repair parts and servicing to be provided by dealers within 150 miles of the city.

The Director of Purchasing for the Puerto Rican government said that this government also has a practice of purchasing locally. Puerto Rico may find its potential use of the supply schedules similar to that of the Virgin Islands, which has been able to use the schedules since 1992. According to the Deputy Commissioner for the Virgin Islands Department of Property and Procurement, the Virgin Islands uses the supply schedules only for those items that its local vendors cannot supply, because territorial vendors complain to their local legislators if the government procures from businesses that are not on the island. As a result, most of the Virgin Islands' purchases are not made through the federal supply schedules program, according to the Deputy Commissioner.

Interstate and intrastate arrangements that state and local governments use to combine procurement needs and collectively procure items also have the potential to reduce the extent to which these governments procure certain items through the supply schedules program. These arrangements may require participating nonfederal governments to combine their needs for specific items for the purposes of soliciting bids and awarding contracts and to purchase those items through the resulting contracts. According to the Manager of Contracts and Administration for the Metropolitan Washington Council of Governments, these arrangements may result in lower prices than arrangements in which the needs of participants are not combined for the purpose of soliciting offers.
In its 1992 survey, the National Association of State Purchasing Officials found that 42 states had statutory authorization for entering into cooperative procurement agreements with different units of government, and 22 states had statutory authority to enter into cooperative procurement agreements with other states. In our discussions with procurement officials at 26 state and local agencies, 3 of the 4 states (Montana, New York, and West Virginia) indicated that they were members of cooperatives. In our nationwide survey, 30 of 50 respondents indicated that they used cooperative purchasing agreements with states, and 36 indicated that they used such agreements with local governments.

One example of a large-scale cooperative procurement arrangement is the National Financial Services Center's National Cooperative Purchasing Alliance, which is affiliated with the National Association of Counties and relies on county purchasing agents across the nation to both select and bid on products and services on behalf of local governments in the United States. One of the Center's programs currently uses the services of a number of purchasing entities across the country, including Fairfax, Virginia; Los Angeles, California; Orange, Florida; and Erie, New York. This program, which is in the early stages of development, has resulted in the award of one contract for office supplies, many of which may be available on federal supply schedules. Center officials said that they had not compared their cooperative purchasing contracts with those of GSA's supply schedules; however, they believed that their contracts were competitive with GSA's.

Cooperatives also exist at the regional level. For example, the Metropolitan Washington Council of Governments comprises 18 of the largest jurisdictions in and around the Washington, D.C., area, including Fairfax, Loudoun, and Prince William counties in Virginia; Prince George's and Montgomery counties in Maryland; and the District of Columbia. According to the Council's Manager of Contracts and Administration, as of September 1996, the Council had 20 or more cooperative solicitation contracts for the purchase of such items as fuel oil (heating and diesel), road salt, and antifreeze. Members who have pooled their demands for those products must then use those contracts to purchase them. The Manager said that items suitable for such cooperative solicitations include those, such as fuel oil, whose specifications are established by industry and for which pooled demand among the governments is great. This official said that the Council had not compared its cooperative contracts to GSA's supply schedule contracts but that, in general, he did not believe that having the option of using GSA's contracts would change local governments' purchasing practices.

Similarly, a representative from a cooperative initiated by the city of Fort Lauderdale and Broward County, Florida, said that the cooperative is able to obtain highly competitive bids through the pooling of members' needs. Currently, the cooperative has about 23 members. Members have pooled their demands to obtain such items and services as oils, greases, and lubricants; photographic film; diesel fuel; gasoline; office supplies; sod; brass valves and fittings; red clay for baseball fields; aggregate (for construction); field marking paint; mail presort services; athletic bleachers; paging services; uniforms; water testing; and trucks and vans.
Once a member agrees to participate in a contract, it must purchase through that contract. Although this representative said that the cooperative had not compared its contracts to GSA's supply schedule contracts, another cooperative representative said that the city of Coral Springs and the cooperative use some GSA schedules for benchmarking and price comparisons to determine whether local vendors are quoting reasonable prices. These representatives said that they would like the option of using GSA's schedules. One representative said that having the option of using GSA's schedules program would be convenient for making individual purchases that are sporadic in nature, for purchases where it would be too costly to solicit bids, or for times, such as during a natural disaster, when local vendors may not be able to supply city and county needs.

Although more than 4 million items are available through the federal supply schedules program, not all items needed by state and local governments would be available through the schedules. This could affect (1) whether state and local governments make purchases through the schedules program and (2) the extent to which these governments would realize benefits.

We asked 24 state and local agencies in California, New York, Montana, and West Virginia to provide invoices of recent purchases so that we could compare prices with those of similar items on GSA supply schedules. Of the 24 agencies, 16 provided documentation for 255 items that they indicated they would be interested in buying through the supply schedules program. Of the 255 items, GSA determined that 84 were not available. GSA was unable to determine whether 101 of the 255 items were available because the agencies provided insufficient information. The fact that not all goods and services needed by state and local governments are available through the schedules program is not surprising, because GSA operates the program to meet federal, not state, needs.

State and local government agencies in California, Montana, New York, and West Virginia said that they were interested in buying a wide variety of items through the schedules program, including computers and computer hardware, office equipment and supplies, laboratory equipment, airline tickets, furniture, ammunition, asphalt, prestressed concrete beams and culverts, road salt, paint, diesel fuel, tires, automobiles, and heavy road maintenance equipment. Of the 154 items for which GSA could make an availability determination, 70 were available through the schedules program. Whether this proportion would hold for all state and local government needs is not known, because our sample was not designed to represent all potential users of the federal cooperative purchasing program. Items that were available included selected computer printers, certain types of computers, certain types of copiers, lawn mowers, de-icing road salt, and certain kinds of office supplies. Items that were not available included certain specific types of computers and computer hardware, some airline tickets, ammunition, automobiles, certain specific office equipment and supplies, asphalt, and diesel fuel. (App. IV contains a summary of GSA's determinations on the availability and pricing of these items on federal supply schedules.)
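The arithmetic behind these availability figures can be summarized in a short sketch; all counts are taken from the case-study results reported above.

```python
# Recap of the item-availability arithmetic from the 16 case-study agencies.
total_items = 255      # items documented by the agencies
undeterminable = 101   # insufficient information for GSA to decide
not_available = 84     # determined not to be on any schedule

determinable = total_items - undeterminable   # 154 items GSA could evaluate
available = determinable - not_available      # 70 items found on the schedules

print(f"Items GSA could evaluate: {determinable}")
print(f"Items available through the schedules: {available}")
```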
According to the Director of GSA’s Acquisition Management Center, these items, as well as other items that state and local government agencies may be interested in purchasing, may not be available through the schedules program because the program is not intended to supply all federal agencies’ needs and is not designed to meet state or local government agencies’ needs. Rather, the program is intended to facilitate federal agencies’ purchases of commercially available items that are purchased frequently enough to warrant having them available through the schedules program. According to GSA officials, GSA will not be changing its basis for determining what items are available through the schedules program in order to accommodate state or local government agencies’ needs. U. S. Department of Commerce data suggest that those items for which state and local governments spend the most money are not available through the federal supply schedules. For example, Commerce data for 1987 show state and local governments spending their largest amounts of money on new construction, maintenance repair and construction, and electric utilities services—none of which are available through the schedules program. Of the items that accounted for the next two largest amounts of funds—other business and professional services and petroleum refining and related products—only a small portion of the former and none of the latter are available through the schedules. Experience among some law enforcement agencies that have been able to purchase items through the federal supply programs since 1994 also shows that some items are not available through the schedules program. Law enforcement agencies may make purchases through the program if items purchased are suitable for counter-drug activities. A North Carolina official said that some items that state or local law enforcement agencies want to purchase are on the schedules, while some are not. For example, while purchasing a portable thermal imaging unit suitable for use on helicopters, the state found that some of the components for the system were available through GSA’s supply schedules, and some were not. The state was able to purchase part of the system through the schedules program and obtained competitive bids for the remainder of the system. The North Carolina official said that the law enforcement agency that purchased the system was able to obtain the entire system for $90,000. Had GSA’s schedules not been available for the agency to obtain components of the system, he estimated that the system would have cost an additional $15,000. Since GSA’s policy is that it will not make items available if doing so would be contrary to the interests of its principal customers, which are federal agencies, state and local agencies may continue to find some products unavailable to them. In some cases, GSA may not make all schedule items available; and in other cases, the schedules may not include items that these nonfederal governments need. For example, GSA’s Federal Register notice proposed excluding the pharmaceutical schedule and one medical equipment and supply schedule from the cooperative purchasing program. GSA also does not intend to make its airline or fire fighting vehicles schedules available through the cooperative purchasing program because of its concern that doing so would lead to higher federal prices or adverse effects on businesses. (See. ch. 3.) 
The potential effect of the cooperative purchasing program on state and local governments will also be limited to the extent that items available through the supply schedules carry higher prices, or less desirable servicing or sales conditions, than items available from other sources. Our case studies conducted as part of this review and procurement work we have done previously demonstrate that GSA does not always have the lowest price or the most favorable sales conditions.

As part of our review, we asked GSA to compare its schedules' offerings with 255 items recently purchased by the 16 state and local governments included in our case studies. GSA found that although some items were more favorably priced through the schedules program, others were not. Of the 70 items that state and local governments said they would be interested in purchasing and that are available through the schedules program, 20 had been purchased by the state and local governments at lower prices or with more attractive sales conditions than those of the schedules program, 47 could have been purchased at lower prices through the schedules program, and 3 could have been purchased at the same price. For example, the Raleigh County, West Virginia, Board of Education purchased a Hewlett Packard Laserjet computer printer for $485; GSA's schedule price was $446, or 8.04 percent lower. The City of Mountain View, California, purchased another type of computer printer (Laserwriter 16/600) for $2,046; the GSA schedule price was $2,104, or 2.83 percent higher. Fairmont, West Virginia, State College purchased a computer system upgrade for $510.75; GSA's schedule price was $394, or 22.86 percent lower. The State of West Virginia purchased road de-icing salt for $36.90 per delivered ton; the GSA schedule price was $42.75 per delivered ton, or about 15.9 percent higher. (The sketch following this section shows how these percentage differences are computed.)

According to state purchasing officials, GSA may not always have the lowest price. Of the 50 respondents to our survey, 33 indicated that they had analyzed some GSA schedule prices. Of those 33 respondents, 13, or about 39 percent, indicated that state prices were generally lower than those available through GSA for the items they compared, while 2, or about 6 percent, indicated that GSA generally had lower prices for the items they compared. In addition, 18 of the 33 respondents, or about 55 percent, said that some state prices were higher and some were lower than GSA's prices for the items they compared. We did not ask state purchasing officials to identify any specific items or prices they compared, nor did we verify their responses.

Some officials we interviewed in the four case-study states also indicated that GSA's schedule prices were not always lower than their prices. For example, a purchasing director in the New York State Office of General Services said that state contract prices are frequently lower than GSA's schedule prices. Also, the Chief of Purchasing for the Raleigh County, West Virginia, Board of Education said that at times he has compared GSA's schedule prices to prices the board can obtain locally and found that GSA's prices have generally been higher. For this reason, he said that opening the federal supply schedules to state and local governments will probably have little effect, even on small local businesses.
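The percentage differences cited above appear to be computed relative to the price the state or local government actually paid; that interpretation, inferred from the reported figures, is all the following minimal sketch assumes.

```python
# Sketch of the price-comparison arithmetic. A negative result means the
# GSA schedule price was lower than the price the government paid.

def pct_difference(price_paid, gsa_price):
    """Percent by which the GSA schedule price differs from the price paid."""
    return (gsa_price - price_paid) / price_paid * 100

print(round(pct_difference(485.00, 446.00), 2))    # -8.04  (GSA lower)
print(round(pct_difference(2046.00, 2104.00), 2))  #  2.83  (GSA higher)
print(round(pct_difference(510.75, 394.00), 2))    # -22.86 (GSA lower)
print(round(pct_difference(36.90, 42.75), 2))      #  15.85 (about 15.9, GSA higher)
```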
The Raleigh County Chief of Purchasing added that state and local buyers are already seeking the best match between the product or service they need or want and the lowest price, and that competitively bid prices are generally lower than the prices GSA obtains.

States we contacted that are participating in GSA's law enforcement schedules program have had similar experiences. According to a North Carolina official, prices are not always lower through GSA's schedules program, particularly with the administrative fee that is included in schedule prices to pay for administrative costs. He said that law enforcement agencies can, at times, find less expensive items through state contracts, and because the schedule prices are not always better, making the supply schedules available would not likely put any state or local firms out of business. He also said that law enforcement agencies frequently have many reasons aside from price to purchase locally, such as the desire to support local businesses. According to a West Virginia official, West Virginia has found that GSA's prices are not always the best prices. She said that statewide contracts or department contracts frequently are competitive with GSA's prices. She explained, for example, that the state of West Virginia and GSA both have contracts with the same manufacturer for light bar assemblies, the racks used to mount lights on top of police cruisers, and the state's contract had a per-item cost about $100 to $125 less than GSA's list price.

However, some law enforcement agencies have realized savings through the federal supply schedules. For example, according to the North Carolina Alcohol Law Enforcement's Deputy Director for Purchasing, the agency has purchased radios as well as a camera through the program. He said that the agency could not have afforded the camera except at the price available through the schedules program. An official in California's Counter Drug Activities Procurement Program said that of the $360,000 in purchases made through the program, an estimated $60,000 had been saved. This official said that departments can save about 33 percent off the prices of such items as cameras and night vision goggles.

Our previous work has also shown that GSA's prices may not always be the best available to state or local governments. In 1993, we reported that about half of the top-selling GSA multiple-award schedule items we examined were less expensive when offered to the general public or certain state governments than they were through the program.

GSA has pointed out that a number of factors must be considered when making price comparisons between the schedules program and other supply sources. One is that federal purchases must comply with all federal procurement laws. For example, the Raleigh County, West Virginia, Board of Education purchased a computer system for $1,455, while the GSA price for a comparable system was $1,687; GSA said, however, that the lower-priced system included a particular computer monitor that GSA could not offer because federal acquisition of the item would not comply with federal international trade law. Another factor GSA cites is the terms of sale. To illustrate, GSA points out that all prices on its office supplies schedule provide for delivery to the customer's desk within 24 hours of purchase, while state or local prices often require customer pick-up.
GSA also pointed out that schedule prices represent ceiling prices and that customers are encouraged and permitted to contact schedule contractors to negotiate lower prices when making a purchase.

The extent to which state and local governments could reduce administrative costs through a cooperative purchasing program is unclear. Data compiled by the Center for Advanced Purchasing Studies in Tempe, Arizona, indicate that the costs of procurement, and therefore any costs that state or local governments may save by purchasing through the schedules program, vary considerably. For example, the Center's 1994 studies on purchasing performance benchmarks for state, county, and municipal governments show that the cost to procure a dollar's worth of goods or services varied widely, ranging from fractions of a cent to 4 cents of administrative costs per dollar of procurement. (We have not verified these data or assessed the reasons for this variability.) As chapter 1 notes, GSA charges a 1-percent fee for purchases from schedule vendors, and VA charges a 1/2-percent fee. Whether this fee would be more or less than the procurement expenses that nonfederal governments would otherwise incur is unknown. Further, since nonfederal governments would not likely be able to use the cooperative purchasing program to meet all their procurement needs, these governments would continue to have some administrative and personnel expenses for procurement purposes. Moreover, the extent to which they could reduce their administrative costs is also unknown.

Allowing Indian tribal governments to use the federal supply schedules program appears unlikely to have a substantial effect on many of these governments because many already have the authority to use not only GSA's supply schedules program but its other supply programs as well. The Indian Self-Determination and Education Assistance Act of 1975, as amended, gives Indian tribes the authority to contract with the federal government to operate programs serving their tribal members, as opposed to having these programs administered by BIA in the Department of the Interior and the Indian Health Service in the Department of Health and Human Services. After entering into an agreement to assume federal responsibilities, tribal governments receive the authority to purchase items from federal supply schedules or from GSA's stock program, which has a range of items available in a nationwide network of distribution centers. Since section 1555 of FASA does not provide these tribal governments with any additional authority, the section should have little or no effect on the tribal governments that have contracted with the federal government to operate programs serving their members. In fact, by allowing Indian tribal governments to purchase from GSA customer service centers, the 1975 act provides these governments with broader access to GSA procurement programs than would section 1555, which would allow nonfederal users to make purchases only from federal supply schedules. According to BIA officials, approximately 70 percent of BIA's programs are operated by tribes or tribal organizations. However, BIA and GSA do not maintain data on the extent to which tribal governments use GSA's programs.
According to BIA officials, although BIA may help a tribal government that has assumed responsibility for federal programs set up an account with GSA, BIA is not involved in any transactions between the tribal government and the GSA schedule vendors. According to an official in GSA's customer support center, although GSA is aware that Indian tribal governments have purchased items through its programs, including its stock programs, GSA does not have data to measure total sales to Indian tribal governments or to indicate what products were purchased.

Officials from three tribal governments we contacted confirmed that their governments use GSA's supply programs, but they said that their reliance on the programs varies because of the availability of items and the competitiveness of supply schedule prices. These officials said that they could not readily identify the share of their total purchases made through the different GSA supply programs. Even so, they stated that in certain cases, the items and prices available through the supply programs can be financially attractive. One tribal official considered access to the schedules program important, citing the use of GSA's airline schedule as an example of its benefits. (As noted earlier, GSA does not plan to make this schedule available to state and local governments through the federal cooperative purchasing program.) Officials from the other two tribal governments said that they may use GSA supply programs if needed products are available and if the prices are better than prices offered by other suppliers. However, they said that their use of the programs varied widely. The purchasing officer for one tribal government said that GSA's supply programs, including the schedules program, account for about three-quarters of the tribal government's total purchases. In contrast, an official for another tribal government said that this tribe's use of GSA's supply programs, particularly the stock program, was limited to about 5 percent of the tribe's purchases because GSA frequently did not have needed items in stock. Both officials noted that these were only rough estimates because they did not have records that would provide a breakdown of sales by source.

Tribal governments that have not entered into an agreement under the 1975 act could gain access to GSA's supply schedules under the federal cooperative purchasing program in FASA. In practice, however, because BIA or the Indian Health Service remains responsible for providing services to these tribal governments, any effect of this new access may be limited. Since the federal agencies continue to be responsible for providing services, they would have to purchase the goods and services needed to support those services; thus, the tribal government may not need to purchase many items.

GSA officials we contacted believe that if sales made through the federal supply schedules program increase, a net reduction in prices paid by federal agencies could result from the agencies' having a stronger negotiating position and from a reduction in the administrative fee. Procurement officials from the Departments of Health and Human Services, the Interior, and Justice said that they had not assessed the potential effects of cooperative purchasing.
The Department of Defense did assess the potential effects of cooperative purchasing on pharmaceutical prices, but Defense procurement officials told us that the Department had not conducted a comprehensive assessment of the potential effects on other types of products. Officials in these departments said that procurement actions are decentralized in their departments and that detailed data on transactions are not maintained centrally. Because procurement actions are handled at lower levels throughout their agencies, they believed that the effects of price changes at those levels would be small. VA and the Department of Defense have expressed concern about a possible price increase by pharmaceutical companies if drugs were made available to state and local governments through the schedules program.

GSA believes that an increase in the use of, and in the number of sales made through, the federal supply schedules as a result of the federal cooperative purchasing program would have the potential to reduce the costs of federal purchases. However, because GSA already tries to obtain the "best customer price" on contracts, the room for further price reductions may be limited, even though increased sales volume may encourage some GSA vendors to negotiate lower schedule prices.

According to the Acquisition Management Center's Director, the Federal Supply Service is mandated to become a nonprofit, self-sustaining agency. A 1-percent charge on sales made by or through the Federal Supply Service is assessed to purchasers of goods or services. The provision to assess this fee is included in GSA's contracts with supply schedule vendors; the vendors collect the fee as part of their sales price and transfer it to GSA, which uses it to offset its operating costs. The Director said that fiscal year 1997 is to be the first year in which the fees assessed and collected will be sufficient to sustain the Federal Supply Service's operations. According to the Director, if state and local agencies were to make purchases through the supply schedules program, the additional sales could ultimately result in GSA's lowering the 1-percent charge, because revenues would be more than sufficient to pay for GSA's administrative costs. The Director said that GSA will be monitoring the extent to which revenues exceed its costs to determine whether it may need to renegotiate contracts with its vendors to reduce the fee. GSA officials also said that the cooperative purchasing program could benefit the federal government in another way: if the program results in increased sales, GSA may be able to negotiate lower prices with its vendors for items available through the supply schedules, because vendors may be willing to reduce prices in return for the increased volume of sales.

As noted above, procurement officials from the Departments of Health and Human Services, the Interior, and Justice said that their departments had not conducted a formal assessment of the possible effects that cooperative purchasing might have on their budgets or purchases, and Defense officials said that the Department had assessed only the potential effects on pharmaceutical purchases. Officials in these departments commented that such an analysis would be at best difficult, if not impossible, to conduct.
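To make the fee mechanics described above concrete, the following minimal sketch works through the 1-percent charge on a hypothetical purchase volume and compares it with the administrative-cost range reported by the Center for Advanced Purchasing Studies. The purchase volume and the comparison value are illustrative assumptions, not program data.

```python
# Minimal sketch of the 1-percent charge described above.
# Assumption: the fee is computed as 1 percent of the schedule sale price;
# the purchase volume below is hypothetical.

FEE_RATE = 0.01  # GSA's 1-percent charge (VA charges 1/2 percent)

def fee_remitted(sale_price):
    """Portion of a schedule sale the vendor collects and transfers to GSA."""
    return sale_price * FEE_RATE

annual_purchases = 5_000_000.00  # hypothetical schedule purchases by one government
print(f"Fee paid via schedule prices: ${fee_remitted(annual_purchases):,.2f}")

# Whether the fee is a net cost or a net saving depends on where a government
# falls in the reported range of administrative costs, under $0.01 to $0.04
# per dollar of procurement.
own_admin_cost_per_dollar = 0.03  # hypothetical value within the reported range
print("Schedule use saves on administration"
      if own_admin_cost_per_dollar > FEE_RATE
      else "Own procurement is cheaper to administer")
```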
In addition, the procurement officials at these departments said that they did not have sufficient data on their departments' use of the schedules program because ordering authority was dispersed. The departments authorize program managers to manage their budgets and purchase needed items, but they do not maintain detailed, centralized data on all items purchased by their components. An official in the Department of the Interior's procurement office noted, for example, that Interior had over 900 authorized purchasing officials working throughout its bureaus and offices and that these officials were not required to report all the specific items they purchased. This official noted that the new purchasing card program would compound the data limitations; Interior had issued over 14,000 purchasing cards, and the official concluded that it would not be possible to assess the effects of cooperative purchasing on Interior with the limited available data. Similarly, officials at the Departments of Defense and Health and Human Services noted that their departments did not maintain central data on the individual items purchased by their components. One official at the Department of Defense told us that the department had maintained such records until about 10 years ago but that maintaining systematic data is not currently feasible because of how purchases are made through the schedules program.

Although noting that data limitations prevented them from developing definitive predictions, some procurement officials identified several reasons why they believed the cooperative purchasing program would be unlikely to have a readily noticeable effect on their departments' purchases. One reason was that purchasing authority was spread throughout the departments. Because of this dispersed authority, procurements are generally smaller in scale than major, departmentwide procurements. Thus, if the cooperative purchasing program did affect prices paid by their departments' components, the effect might not be large enough to be observed in any particular purchase.

Some officials also noted that many of the industries included in the schedules program are competitive industries in which other vendors would have an incentive to underbid any vendor seeking to increase prices as a consequence of cooperative purchasing. One official in the Department of the Interior, for example, said that buyers seek to pay the lowest price available for an item; if the schedule price is the lowest price for a particular item, other buyers, including business buyers, would seek to pay that price. An official in the Department of Defense also said that the Department was unlikely to see any sizeable effect from cooperative purchasing because the items on the federal supply schedules are commercial items with many buyers and sellers, so a shift in how any particular group of buyers operates (such as state governments using the federal supply schedules rather than their own procurement processes) would not necessarily be noticeable to other buyers, such as the Department of Defense.
Defense officials further noted that since use of the multiple-award schedules is not mandatory for the Department, and since any departmental component may purchase items through contracts negotiated by any other component, the Department would be less likely to experience substantial effects from wider use of the multiple-award schedules under a cooperative purchasing program.

As discussed in chapter 1, GSA officials have stated that GSA would not open up a schedule if it believed that doing so would negatively affect the federal government. Before publishing the April 1995 Federal Register notice, GSA was told by VA that opening up the pharmaceutical schedule and one medical supply and equipment schedule might result in increased costs to VA. On the basis of VA's recommendation, GSA announced in the Federal Register that it proposed to exclude the two schedules from the program. After the notice was published, the Department of Defense notified GSA that it concurred with GSA's proposal to exclude these two schedules because of the potential for increased federal prices. Also after the notice was published, a GSA official said, discussions were held with airline companies, during which the companies indicated that if nonfederal governments were able to use the airline schedule, they might raise their schedule prices. This GSA official said that because the estimated cost to the federal government of increased airline fares could be substantial, GSA does not plan to open this schedule for state and local use. GSA's Acquisition Management Center Director also said that GSA does not plan to open up the schedule containing fire fighting vehicles because of the potential negative effect this may have.

In their written comments on a draft of this report, GSA and VA agreed that many factors make it difficult to definitively assess the effects of the cooperative purchasing program on federal and nonfederal governments. In its comments, the National Association of State Purchasing Officials agreed that opening the use of federal schedules has the potential to create a positive effect on state and local governments. The Association further noted, however, that there were also potential areas of concern. For example, it noted that there could be a perception among local contractors, particularly small businesses, of diminished opportunities to bid for state and local government contracts. It also noted that in some circumstances the Federal Supply Schedule contract will not have the lowest price and said that in such cases the current system of multiple contracts helps to ensure that the most competitive prices are obtained. Finally, the Association pointed to several conditions, in addition to those we cited, that could limit use of or benefits from the cooperative purchasing program or that could cause difficulties for nonfederal governments. These conditions included mandatory contract terms or restrictions required in many state and local procurement contracts, which schedule contractors might have to agree to abide by, and the possibility that reliance on federal contracts could adversely affect some nonprofit, nongovernmental entities, such as charities, schools, and hospitals, that now have access to state contracts in some states. These types of organizations would not be eligible to use federal schedules under cooperative purchasing.

The potential effect of cooperative purchasing on industry, including small businesses and local dealers, is likely to vary.
Department of Commerce data on industry sales suggest that a number of industries that supply large portions of their output to state and local governments will not be affected at all, because the services or goods they provide are not available through the schedules program. The data also show that the extent of the effects on other industries is likely to vary because of the differing portions of their output that are sold to state and local governments.

Businesses we contacted also differed in their expectations of the potential effects. Some state and local contractors believe that cooperative purchasing will have a positive effect by increasing their sales and customer bases. On the other hand, some fear negative effects in the form of business lost to GSA vendors if the program were implemented. Also, certain industries, including medical supplies and equipment, heavy equipment, and airlines, have expressed concern that they may be negatively affected by the cooperative purchasing program; the effects they cite include a potential for reduced profits and decreased customer support. Because of these potential adverse effects, GSA plans to exclude some schedules that contain those industries' goods or services. Other state and local contractors do not foresee any effect on their business, citing the unique specifications of the products they sell or their ability to offer competitive prices as the reasons they would not be affected. Finally, some contractors did not know how cooperative purchasing would affect them, citing uncertainties about how the program would be carried out and the potential for both gains and losses. Reflecting this diversity of views, associations representing industry have taken a range of positions on the program. However, these associations generally did not provide conclusive data that would support a prediction of the effects of cooperative purchasing.

Department of Commerce data on interindustry relationships for 1987, the most recent data available, provide a broad perspective on the extent to which different industry groups might be affected by the cooperative purchasing program. These data suggest that the effects are likely to vary among industries. Some industries that supply large portions of their output to state and local governments, such as construction and service industries, provide goods or services that generally are not available through the schedules program. Other industries that provide relatively large portions of their output to state and local governments provide products that generally are available on the schedules.

According to Commerce's data, few industries that supply goods to state and local governments rely on these governments for a large share of their sales. Only 28 of the 89 industries identified in the Commerce data supplied more than 3 percent of their total industry output to state and local governments, and of these 28 industries, only 14 supplied goods or services that are available through the schedules program. However, these data are national averages for broad industry groups, and particular firms, specific products, or geographical areas could rely much more heavily on state and local purchases than these figures suggest.
According to Commerce's data, several of the industries that provided a relatively large share (6 percent or more) of their total output to state and local governments are not likely to be affected much, if at all, by cooperative purchasing, because the output they supply to these governments was generally not available through the schedules program. As table 3.1 shows, maintenance and repair construction, new construction, electric utility services, petroleum refining products, computer and data processing services, other printing and publishing services, and railroads and related services each supplied 6 percent or more of their output to state and local governments, but the types of output provided by these seven industries were not available through the federal schedules program as of fiscal year 1996. In contrast, four industries that supplied 6 percent or more of their output to state and local governments produce output that was available through the schedules program: ophthalmic and photographic equipment; drugs; miscellaneous manufactured products, such as signs, pens, mechanical pencils, and hard surface floor coverings; and farm, construction, and mining machinery. GSA has a photographic equipment and supplies schedule; a construction and highway maintenance schedule; several material handling equipment schedules containing such items as forklifts; office supply schedules; and a resilient flooring schedule. This could suggest that cooperative purchasing may have more of an effect on those four industries.

An additional 17 industries supplied over 3 percent, but less than 6 percent, of their output to state and local governments. These industries include furniture and fixtures, scientific equipment, industrial chemicals, computer and office equipment, and electrical equipment. Some types of computers, office equipment, and office furniture are sold in high volumes through the schedules program and, as noted in chapter 2, are products that state and local government purchasing officials would be interested in having access to through the schedules program. The remainder of the industry groups included in the national statistics sold less than 3.1 percent of their goods to state and local governments. These include various machinery industries (e.g., metalworking and electrical equipment); transportation-related equipment (e.g., engines and turbines, aircraft, and other transportation equipment, such as ships and railroad equipment); and a wide range of other services or products.

Although these data provide an indication of the potential effect of cooperative purchasing on industries, the magnitude of the effect on industries within specific geographical areas could be larger or smaller than the national data suggest. In addition, the size of the effects on specific suppliers or subindustries could be larger or smaller than the averages for the industry groups included in table 3.1. For example, while the national data indicate that 3.6 percent of computer and office equipment sales could potentially be affected by the federal cooperative purchasing program, effects could vary significantly among office equipment suppliers depending on the locations of these firms, the types of office equipment they sell, and the importance of state and local governments as their customers.
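The grouping used in this analysis can be expressed as a simple classification rule; the sketch below applies the 3 and 6 percent thresholds from the Commerce data. Only the 3.6 percent figure for computer and office equipment comes from the text above; the other shares are hypothetical placeholders, not Commerce figures.

```python
# Illustrative sketch of the industry grouping used above: industries
# classified by the share of output sold to state and local governments.

def exposure(share_pct):
    """Classify an industry by its state and local share of output."""
    if share_pct >= 6.0:
        return "relatively large share (6 percent or more)"
    if share_pct > 3.0:
        return "moderate share (over 3, under 6 percent)"
    return "small share (about 3 percent or less)"

industries = {
    "new construction": 8.0,               # hypothetical share; not on schedules
    "drugs": 6.5,                          # hypothetical share; on schedules
    "computer and office equipment": 3.6,  # share reported in the text
    "engines and turbines": 1.0,           # hypothetical share
}

for name, share in industries.items():
    print(f"{name}: {exposure(share)}")
```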
A discussion of the potential effect of the federal cooperative purchasing program on individual businesses follows.

Representatives of 22 of the 59 state or local government contractors we contacted said that the cooperative purchasing program would have a positive effect on their businesses, although they provided no data to support their views. Of these 22 businesses, 11 said they were small businesses. The 22 businesses primarily sell computer equipment, furniture, photographic equipment and supplies, and office equipment, including copying machines, all of which are available through the schedules program. A majority of these businesses (15 of the 22) are either GSA vendors or dealers for GSA vendors. Representatives from these businesses said that allowing nonfederal governments access to the federal supply schedules would increase their sales, profits, customer base, or exposure to potential additional customers or could reduce the administrative time and effort associated with state or local governments' competitive bidding processes. For example, nine state or local government contractors that supply photographic equipment and supplies, office equipment, or furniture said that opening the federal supply schedules to state and local governments would increase the number of buyers using the schedules. Because these contractors are also GSA vendors or dealers for GSA vendors, most noted that their businesses could expand their current customer bases, which would ultimately benefit their businesses.

Further examples of businesses that perceived potential benefits include two contractors that sell office equipment. One contractor in Virginia, also a GSA vendor of office equipment, has a nationwide network of dealers that provides sales and service support for the products it sells; an official for this contractor said that all dealers in its network would be able to participate in sales to nonfederal agencies if the cooperative purchasing program were implemented. Another contractor, located in New York, told us that cooperative purchasing would increase its revenues from product sales and servicing as additional customers purchased products off the schedule. A third contractor, also located in New York, that sells office equipment said that it does not fear competition from GSA's vendors because there would be enough buyers in the marketplace to allow it to compete in a larger market.

Finally, two businesses that sell heavy equipment both through GSA schedules and to state and local governments believed they would benefit from cooperative purchasing. One of the firms, a Georgia seller of forklifts, said that under cooperative purchasing it would not have to bid separately on state and local contracts and that it uses the same procedures and dealership network regardless of whether the purchasing agency is federal or nonfederal. According to the company, its sales to governmental agencies are about 2 percent of its total sales. The other company, located in New Jersey, represents 13 different manufacturers of lawn and garden equipment and said that its contracts with GSA, state, and local governments are essentially identical and provide the same sales conditions. Because the products it sells rely little on a dealership network, this company believed that making sales only through the schedules would benefit the company, the manufacturers it represents, and nonfederal governments.
The contracting officers for some of GSA's federal supply schedules, including the telecommunications equipment, office furniture, copying equipment, microcomputer, and office supply schedules, said that they expected that companies providing supplies through these schedules would benefit from the cooperative purchasing program. For example, the contracting officer for the telecommunications schedule said that this schedule should be opened to nonfederal users because, in his opinion, GSA, the contractors, and state and local governments would all benefit. He said GSA would benefit because its vendors would be selling to a broader market, thereby increasing sales, which should lower prices further in the future, and state and local governments would benefit by saving time and money in their purchases. Similarly, the contracting officer for the office supply schedule said that GSA would benefit because its contractors would be able to sell to a broader market, thereby increasing sales and future price reductions, and because GSA would receive the 1-percent fee to cover its costs. In his opinion, the office supply schedule would likely be one of the better schedules to open to state and local governments because the manufacturers currently on the schedule must be able to supply nationwide and because the GSA vendors include five large office supply companies. He explained that upon receipt of an order, the companies contact their warehouses and the order is immediately shipped to the customer for "next-day delivery." This contracting officer said that he had heard of no concerns on the part of the contractors about opening this schedule to state and local governments. The other three contracting officers similarly said that they had heard of no concerns from their respective contractors, including microcomputer, systems furniture, and copying equipment contractors.

Some of the state and local contractors we contacted said that the cooperative purchasing program could have a negative effect on them. In addition, the medical equipment and supplies, airline, and heavy equipment industries have expressed concern about the adverse effects cooperative purchasing may have on them. Because of the possible adverse effects cited by these industries, including a potential for reduced profits and the resulting possibility of increased prices, GSA plans to exclude, or is considering excluding, the schedules that contain these industries' equipment or services.

Of the 59 state and local contractors we contacted, 10 said that the cooperative purchasing program may have a negative effect on their businesses. Of these 10 contractors, 7 said they were small businesses. These contractors supply state and local governments with furniture, photographic equipment, computer equipment, paper products, paint, and heavy equipment. Almost all of them said they could lose business to GSA vendors because state and local governments would have access to the federal supply schedules under the cooperative purchasing program. For example, a small paper products distributor and a small computer equipment distributor in West Virginia said that their companies would lose business because, if the federal supply schedules were opened to nonfederal agencies, agencies could purchase directly from manufacturers rather than from their companies.
Also, a representative from a furniture store in West Virginia said that his company buys products from manufacturers and then sells them to state and local governments at a retail price. If nonfederal governments were able to use the federal supply schedules for furniture, his company would not be able to compete with the manufacturers’ prices. A paint manufacturer in Montana was also concerned about negative effects on his business and on the customer. He explained that he believes decentralized purchasing is better because the needs of a local government entity are not the same as those of the federal government. He has seen that local governments often do not want to use products that they can obtain through state contracts because the terms of the state contract will not meet their needs. However, purchasing agents are likely to use the GSA schedules because doing so is easier than going through another procurement process. Thus, he could lose sales to GSA vendors, and the customer could get an unsuitable product.

A different concern was expressed by a representative from a small woman-owned company in California that supplies products such as reflective sheeting to the California Department of Transportation. The representative said that the company would lose sales if state and local agencies had access to the federal supply schedules because many of the state contracts the company had been awarded through competitive bidding were based on its status as a small, minority-owned business in California.

Several industries are opposed to GSA’s planned cooperative purchasing program because they believe the program will have an adverse effect on them. These industries are represented on some of the 13 schedules managed by VA and on 3 of the 133 schedules managed by GSA. For example, the medical equipment and supply industries fear that cooperative purchasing will disrupt their distribution networks or cause them to increase prices to the federal government. The airline industry is concerned about loss of revenues from a greater use of discounted fares. The heavy equipment industry is concerned about negative effects on dealers who currently service the state and local government market. VA has already recommended to GSA that 2 of the 13 schedules it manages—the pharmaceutical schedule and one medical equipment and supply schedule—be excluded from cooperative purchasing. In its April 1995 Federal Register notice, GSA proposed excluding those two schedules, based on VA’s recommendation. Since that notice, GSA officials told us that they plan to exclude the airline schedule and the schedule containing fire fighting vehicles. GSA has not, however, made any final decisions on excluding other schedules. The Director of GSA’s Acquisition Management Center said that GSA intends to exclude those schedules or portions of schedules for which significant controversy exists about the potential adverse effects.

Several associations, manufacturers, and dealers raised concerns to GSA and us about the potential adverse effects cooperative purchasing of medical equipment and supplies may have on their companies. They cited a disruption of the distribution network, reduction in profits, and an increase in federal supply schedule prices as possible effects.
An association representing public hospital pharmacies, on the other hand, pointed to potential savings and diminished needs for government subsidies as possible benefits of cooperative purchasing.

The Health Industry Manufacturers Association and the Health Industry Group Purchasing Association, representing medical equipment and supply manufacturers and purchasing organizations, oppose cooperative purchasing. These associations sponsored individual studies to determine the impact of opening the federal supply schedules. Both studies concluded that opening the federal supply schedules would decrease the federal government discount and increase the cost of medical and surgical equipment and supplies. According to the contractor who conducted these studies, large, infrequently purchased, expensive equipment with long life cycles may offer little opportunity for discounting. The medical and surgical supply industry, however, is more complex. Because it includes a broad range of products and categories with varying discounts, the contractor found that some individual product lines can be discounted significantly while others cannot. The conclusions in these studies are based on the assumption that the medical equipment and supply industry would react to the cooperative purchasing program in the same manner as the pharmaceutical industry. We did not verify the data or analyses contained in these studies.

The Health Industry Distributors Association, which represents over 700 companies, many of which are small businesses, opposes the cooperative purchasing program because public hospitals could not select their own distributors to meet their needs and because health care providers and distributors would incur an increased administrative and recordkeeping burden. Manufacturers expressed the same concerns. The manufacturers we spoke with said that the distribution networks for federal and state or local government customers are different. They said that federal government orders are usually shipped directly from the manufacturer to the buyer, while sales to state and local governments are generally handled through a local dealer. According to manufacturers, cooperative purchasing could put local dealers who rely heavily on sales to state and local governments out of business to the extent that the manufacturers would ship directly to state and local governments. With sales no longer being handled by local dealers, manufacturers also were concerned about the increased administrative burden that would be placed on VA vendors if they had to fill orders for state and local governments. According to one manufacturer, making the schedules available to state and local governments could increase this burden to the point where he would have to consider reducing the products he sold on the schedule or raising prices.

In contrast, the Public Hospital Pharmacy Coalition, representing hospitals owned or funded by state or local governments, supports cooperative purchasing because it anticipates lower prices and reduced administrative expenses for eligible hospitals. Noting that public hospitals rely heavily on government payers and subsidies, the coalition said that cost reductions would lessen the hospitals’ dependence on state and local governments.
Officials at VA’s National Acquisition Center, which manages the medical equipment and supply schedules, said that distribution networks at the state and local level would likely vary considerably, depending on the size of the customer. In some cases, the manufacturer might supply the state or local customer directly. The officials said that one would have to check with each state or local customer to determine whether it received products from a distributor or a manufacturer. A VA official also stated that, in her opinion, one of the real issues was not the disruption of the distribution network; rather, it was that manufacturers would have to break their established agreements with dealers and distributors for state and local customers in order to serve that market themselves. The VA officials did not agree that a manufacturer’s administrative burden would increase significantly; they said that most companies would track their sales regardless of whether a sale was made through a federal supply schedule or through a state or local agency procurement.

Manufacturers we spoke with said that there is a higher cost of doing business with state and local government customers, a cost that the manufacturer cannot recoup at the federal supply schedule price. They said that implementing cooperative purchasing could result in manufacturers raising prices on the federal supply schedule. Manufacturers and distributors are also concerned that nonfederal governments would expect VA vendors to perform additional services, such as warehousing, training, or filling small orders. The manufacturers and distributors do not have to perform these services for federal agencies, and the schedule prices do not include the costs that would be associated with providing such services. GSA officials agreed that some medical equipment suppliers provide more services, such as training, to nonfederal governments and that these services are not available through federal contracts. According to GSA officials, should state or local governments want additional services, they would have to separately contract and pay for them. VA officials also said that vendors should not be expected to provide services beyond what the federal supply contracts specify at the schedule prices. The Public Hospital Pharmacy Coalition also agreed that state and local customers may require additional services. It said that if distributors and dealers can justify higher prices by providing such services, the state and local customers would be less likely to use cooperative purchasing.

Manufacturers and distributors are also concerned that nonfederal government agencies would not promptly pay bills for medical equipment and supplies ordered through VA vendors, instead taking 2 to 3 months to pay rather than 15 days. Although the VA officials at the National Acquisition Center acknowledged that some state and local governments do not always have good payment histories, they reiterated that any entity using the federal supply schedules would have to abide by the terms and conditions specified, which include prompt payment provisions. According to GSA, it is considering having federal prompt payment provisions apply under cooperative purchasing unless a state has a prompt payment law, in which case the state provisions, including recourse for noncompliance, would apply. GSA officials further pointed out that vendors would be informed of these provisions and could refuse to sell to a nonfederal government if they so chose.
Two other issues raised by the Health Industry Distributors Association were how vendors would determine whether a nonfederal organization was eligible to purchase products under the schedules program and what monitoring would be done to determine whether vendors were selling only to eligible organizations. GSA’s current plan is to establish an eligibility determination process under which nonfederal organizations wishing to participate in cooperative purchasing would submit an application to GSA. GSA would then determine eligibility and list the eligible nonfederal governments in an electronic data base. According to GSA officials, GSA has not yet determined how it will monitor adherence to program requirements, including eligibility requirements, under the cooperative purchasing program, and it could change its approach for implementing several aspects of the program, including the prompt payment provisions and the eligibility determination process, when it finalizes its implementation plan.

As discussed in chapter 2, GSA proposed to exclude two schedules maintained by VA because VA believed that if these two schedules were included in the cooperative purchasing program, the industries selling items on these schedules would increase the prices charged to the federal government. These schedules are the pharmaceutical schedule and one of the medical equipment and supply schedules—in vitro diagnostic substances, reagents, test kits, and sets. As indicated previously, the issues surrounding pharmaceuticals will be discussed in a separate GAO report. VA recommended that the schedule that includes certain medical equipment and supplies be excluded from the cooperative purchasing program because prices for some items on that schedule were also governed by the Veterans Health Care Act of 1992, and GSA accepted VA’s recommendation on that basis. Since making its initial recommendation to GSA, VA has concluded that items available through this schedule are not governed by the 1992 act. However, VA officials fear that businesses that manufacture and sell some products available through this schedule would increase their schedule prices. This schedule also contains other medical equipment and supplies—such as needles and pipettes—and these types of products may not be affected by the cooperative purchasing program to the same extent as other products on the schedule. Industries represented on the other schedules managed by VA may or may not be similarly affected. Among other items, the medical equipment and supply schedules include wheelchairs, antiseptic soap, and dental equipment. VA officials responsible for managing these schedules said that they did not review the other schedules after GSA’s implementation of cooperative purchasing was suspended because they did not know whether the program would be implemented.

According to GSA, the airline industry also raised objections to federal airline fares being made available under the cooperative purchasing program. A GSA official told us that airline company representatives expressed concern about the loss of revenue from greater use of the discounted federal fares and about controlling the use of GSA fares by state and local government employees. Further, she said that airline company representatives told her that the companies are concerned that some nonfederal employees may abuse the GSA fares and use them for nonbusiness-related travel.
This GSA official said that GSA was concerned that if the schedule were opened to state and local governments, airlines would no longer be willing to participate, substantially increasing travel costs for federal agencies. Even though GSA has not made a final determination on whether the airline schedule will remain closed to state and local governments, as noted in chapter 2, GSA officials told us that they do not intend to make the schedule available for cooperative purchasing.

In comments provided to GSA in response to its April 1995 Federal Register notice, representatives of the heavy equipment industry expressed their concerns that the cooperative purchasing program would negatively affect the industry. GSA has since received additional comments from the heavy equipment industry expressing this concern. Heavy equipment includes products such as road sweepers; emergency vehicles, such as fire trucks; tractors; and turf equipment, which are sold through about six GSA schedules. In their comments, several manufacturers and dealers that sell various products on some of these schedules said that local dealers’ profits could be adversely affected if the schedules containing these products are opened to state or local governments. According to these companies, profits would be reduced because dealers would receive lower fees for sales through schedules in their geographic areas, and profits from warranty work would not be sufficient to sustain operations. Several dealers said they would be forced out of business or would have to lay off employees, and local governments would lose the benefit of the training assistance the dealers provide as part of their sales efforts.

We confirmed that these concerns remain, at least for a number of such businesses. For example, three heavy equipment manufacturers whose equipment is available through the federal schedules program told us that sales to state and local governments through the program would take business away from their dealers and present serious financial difficulties for many of them. These manufacturers sell directly to federal agencies and pay their local dealers for any necessary set-up, delivery, and related servicing. Several dealers told us that what manufacturers pay them is not enough to keep them operating. Similarly, a fire truck manufacturer that is a GSA vendor said that nearly all of its fire truck sales are to state and local governments through a dealership network. The manufacturer pays dealers a fee or commission for each sale in the dealers’ geographic sales areas, and this fee is reduced for sales through the schedules program. Although acknowledging that it would have the option of not participating in the cooperative purchasing program, this manufacturer expressed concern that its competitors would participate, thus forcing it to do the same. During the course of our review, several other dealers that sell fire fighting vehicles contacted us to express concern about significant adverse effects they would experience because high proportions of their sales are to state and local governments and because they would receive limited or no fees or commissions for schedule purchases.

At our request, GSA’s contracting officers for five heavy equipment schedules reviewed comments GSA received in response to its April 1995 Federal Register notice.
According to the contracting officers, the majority of the comments focused on one GSA schedule—the construction and highway maintenance equipment schedule. Subsequent to GSA’s Federal Register notice, GSA received numerous comments from another heavy equipment industry represented on GSA’s fire fighting vehicles and waste disposal vehicles schedule. The contracting officer for these two schedules said that he did not believe those two schedules should be made available to nonfederal governments. First, he was concerned that the cooperative purchasing program might have a detrimental effect on the dealers, because a high proportion of sales are made to state and local governments, and that this might affect the manufacturers’ relationships with their dealers. His second concern was that the GSA vendors on these schedules might elect to cancel their GSA contracts or increase their prices under the federal schedules program. In contrast, contracting officers for other schedules that were mentioned in the industry comments to GSA, including the aerial lift equipment, turf equipment, generators, and air compressor schedules, said that few companies expressed concern about equipment sold through these schedules.

The Director of GSA’s Acquisition Management Center stated that as of January 1997, GSA planned to exclude the schedule that contains fire fighting vehicles from the cooperative purchasing program, based on information we provided GSA and on the responsible contracting officer’s assessment of the potential impact that opening this schedule could have on both the industry and the federal government. According to the Director, both the industry and the federal government could be negatively affected. Even though GSA has not yet made final decisions on other schedules, such as the construction and highway maintenance equipment schedule, the Director said that GSA intends to exclude those schedules or portions of schedules for which significant controversy exists about the potential adverse effects.

Representatives from 13 of the 59 state or local government contractors we contacted said that the cooperative purchasing program would have no effect on their companies. Of these 13 contractors, 6 said they were small businesses. The 13 contractors sell, among other things, office supply equipment; computer equipment; furniture; and road construction supplies, such as stone and asphalt. Among the reasons they cited were the unique specifications of products they sell to state or local governments, competitive prices, and the desire of local governments to have local servicing. For example, a hot mix asphalt contractor and a concrete products contractor told us that opening the GSA schedules to state and local governments would not have any effect on their companies. The contractors said that the products they supply have different specifications for different applications or projects, so state and local government agencies would have to continue to request bids for their projects because the specifications and requirements are unique to each project. For example, the mixture needed to repair a dam surface would be different from that needed to pave a parking lot.
In another example, a contractor for the state of West Virginia who supplies office equipment said that even though state government agencies’ procurement requests are much narrower and more localized than those of federal agencies, there would likely be little effect if the GSA schedules were opened, because state and local government agencies are successful at obtaining competitive prices and always seek out the best price. The contractor said it was therefore doubtful that state or local agencies would change the way they procured goods and services and instead buy through the federal schedules program; as a result, his small business would likely see little impact from the cooperative purchasing program.

A contractor in Montana that supplies computer equipment said that customers want “today’s technology at today’s prices.” He also said that because GSA’s contracts with vendors are long-term contracts, while his contracts with the City of Missoula, Montana, and the University of Montana are not, his small business can react more quickly to the dynamics of the fast-changing computer industry. Because of this ability, he did not believe that the cooperative purchasing program would affect his business. A spokesman for an office supply company that is both a GSA vendor and a state contractor in California and Nevada also said that the cooperative purchasing program would have little effect on his company because the state of California already has a schedules program very similar to the federal schedules program. However, he said the program could assist some states, such as Nevada, by reducing the amount of time required to procure office equipment from several months to only a few weeks.

Finally, a spokesman for a road sweeper company said that the cooperative purchasing program would not affect his company because, even though the company holds the GSA contract to supply sweepers to the federal government, he believes local governments would not buy this type of equipment through a federal supply schedule. According to this spokesman, local governments will not buy sweepers through the road-clearing and equipment schedule because such governments can get comparable prices through competitive bidding at the dealership level. He also said that the contracts local dealers have with local governments provide for extensive training and servicing, which would not be provided under the manufacturer’s contract with GSA.

Of the 59 state and local government contractors we contacted, 14 said that they did not know what effect the cooperative purchasing program would have on their companies. Of these 14 companies, 7 said they were small businesses. These companies include those that sell furniture, computer equipment, laboratory equipment and supplies, photographic supplies, and heavy equipment. They cited uncertainties about how others would react to cooperative purchasing and noted that the program offered both potential gains and losses. For example, according to one computer equipment supplier in West Virginia, it was difficult to predict what impact cooperative purchasing would have because the impact would depend not only on the difference in pricing between GSA’s vendors and state and local contractors but also on other factors, such as the servicing and warranty arrangements included as part of manufacturers’ contracts with GSA.
In addition, a furniture contractor in California, who is a dealer for a furniture manufacturer that is a GSA vendor, told us that it was difficult to determine what effect the cooperative purchasing program might have on his company. According to a dealership official, the GSA contract is not very profitable for the dealership because the manufacturer sets the price for GSA contract sales, which usually results in a lower profit margin than the dealership would like. This lower price can hurt the servicing of the contract because there is not sufficient profit for the local dealer to provide proper service. However, the dealership official said that although contracts with local government agencies are more profitable than GSA schedule program sales, his company incurs substantial costs by bidding on local government agency procurements. The bidding process has become very complex and expensive, with costs ranging from $5,000 to $10,000. Consequently, this dealership often has not bid on local government procurement solicitations.

Reflecting the diversity of views among individual businesses about how they would be affected by the cooperative purchasing program, associations representing industry have taken a range of positions on the program. With the exception of the medical equipment and supply industries, these associations did not provide data to support their predictions of the effects of cooperative purchasing. Some industry associations told us that they are in favor of the cooperative purchasing program. The Information Technology Industry Council, for instance, said that it supported the program but noted that its members would want some flexibility in its implementation. The Coalition for Government Procurement, representing over 300 businesses that supply about 75 percent of the federal government’s purchases, told us that about half of the Coalition’s membership supports the program and the other half opposes it. Other industry associations told us that they had not taken a position on cooperative purchasing or that they had mixed opinions on the program; in some cases, association officials said that they were not sufficiently familiar with cooperative purchasing to take a position. Several associations that represent small businesses, including the American Small Business Association and National Small Business United, said that they did not have enough information to form positions. As discussed earlier, several associations representing heavy equipment manufacturers and dealers and manufacturers of medical equipment and supplies opposed the program. These associations, most of which expressed their opposition in comments on GSA’s Federal Register notice, included the Associated Equipment Distributors, the Environmental Industry Association, the Material Handling Equipment Distributors Association, the National Retail Federation, the Health Industry Distributors Association, and the Health Industry Manufacturers Association.

In their oral comments, representatives from the Coalition for Government Procurement agreed with the contents of this chapter. They also raised concerns about some aspects of the cooperative purchasing program as currently proposed. For example, the Coalition said that it did not agree with GSA’s tentative plan to apply state prompt payment provisions in those cases in which states have them.
It said that this could significantly increase industry’s burden because businesses would have to work under many different state laws rather than a uniform law—the federal prompt payment statute. In addition, the Coalition raised some concerns about possible problems that could develop as the cooperative purchasing program is implemented. For example, it cited the possibility of some nonfederal governments (1) bypassing the cooperative purchasing program by asking GSA vendors to sell them products at schedule prices, or at schedule prices less the administrative fee, without the GSA vendors remitting the administrative fee to GSA; or (2) purchasing products through the cooperative purchasing program but, instead of using the products themselves, reselling them at higher prices than they paid. Finally, the Coalition noted that although it recognized that some businesses could save administrative costs by not having to compete separately for state or local contracts, some nonfederal government procurement processes require relatively little administrative effort by businesses.

In their comments, both GSA and VA acknowledged the uncertainties of, and the lack of data associated with, the effects cooperative purchasing would have on businesses. Similarly, the National Association of State Purchasing Officials noted a number of uncertainties, such as the extent to which businesses would be willing to abide by various state requirements, and noted that the value of state contracts to some businesses could be diminished if some state agencies used the federal schedules rather than state contracts.

GSA’s plans for implementing the cooperative purchasing program continue to evolve. These plans include, among other things, determining whether the potential negative effects on small business that might be associated with opening up particular supply schedules are outweighed by the potential positive effects on nonfederal government agencies. GSA’s determinations will entail judgments about trade-offs of positive and negative effects, and the data necessary to conclusively predict these effects are not likely to be available. GSA recognizes that these trade-offs exist, but Congress, state and local governments, and industry would have better information on how GSA would make its determinations if GSA improved its implementation approach in several ways.

As noted in chapter 1, in its April 1995 Federal Register notice, GSA indicated that schedules would be made available to nonfederal agencies upon their request unless the contracting officer responsible for the applicable schedule determined that it would not be appropriate to do so. Individual schedule vendors would be able to elect whether or not to make the products or services they sell through the schedules available to authorized nonfederal users. In addition, the notice stated that schedule contracts would be established only to meet the needs of federal agencies and proposed that two schedules—one for pharmaceuticals and one for certain medical equipment and supplies—not be opened to state and local governments. GSA officials said that GSA took no further actions to finalize the Federal Register notice after its authority to implement section 1555 was suspended. They told us, however, that GSA was considering a number of changes to how it would implement the program.
GSA stated that it developed these changes after meeting with representatives of the National Association of State Purchasing Officials, the National Institute of Governmental Purchasing, and several industry associations, and after reviewing the public comments received after it published its initial implementation plan. First, as a matter of policy, individual supply schedules would not be made available for use by nonfederal governments if opening a schedule would adversely affect the support provided to federal agencies in terms of price, quality of products or services, or delivery. Second, rather than assigning responsibility to the contracting officer for making case-by-case determinations regarding opening individual schedules, GSA officials were considering assigning this responsibility to the Federal Supply Service’s Assistant Commissioner for Acquisition. In making these determinations, the Assistant Commissioner would be expected to consider the recommendation of the contracting officer responsible for particular schedules and to consult, as appropriate, with other interested parties or associations representing them. The contracting officers’ recommendations would be based on an evaluation of the potential effects on federal agencies and on whether opening the schedule would be likely to have an adverse effect on local small business concerns or dealers that would not be offset by benefits to nonfederal agencies. With respect to VA’s schedules, GSA officials told us that GSA is considering assigning responsibility for making these decisions to VA.

The option of excluding individual schedules or classes of schedules from the cooperative purchasing program has come up both in the Federal Register notice and in our discussions with GSA. The Federal Register notice proposed excluding two schedules (pharmaceuticals and one medical equipment and supply schedule) from the cooperative purchasing program. According to a GSA official, GSA also does not intend to open the fire fighting vehicle schedule or the airline fare program to state and local participation. GSA officials said that they proposed to exclude the pharmaceutical schedule and one medical equipment and supply schedule in the Federal Register notice, and plan to exclude fire fighting vehicles and airlines, because of concern that opening these schedules to nonfederal users would not be in the interest of the federal government. In these cases, GSA anticipated that costs to federal agencies for products on these schedules would rise if the schedules were opened. For other schedules, GSA officials said that GSA would decide whether to open schedules on a case-by-case basis. According to GSA officials, GSA could also exclude portions of individual schedules from the program while opening the remaining portions.

GSA officials said that once GSA decides that it may be appropriate to open a schedule to nonfederal agencies, GSA would publish notices in the Commerce Business Daily and/or the Federal Register to obtain input from interested parties, such as industry associations; federal, state, and local government agencies; and schedule vendors. According to GSA, it would also use associations as a vehicle to provide information to individual interested industries and state or local governments. These notices would identify which schedule or schedules GSA would consider opening for use by nonfederal agencies and explain how the program would work.
The notices would include a contract clause that would have to be included in vendors’ contracts in order for the vendors to sell to nonfederal agencies through the schedules program. Each contractor on each federal supply schedule (about 6,600 contractors in total) that GSA or VA would propose to open would have the option of selling to state and local governments. GSA officials stated that such a process would allow GSA and VA to gauge the interest of state and local governments in using the schedules and the willingness of schedule contractors to sell to state and local governments under their schedule contracts. They said that the process would also provide the opportunity for nonschedule contractors and federal agencies to express their views. According to GSA, after it considers the input of potentially affected parties, if it decides to open a particular schedule, schedule contracts will be modified to permit use by state and local governments. The state and local governments that applied for authorization to use the federal supply schedules would then be notified that the schedule was open for use. According to GSA officials, this approach would allow interested parties to provide input before a decision is made and allow GSA to assess the appropriateness of opening a particular schedule while minimizing the costs of implementing the cooperative purchasing program.

The approach to implementing the cooperative purchasing program that GSA officials described to us appears reasonable in several respects. For example, it makes the program optional for GSA vendors and recognizes that nonfederal governments cannot be compelled to use the program; acknowledges that there may be trade-offs associated with opening up a particular schedule; recognizes that GSA’s primary mission is to meet the needs of federal agencies; provides a process for informing many potentially affected businesses; allows for schedule-by-schedule consideration; establishes decisionmaking authority at a higher level than initially proposed; and identifies the trade-off decisions that have to be made.

However, although GSA is considering changes to the implementation plan in the Federal Register notice, GSA has not completed a detailed, written plan that sets forth all its current thinking on how it intends to implement cooperative purchasing. Since the suspension of section 1555’s authority for the cooperative purchasing program is temporary, we believe that it would be prudent for GSA to be prepared to implement the program by having a detailed, written implementation plan. Such a plan would provide information to Congress, state and local governments, and industry that would better enable them to evaluate the likely effects of GSA’s determinations. Further, it would provide guidance to GSA and VA staff to facilitate consistency in these determinations.
Our work indicates that a successful plan would require, at a minimum, several components:

- guidance on the data that should be sought and the analysis that should be conducted in determining the expected effects on (1) federal agencies, (2) nonfederal governments, and (3) businesses, including non-GSA vendors;
- identification of potentially affected parties and the various means to be used to notify them when schedules will be considered for opening to nonfederal governments;
- designation of an official at an appropriate level of responsibility to make final determinations on whether individual schedules should be made available to nonfederal governments, particularly when businesses express concerns about significant adverse effects;
- provisions for evaluating the actual effects of opening schedules; and
- provisions for opening part of a schedule.

As indicated above, GSA initially published a Federal Register notice containing several elements of its planned implementation approach for the cooperative purchasing program. When the program was suspended, GSA discontinued work on completing a formal, written plan. However, GSA officials appropriately continued to consider how GSA would implement the program and identified changes to the initial approach set forth in the Federal Register. Several state and local governments and industry associations we contacted, as well as several of GSA’s contracting officers, did not know how GSA planned to implement the program or what information GSA would use to make its decisions on whether schedules would be opened to nonfederal governments.

Limited or nonexistent data make assessing the potential effects of the cooperative purchasing program a difficult task. Not having information on how GSA intends to implement the program has made it difficult for affected parties to assess the potential effects of cooperative purchasing and is likely to make it difficult for GSA’s and VA’s contracting officers to act consistently when they seek and consider information on possible effects. The lack of this type of information is also likely to hamper Congress in any further deliberations it may want to have on cooperative purchasing.

GSA, in deciding whether or not to make products or services available on federal supply schedules to nonfederal governments, will be required to make judgmental decisions regarding (1) the extent to which vendors and nonfederal governments will exercise their option of participating in the program; (2) the likelihood of vendors responding in such a manner that prices, the quality of products or services, or delivery will be affected from the standpoint of federal agencies; and (3) trade-offs between any expected potential benefits to nonfederal governments and any expected potential adverse effects on businesses. The Director of GSA’s Acquisition Management Center said that GSA has not yet provided guidance to its or VA’s staff, industry, or nonfederal governments on the data and analysis to be considered in making these judgmental decisions. For example, while many of the associations we contacted had views on the possible effects of cooperative purchasing, they generally provided no conclusive, detailed data to support their views.
This guidance would help GSA and VA staff, including contracting officers, as well as affected businesses, industry associations, and nonfederal governments, know what data and analyses are to be considered in making decisions, and it should help GSA staff make decisions that are as informed and consistent as possible. Although data availability is likely to remain a challenge for GSA, having a process that facilitates gathering appropriate data and an analytical framework for analyzing those data would enhance the decisionmaking process. Our work indicates that some data, such as the share of an industry’s output that is sold to state and local governments, can provide some insight on potential effects, even if such a measure alone cannot provide a precise quantitative prediction of the effects. Similarly, analysis of some characteristics of an industry, such as the ability of firms in the industry to charge different buyers different prices, may also provide some insight on potential effects, such as the potential for increased prices to federal agencies.

Explicitly identifying its priorities in weighing potential benefits and adverse effects would enhance GSA’s efforts to make its decisions on a consistent basis. Although GSA has indicated that its first priority is that its federal customers not face adverse effects, it has not yet indicated how it would weigh any recurring benefits or adverse effects against any one-time effects.

Even with guidance for GSA and VA staff, industry, and nonfederal entities, however, our findings suggest that sufficient data may not be available to GSA or VA for them to make quantitative assessments of expected benefits and negative effects. This indicates that GSA would often have to make judgmental and trade-off decisions based largely on the views of affected parties. In some cases, GSA’s decisions to open schedules may have significant adverse effects on some businesses, and GSA would have to make judgments about whether the expected benefits to nonfederal governments outweigh the expected adverse effects on these businesses. Excluding schedules, however, may prevent state and local governments from realizing some potential benefits.

Given this situation, a sensible plan would detail the process to be used to identify potentially affected parties and to solicit and consider data and views from them. GSA officials told us they plan to announce their intentions to open schedules in the Commerce Business Daily, whose purpose is to announce federal government contracting opportunities, and/or the Federal Register, as well as to work with associations representing state and local governments and industry. It is unclear, however, whether these actions would reach a sufficient number of potentially affected groups or would sufficiently target those groups that may be most affected by GSA’s opening individual schedules; these groups may not routinely be aware of Commerce Business Daily announcements or Federal Register notices, even though the latter publication is intended to reach a broader audience. Further, while GSA states that it plans to use associations representing industry as a means of getting information to individual interested parties, it is unclear that consulting with industry associations alone would give GSA an understanding of the effects that opening a schedule may have on individual businesses.
During the course of our work, we found that some industry groups, state and local contractors, and state and local governments were not aware of the cooperative purchasing program, despite the April 1995 Federal Register notice. In addition, several associations told us that their memberships had conflicting views on the program, which, in some cases, prevented the association from taking a position.

Recognizing the judgment inherent in the decisions it may be making when determining whether schedules should be opened, and the potential lack of sufficient data with which to make these decisions, GSA acknowledges that it may need to elevate the level at which decisions are made. In its Federal Register notice, GSA indicated that its contracting officers may be making decisions on opening schedules. However, GSA is now considering assigning this responsibility to the Assistant Commissioner for Acquisition, Federal Supply Service. The Assistant Commissioner would receive recommendations from contracting officers regarding requests to make schedules available to nonfederal users. In those instances where GSA has delegated authority to award schedule contracts to another agency, such as VA, GSA is considering also delegating authority to decide on opening schedules to that agency’s Senior Procurement Executive. Decisions to open schedules are policy decisions that could have significant adverse effects on some businesses or industries. In our opinion, policy decisions that can have such significant effects should be made at a higher level than the contracting officer level.

Neither GSA’s Federal Register notice nor the changes GSA officials told us they were considering included a provision for evaluating GSA’s implementation of the cooperative purchasing program, including the effects of opening schedules to state and local governments, even though GSA officials said that GSA had at one time considered implementing the program in a series of “pilots.” Because the effects of cooperative purchasing are likely to vary by industry or even by product or service, because of the uncertainties over the extent to which state and local governments and businesses will actually exercise their options to participate in the program and purchase items from vendors listed on the schedules, and because it will likely be very difficult to get sufficient data before implementation to predict effects, we believe evaluations would be helpful to GSA. Such evaluations should help GSA (1) determine actual effects, (2) better gauge the types of data needed to make decisions, (3) identify the best means for obtaining relevant input from potentially affected organizations, and (4) provide a basis for GSA to reverse any decisions that may turn out to have more negative than positive effects. These evaluations could also provide objective information on whether the program may be lowering prices or administrative costs.

A related improvement to GSA’s implementation approach would be to include in its plan the steps to be taken in the event a decision to open a schedule is found to have unexpected adverse effects. This situation was not addressed in GSA’s Federal Register notice. Possible steps could include reversing the decision or taking some other action to mitigate the adverse effects.
Another element that would enhance the potential for the implementation plan to be successful would be a provision for opening part of a schedule to nonfederal governments when the schedule contains a mix of products that could be affected differently by cooperative purchasing. For example, the fire fighting and waste disposal vehicles schedule contains products that are made by two different industries, as does the construction and highway maintenance equipment schedule. According to GSA’s contracting officer for these two schedules, the effects of the cooperative purchasing program would be quite different on the various industries represented on those schedules. The fire fighting vehicle industry relies almost exclusively on sales to nonfederal governments, while the waste disposal vehicle industry produces many types of products that are sold not only to nonfederal governments but to private industry as well. Similarly, the in vitro diagnostic medical equipment and supply schedule contains a diverse mix of products. According to a VA official, when VA requested that GSA exclude this schedule from the cooperative purchasing program, it was concerned with potential price increases for only three of the items on the schedule because they represent most of the costs related to the schedule.

In their written comments on a draft of this report, GSA and VA agreed that assessing the potential effect of cooperative purchasing will be difficult because of questions about how nonfederal governments and businesses would react to the program and because of the lack of data on which to predict the potential effects. The agencies agreed that an implementation plan that considers the effects on all affected parties would enhance the decisionmaking process for the program. Both agencies further said that the uncertainty about the program makes it important that determinations to open or not open particular schedules to cooperative purchasing be made on a case-by-case basis. GSA said that it believed using a process like the one we recommended would provide enough information for it to make informed decisions and that it would base its decisions on the best available information. VA also noted the importance of having a good decisionmaking process and implementation plan and said that it was considering industry conferences for schedules that were candidates for the cooperative purchasing program.

In its comments on the draft report, the Coalition for Government Procurement said that it generally agreed with our conclusion that GSA’s approach to implementing cooperative purchasing appears reasonable in several respects, but it expressed some concerns about GSA’s tentative plan. In particular, as indicated in chapter 3, it disagreed with GSA’s tentative plan to apply state prompt payment provisions for states having such laws, noting the potentially increased administrative burden of requiring sellers to work under multiple laws rather than a uniform law—the federal prompt payment law. The Coalition also disagreed with a part of GSA’s tentative plan regarding how businesses could exercise the option not to accept orders. Under GSA’s tentative plan, vendors would have the option to modify their contracts with GSA to enable nonfederal governments to purchase goods and services. Once a vendor had agreed to the modification, it would have 5 days after receiving an order from a nonfederal government to decline that order.
The Coalition said that this would not be adequate time for some businesses to make this decision for new customers and that this could also create uncertainty among the nonfederal governments placing orders. The National Association of State Purchasing Officials also expressed concern with this aspect of GSA’s tentative plan, noting that such a provision could leave state agencies without a readily available supply source.

The Coalition suggested that GSA involve representatives from nonfederal governments and from businesses in developing its implementation plan and phase in its implementation of cooperative purchasing. Similarly, VA pointed out that some effects of cooperative purchasing, such as lower federal product prices or lower vendor administrative costs, will not be known until some experience is gained under the program. During the course of our review, GSA officials told us they were aware of the concerns potentially affected parties had with cooperative purchasing and that GSA had worked, and would continue to work, with VA and these parties in developing its implementation plan. Further, it appears that GSA’s and VA’s intention to consider opening schedules on a schedule-by-schedule basis would, in effect, provide a phase-in approach that would give the agencies experience with opening some schedules before a large number are opened.

The potential benefits and negative economic consequences of opening federal supply schedules to nonfederal governments are likely to vary considerably among state and local government agencies as well as among industries and individual businesses. Since the effects of cooperative purchasing will depend in large part on how GSA implements the program, it is important for GSA to provide Congress with a detailed implementation plan. Such a plan could show how GSA would decide whether or not to open a particular schedule to nonfederal users and how it would seek to balance the benefits and adverse effects of cooperative purchasing. Such information would provide a stronger basis than is currently available for Congress in its consideration of whether it should take any action while GSA’s authority for the cooperative purchasing program remains suspended.

The potential effects of the cooperative purchasing program are likely to vary among state and local governments and the government of Puerto Rico. Since participation is voluntary, these governments would use the schedules only if they perceived benefits from doing so. Some state and local governments are likely to benefit from lower prices for some products, less administrative burden, and shortened procurement cycle times as a result of cooperative purchasing, although the extent to which these benefits would materialize is unclear and depends on several factors. The expected benefits are likely because several state and local governments and some businesses want the schedules opened and because some schedule prices are lower than nonfederal governments’ prices. Also, some state and local governments and businesses agree that reduced administrative effort and cycle times are a likely result of cooperative purchasing. In addition, some nonfederal law enforcement agencies that have had access to the schedules said that they experienced benefits from having such access. Several factors are likely to affect the extent to which these expected benefits would materialize.
These factors include state or local laws, policies, or preferences that could preclude or constrain use of the schedules in some instances; the unavailability of some items through the schedules program; the frequent ability of state and local agencies to get better prices or contract terms through other sources; and the relatively small proportion of state and local expenditures that are made for some items available through the schedules program. These factors will vary among and within states and localities, making precise predictions of effects quite difficult, if not impossible. Predictions are even more difficult given the possibility that some state and local governments could change their laws, ordinances, or policies in the future to permit greater use of federal supply schedules and that businesses could change their practices as well. These possibilities remain speculative at this point.

Indian tribal governments are not likely to experience significant effects from cooperative purchasing. This is because many have already had access to federal supply schedules, and federal agencies would remain responsible for providing services to tribes in program areas for which tribal governments do not already have access to the schedules.

Cooperative purchasing’s effects on businesses are likely to vary among industries and individual firms, including firms in the same industry. It appears reasonable to us that at least some of the benefits perceived by some businesses, including small businesses and dealers, may materialize. These potential benefits include increased sales, profits, or exposure to additional markets and reduced administrative costs as a result of businesses not having to compete separately for some contracts with various state and local governments. Those companies that are already GSA vendors and that sell to both federal and nonfederal governments would likely see the greatest administrative savings, since these companies would not have to compete separately for federal, state, and local contracts. For a particular firm, these administrative savings would depend on the nature of the business, the extent to which it supplies state or local governments, and the extent to which state and local governments exercise the option of buying through the cooperative purchasing program. Thus, the potential for administrative savings cannot be predicted. The full extent to which businesses would elect to exercise the option of selling to nonfederal governments through the program also cannot be predicted.

On the other hand, some industries, including small businesses and dealers, could experience reduced sales or profits, a reduction in operations, or even closure if the schedules containing products they sell are opened to nonfederal buyers. While the extent to which these effects would occur cannot be predicted, two factors that can influence the results are the proportion of an industry’s or firm’s sales to state and local governments and how the industry accounts for its dealership network in its contracts with GSA. Those manufacturers that sell higher proportions of their products to state and local governments and whose dealerships receive no or reduced fees or commissions for sales made through the federal supply schedules program appear to have the greatest potential for experiencing significant adverse effects, along with their dealers. The effects can be even more severe if dealers are expected to provide extensive service in connection with these types of sales.
The optional nature of the program, however, should limit the extent to which manufacturers would want to participate in the program when doing so would negatively affect their dealership networks. In cases where competitive forces could nonetheless drive manufacturers to participate, GSA could further mitigate these effects through its plan to exclude schedules from the program when adverse effects on federal agencies are likely or when the adverse effects on businesses are likely to exceed the expected benefits to nonfederal governments. Regardless of whether the actual effect on different industries would be positive or negative, several factors would tend to limit the magnitude of the effect. Various industries sell varying proportions of their output to state and local governments, and, as previously discussed, several conditions would limit the volume of purchases nonfederal governments would make through the schedules. Also, some businesses are not likely to be affected at all because prices already offered to state and local governments may be comparable to or better than schedule prices, their product or service is not available through the schedules, or state or local governments may not choose to buy through the schedule to retain such benefits as service or training from their current contractors. These variables, together with the lack of available data to independently predict how nonfederal governments or their suppliers would respond to the cooperative purchasing program in the future, make it impossible to accurately predict the overall effect of the program on individual businesses. All of the uncertainties at the state, local, and business levels make it difficult, if not impossible, to determine the effect of cooperative purchasing on the federal government. Although it appears likely that Puerto Rico and some state and local governments and businesses would use the program, it is not clear whether this expanded use of the schedules would lead to lower schedule prices or lower federal administrative fees. On the other hand, it is doubtful that the federal government would experience adverse effects since GSA plans to exclude schedules when such effects are anticipated and would be able to act if unexpected negative effects arise. GSA’s policy that it will continue to administer the federal supply schedules program primarily for its federal customers is consistent with GSA’s mission. GSA’s plan for implementing the cooperative purchasing program is evolving and has not yet been put into a final written document. Although this is understandable given the legislative suspension of authority for the program, Congress, GSA, and any affected parties will need a written plan before cooperative purchasing is implemented. In our view, such a plan is essential for Congress to be able to judge whether GSA is taking appropriate steps to fairly balance the potentially beneficial and adverse effects of cooperative purchasing, without compromising the interests of its federal customers. The implementation approach GSA has been developing seems reasonable in several respects, including its recognition that effects will vary and that judgment will be involved in making trade-off decisions. However, these trade-off decisions are likely to be quite difficult in a number of situations in which some or many businesses perceive significant adverse effects, while state or local governments desire access to the schedules.
A written plan would provide a basis for GSA to ensure that its staff is making decisions in a manner consistent with all available information. The plan could indicate, for instance, that GSA would consider the share of industry output that is sold to state and local governments as one data element that would contribute to GSA’s decision. It could also discuss how GSA would weigh the views of affected parties in situations without adequate quantitative data. Further, should GSA delegate decisionmaking authority to VA’s Senior Procurement Executive, a written plan could provide a mechanism for consistent decisionmaking at GSA and VA. GSA’s decisions will be further complicated in some cases because businesses in the same industries have differing views about the program, and there may not be sufficient quantitative data to enable GSA to weigh the benefits and adverse effects. This makes it critical for the parties that are potentially affected to have a clear understanding of how GSA intends to implement the program and how it will consider the views of affected parties as well as any available quantitative data. Such understanding will be crucial for the credibility of GSA’s decisions should the program be implemented as the law now provides. We believe that certain elements in the approach GSA has been considering should particularly be incorporated into its final written plan. These include such items as the optional nature of the program, designation of a high-level official to make final decisions on opening schedules, provision for opening parts of schedules when effects for different industries may vary significantly, and use of the Commerce Business Daily and/or the Federal Register to announce its intention to open schedules. However, we believe that GSA’s plan should also include (1) guidance to its and VA’s staff on considering benefits and negative effects, (2) steps that will be taken in addition to using the Commerce Business Daily and/or the Federal Register to notify potentially affected parties, (3) provisions for evaluating the actual effects of decisions made to open schedules, and (4) steps that will be taken if the actual effects of opening schedules are different from those GSA projected. We recommend that as part of GSA’s report on the cooperative purchasing program to Congress mandated by the Clinger-Cohen Act of 1996, the Administrator provide a detailed plan setting forth the steps that GSA will take to implement the program. In particular, the Administrator’s report should provide Congress with a written implementation plan that emphasizes the optional nature of the program and that (1) includes guidance that will be provided to GSA and VA staff on the available quantitative data, affected parties’ views, and other factors that need to be considered in assessing benefits and negative effects of opening up schedules; (2) identifies appropriate processes for obtaining and considering information and views from a full range of affected parties; (3) designates a high-level official or officials who are to make final decisions on opening schedules, especially when businesses express significant concern about potential adverse effects; (4) provides for evaluating the actual effects of decisions to open schedules, and a means for addressing the effects if the data so warrant; and (5) allows for partially opening schedules when appropriate. GSA and VA agreed with our conclusions and recommendation.
Both agencies said that the uncertain effects of cooperative purchasing illustrated the importance of having a process that would enable them to make informed decisions on a case-by-case basis. GSA agreed that such a plan would assist Congress and others in understanding the program and evaluating its potential impact and benefits. The National Association of State Purchasing Officials agreed with our conclusion that allowing nonfederal governments to use federal supply schedules can lead to positive effects for state and local governments. It noted, however, that any potential positive effects would be limited by the exclusion of certain contracts from the program. The Association also agreed that GSA should use communication tools in addition to the Commerce Business Daily to reach states and small businesses. The Coalition for Government Procurement generally agreed with our conclusions and recommendation and emphasized the importance of an implementation plan and good evaluations of the program’s effects. The Coalition suggested that GSA involve business and nonfederal government representatives in formulating this plan and that GSA phase in the implementation.
Pursuant to a legislative requirement, GAO assessed the potential effects of a cooperative purchasing program administered by the General Services Administration (GSA) on nonfederal governments and federal agencies, and on industry, including small businesses and dealers. GAO found that: (1) the potential effects of the cooperative purchasing program are likely to vary among state, local, and the Puerto Rican governments; (2) since participation is voluntary, these governments would use the schedules only if they perceived benefits from doing so; (3) most of the nonfederal entities GAO surveyed anticipated that they would participate; (4) although some of these governments may experience benefits, several factors may limit the extent of these benefits; (5) the program is likely to have little if any effect on Indian tribal governments because the schedules program is already available to them under separate authority; (6) if GSA effectively implements its plan to exclude schedules from the program when adverse effects on federal agencies are indicated, there is little risk that the program will negatively affect the federal government, but whether it will have positive effects depends largely on whether increased use of the schedules by state and local governments would lead to lower prices and reduced administrative charges by GSA; (7) it is unclear at this time whether either of these would occur; (8) the potential effects of the cooperative purchasing program on industry, including small businesses and dealers, are also likely to vary, although sufficient data are not available to conclusively predict these effects; (9) some businesses, particularly GSA vendors, expect to benefit from increased sales or reduced administrative costs, while other businesses expect to lose sales or have lower profits; (10) still other businesses do not believe they will be affected by the program; (11) most of the concerns that businesses have expressed about significant adverse effects involve only a few GSA schedules; (12) GSA's plan to implement the cooperative purchasing program is still evolving; (13) in 1995, GSA published its initial approach and has been considering changes while implementation has been suspended; (14) GSA has not yet completed a more current, detailed plan, but such a plan would better enable Congress to weigh the merits of cooperative purchasing since so much depends on implementation decisions; (15) although the approach GSA has been considering appears reasonable in key respects, GAO believes a number of improvements would better position GSA to make decisions on making particular schedules available to nonfederal users; and (16) these improvements include the preparation of a written implementation plan and guidance to staff on factors to consider when making decisions.
According to the Defense Health Board’s Task Force on the Future of Military Health Care, rising health care costs result from a multitude of factors that are affecting not only DOD but also health care in general. These factors include greater utilization of health care services, increasingly expensive technology and pharmaceuticals, growing numbers of users, and the aging of the retiree population. Additionally, in 2009, the Defense Business Board reported that defense health care costs are taking up more of the defense budget, and its health care programs may eventually compete with other critical defense acquisition and operational programs. Figure 1 illustrates the actual and projected future cost growth for DOD’s MHS according to the Congressional Budget Office. DOD operates a large, complex health system that provides health care to 9.6 million beneficiaries. DOD employs almost 140,000 military, civilian, and contract personnel who work in medical facilities throughout the world. Beneficiaries fall into different categories: (1) active duty servicemembers and their dependents, (2) eligible National Guard and Reserve servicemembers and their dependents, and (3) retirees and their dependents or survivors. In fiscal year 2009, active duty servicemembers and their dependents represented 32 percent of the beneficiary population, eligible National Guard and Reserve servicemembers and their dependents represented 14 percent, and retirees and their dependents or survivors made up the remaining 54 percent. The management of DOD’s MHS crosses several organizational boundaries. Reporting to the Under Secretary of Defense for Personnel and Readiness, the Assistant Secretary of Defense for Health Affairs is the principal advisor for all DOD health policies, programs, and force health protection activities. Health Affairs issues policies, procedures, and standards that govern management of DOD medical programs and has the authority to issue DOD instructions, publications, and directive-type memorandums that implement policy approved by the Secretary of Defense or the Under Secretary of Defense for Personnel and Readiness. It integrates the services’ budget submissions into a unified medical budget that provides resources for DOD’s MHS operations. However, Health Affairs lacks direct command and control of the services’ military treatment facilities. See figure 2 for the current organizational structure of DOD’s MHS. Operationally, DOD’s MHS has two missions: supporting wartime and other deployments, known as the readiness mission, and providing peacetime care, known as the benefits mission. The readiness mission provides medical services and support to the armed forces during military operations, including deploying medical personnel and equipment throughout the world, and ensures the medical readiness of troops prior to deployment. The benefits mission provides medical services and support to members of the armed forces, retirees, and their dependents. DOD’s dual health care mission is delivered by the military services at 59 military treatment facilities capable of providing diagnostic, therapeutic, and inpatient care, as well as through hundreds of clinics and private sector civilian providers. The military treatment facilities make up what is known as DOD’s direct care system for providing health care to eligible beneficiaries.
The Departments of the Army and the Navy each have a medical command, headed by a surgeon general, who manages each department’s respective military treatment facilities and other activities through a regional command structure. The Navy’s Bureau of Medicine and Surgery supports both the Navy and Marine Corps. The Air Force Surgeon General, through the role of medical advisor to the Air Force Chief of Staff, exercises similar authority to that of the other surgeons general. Each service also recruits, trains, and funds its own medical personnel to administer the medical programs and provide medical services to beneficiaries. For the management of military treatment facilities within the National Capital Region and the execution of related Base Realignment and Closure (BRAC) actions in that area, an additional medical organizational structure and reporting chain was established in 2007. This structure is known as the Joint Task Force National Capital Region Medical, whose Commander reports to the Deputy Secretary of Defense, and the two inpatient medical facilities in the area are considered joint commands assigned to the task force. DOD also operates a purchased care system throughout the country that consists of a network of private sector civilian primary and specialty care providers. The TRICARE Management Activity, under the authority, direction, and control of Health Affairs, is responsible for awarding, administering, and managing the contracts supporting this network. For many years, GAO and other organizations have highlighted a range of long-standing issues surrounding DOD’s MHS and its efforts to reorganize its governance structure. For example, in 1995, we reported that interservice rivalries and conflicting responsibilities hindered improvement efforts. We further noted that the services have historically resisted efforts to change the way military medicine is organized, including consolidating the services’ medical departments, in favor of maintaining their own health care systems, primarily on the grounds that each service has unique medical activities and requirements. Since the 1940s, there have been over 20 studies that have addressed military health care organization. DOD has identified 11 initiatives aimed at slowing medical cost growth, but it has not fully applied results-oriented management practices to its efforts. Specifically, it has developed an implementation plan and related estimates of potential cost savings for only 1 of the 11 initiatives. As a result, DOD has limited its effectiveness in implementing and monitoring these initiatives and achieving related cost savings and other performance goals. The Senior Military Medical Advisory Council, a committee that functions as an executive-level discussion and advisory group, has approved 11 initiatives that it believes will help reduce rising health care costs. (See table 2 for a list of these initiatives.) These 11 initiatives consist of changes to MHS clinical and business practices in areas ranging from primary care to psychological health care to purchased care reimbursement practices. DOD’s initiatives generally reflect broader concepts that were discussed by health care experts, business leaders, and public officials at two separate forums convened by GAO in 2004 and 2007 on ideas for responding to cost and other challenges in the health care system. For example, in the 2004 forum, 55 percent of participants strongly agreed that the U.S.
health care system is characterized by both underuse of wellness and preventive care and overuse of high-tech procedures. In addition, the plenary speakers at the 2004 forum observed that unwarranted variation in medical practices nationwide points to quality and efficiency problems. Similarly, DOD developed initiatives that seek to increase the productivity of and to ease access to primary care and encourage wellness, preventive, and evidence-based health care. Further, in the 2007 forum, 77 percent of participants strongly agreed that the federal government should revise its payment systems and leverage its purchasing authority to foster value-based purchasing for health care products and services. Similarly, MHS officials discussed potential changes that led to the fourth and fifth initiatives as listed in table 2. Both initiatives involve changes to payment for medical care to reward quality of care and health outcomes instead of volume of services rendered. Another of the 11 initiatives aims to reduce costs by keeping patients as healthy as possible during treatment and recovery. With this initiative, MHS officials hope to reach the goal of reducing hospital readmissions by 20 percent and hospital-acquired infections by 40 percent by 2013 from the baseline year of 2010. DOD has not fully developed results-oriented management plans for implementing its health care initiatives, which could help ensure the achievement of these initiatives’ cost savings goals. Specifically, we found that as a start to managing the implementation of its initiatives, DOD has developed a dashboard management tool that will include elements such as an explanation of the initiative’s purpose, key performance measures, and funding requirements for implementation. In December 2011, the Senior Military Medical Advisory Council approved six dashboards that were significantly, but not entirely, completed. A Health Affairs official stated that DOD currently lacks net cost savings estimates for all but one of the initiatives. Cost savings estimates are critical to successful management of the initiatives so that DOD can achieve its goal of reducing growth in medical costs as stated in the 2010 Quadrennial Defense Review. Further, DOD developed an implementation plan to support the dashboards. The implementation plan has a set format that includes such information as general timelines and milestones, key risks, and estimated cost savings. DOD currently has one completed implementation plan, which also contains the one available cost savings estimate among all the initiatives. See table 2 for the progress made for each of these initiatives. As table 2 shows, DOD had completed a dashboard, an implementation plan, and a cost savings estimate for only 1 of its 11 initiatives as of January 13, 2012. As DOD completes its dashboards, implementation plans, and cost savings estimates, it could benefit from the application of the six characteristics of a comprehensive, results-oriented management framework, on which GAO has previously reported, including a thorough description of the initiatives’ mission statement; problem definition, scope, and methodology; goals, objectives, activities, milestones, and performance measures; resources and investments; organizational roles, responsibilities, and coordination; and key external factors that could affect the achievement of goals.
DOD has completed an implementation plan for 1 of its 11 initiatives—the Patient Centered Medical Home initiative, which seeks to increase access to DOD’s primary care network. Based on DOD data, we estimate that this initiative will have a net cost savings of $39.3 million through fiscal year 2016. Using the desirable characteristics of a results-oriented management plan, we assessed the one approved implementation plan, and our analysis of this plan showed that DOD addressed four of the characteristics and partially addressed two other characteristics. For an overview of the six desirable characteristics of comprehensive, results-oriented management plans and our assessment of the extent to which DOD’s Patient Centered Medical Home implementation plan incorporates these desired characteristics, see table 3. Our review of the Patient Centered Medical Home implementation plan found that DOD partially addressed the desired characteristic regarding resources and investments. While DOD acknowledged that some staff will be committed full-time to working on this initiative, it did not show in the plan, as prescribed, the number of personnel needed in total to implement the initiative. A DOD official noted that the section in the plan that asks for the number of personnel needed was intended for officials to show if additional personnel and funding beyond the current level were needed. However, the absence of information concerning DOD’s use of current staff renders the size of the initiative’s impact on utilization of personnel unclear. In addition, the Patient Centered Medical Home implementation plan’s annual cost savings estimate did not reflect net losses when they occur in a given fiscal year. For example, in fiscal years 2012 and 2013, DOD’s investment in the Patient Centered Medical Home initiative is larger than savings, but the implementation plan does not show the net losses for those early years. Instead, it shows zero cost savings for those years. A DOD official responded by noting that DOD interpreted estimated savings to include only actual savings in any given year and not net losses. However, without accounting for both cost savings and investments, decision makers lack a comprehensive understanding of a program’s true costs. Additionally, our review of this implementation plan found that DOD partially addressed the desired characteristic of discussing the key external factors that could have an impact on the achievement of goals. While it provided an extensive overview of internal and external challenges, DOD did not outline a specific process for monitoring such developments. Further, the implementation plan does not fully explore the effect of such challenges on the program’s goals or explain how it takes such challenges into account, such as by outlining a mitigation strategy to overcome them. As DOD further develops its dashboards and implementation plans and incorporates the desired characteristics, it will be in a stronger position to better manage its reforms and ultimately achieve cost savings. For example, DOD was experiencing a 5.5 percent annual increase in per capita costs for its enrolled population according to data available as of December 2011, but DOD had set its target ceiling for per capita health care cost increases for fiscal year 2011 at a lower rate of 3.1 percent.
According to DOD calculations using 2011 enrollee and cost data, if DOD had met its target ceiling of a 3.1 percent increase as opposed to a 5.5 percent increase, the 2.4 percentage point reduction would have resulted in approximately $300 million in savings. As DOD’s initiatives evolve and each of these management tools is completed for each of the initiatives, they may provide DOD with a road map to improve its efforts to implement, monitor progress toward, and achieve both short-term and longer-term financial and other performance goals. DOD also has not completed the implementation of an overall process for monitoring progress across its portfolio of health care initiatives and has not completed the process of identifying accountable officials and their roles and responsibilities for all of its reform efforts. Our work on results-oriented management has found that a process for monitoring progress is key to success. We have also reported that clearly defining areas of responsibility is a key process that provides management with a framework for planning, directing, and controlling operations to achieve goals. In addition, as MHS leaders develop and implement their plans to control rising health care costs, they will need to work across multiple authorities and areas of responsibility. As the 2007 Task Force on the Future of Military Health Care noted, the current MHS does not function as a fully integrated health care system. As we reported in October 2005, agreement on roles and responsibilities is a key step to successful collaboration when working across organizational boundaries, such as the military services. Committed leadership by those involved in the collaborative effort, from all levels of the organization, is also needed to overcome the many barriers to working across organizational boundaries. For example, Health Affairs centrally manages Defense Health Program funds for the military services, but it lacks direct command and control of the military treatment facilities. Additionally, we reported in September 2005 that the commitment of agency managers to results-oriented management is an important practice to help increase the use of performance information for policy and program decisions. DOD’s one approved implementation plan for the Patient Centered Medical Home initiative provides further information on how DOD has applied a monitoring structure, defined accountable officials, and assigned roles and responsibilities in the case of this initiative. Senior officials stated that they plan to monitor performance, specifically cost savings, and said that if projected cost savings were not realized, senior leadership would reconsider further investment in the program. We have reported that in some instances, up-front investments are needed to yield longer-term savings and that it is essential for officials to monitor and evaluate whether the initiative is meeting its goals. However, DOD has not completed this process for the remainder of its initiatives. Without sustained top civilian and military leadership that is consistently involved throughout the implementation of its various initiatives, and until DOD fully implements for all of its initiatives a mechanism to monitor performance and identify accountable officials, including their roles and responsibilities, DOD may be hindered in its ability to achieve a more cost-efficient MHS and at the same time address its medical readiness goals, improve its overall population health, and improve its patients’ experience of care.
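The arithmetic behind the approximately $300 million figure cited above can be sketched as follows (a rough check under our assumption that both growth rates apply to a common enrolled-population cost base B; the base is our back-calculation, not a figure DOD reported):

\[
\text{savings} \approx (5.5\% - 3.1\%) \times B = 0.024\,B \approx \$300 \text{ million} \quad\Longrightarrow\quad B \approx \$12.5 \text{ billion}
\]

In other words, the cited savings are consistent with an enrolled-population cost base on the order of $12.5 billion; DOD’s actual calculation may net enrollee counts and costs differently.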
Beyond the medical initiatives designed to slow medical cost growth, DOD has taken steps to implement several other initiatives designed to improve MHS governance. However, DOD officials have not fully employed several key management practices to help ensure that these medical governance initiatives will achieve their stated goals. DOD has to varying degrees taken steps to implement some of the seven governance initiatives approved by the Deputy Secretary of Defense in 2006 with the goal of achieving economies of scale, operational efficiencies, and financial savings as well as consolidating common support functions and eliminating administrative redundancies. In 2007, after the initiatives were approved, we recommended that DOD demonstrate a sound business case for proceeding with these initiatives, including detailed qualitative and quantitative analyses of benefits, costs, and associated risks. Initially, DOD expected that the seven initiatives would save at least $200 million annually once implemented. However, more than 5 years later, DOD officials have projected estimated financial savings for only one of the seven initiatives concerning the governance and management of the MHS—an initiative to consolidate the command and control structure of its health services within the National Capital Region. Similarly, as part of a separate initiative aimed at increasing efficiency and conserving funds, DOD consolidated its operations at the Naval Health Clinic Great Lakes with the Department of Veterans Affairs’ (VA) North Chicago Veterans Affairs Medical Center, but has not measured its progress in achieving financial savings. Officials said that many of the governance initiatives have significant potential for cost savings, and some of these governance initiatives have already achieved various efficiencies. However, financial savings have not been demonstrated for the majority of the initiatives because most have not been fully implemented. For those that have been implemented, such as the Joint Medical Education and Training Campus in San Antonio, Texas, officials stated that they were unable to develop baseline training costs against which to measure future costs and potential savings. However, the governance structure to command, control, and manage operations at the campus has resulted in the consolidation of 39 of 64 courses. According to officials, this has resulted in efficiencies such as the standardization of pharmacy clinical policy across the services. Table 4 lists the steps DOD has taken to implement the seven governance initiatives, the results of those actions, and potential opportunities to achieve additional cost savings and efficiencies. Although DOD has achieved varying levels of implementation of its MHS governance initiatives, it did not consistently employ several key management practices found at the center of successful mergers, acquisitions, and transformations. Further, BRAC implementation requirements drove implementation progress for a number of initiatives. At a GAO forum in September 2002, leaders with experience managing large-scale organizational mergers, acquisitions, and transformations identified at least nine key practices and lessons learned from major private and public sector organizational mergers, acquisitions, and transformations.
During the course of our work examining DOD’s health care initiatives, we determined that six of the key practices identified at our 2002 forum were especially important to ensure that DOD has the framework needed to implement its governance initiatives: (1) a focus on a key set of principles and priorities that are embedded in the organization to reinforce the new changes, (2) coherent mission and integrated strategic goals to guide the transformation, (3) implementation goals and a timeline to build momentum and show progress from day one, (4) a communication strategy to create shared expectations and report related progress, (5) a dedicated implementation team with the responsibility and authority to drive the department’s governance initiatives, and (6) committed and sustained leadership. To its credit, DOD developed a set of guiding principles to facilitate its transformation of DOD’s medical command structure. A clear set of principles and priorities can serve as a framework to help the agency create a new culture and drive employee behavior. For example, a set of core values can become embedded in every aspect of the organization and can serve as an anchor that remains valid and enduring while organizations, personnel, programs, and processes change. Senior DOD officials developed a set of guiding principles to direct efforts throughout the governance transformation. These principles and goals were included in the November 2006 memorandum: (1) provide a healthy, fit, and protected force; (2) create a trained, ready, and highly capable medical force that delivers superior medical support; and (3) ensure efficient delivery of a comprehensive health benefit to eligible beneficiaries. Although DOD provided initial guidance and strategic goals in its November 2006 memorandum, it did not follow leading results-oriented strategic planning guidance, which calls for establishing performance measures. As we have previously reported, effective implementation includes adopting leading practices for results-oriented strategic planning and reporting, such as establishing specific and measurable performance measures for the transformed organization. In addition, intermediate measures can be used to provide information on interim results and show progress toward intended results. DOD provided initial guidance, which includes strategic goals to assist in the implementation of the governance transformation. For example, the memorandum provided that lessons learned from the consolidation and realignment of health care delivery within the National Capital Region and San Antonio be used as the basis for establishment of similar structures in other multiservice medical markets. However, MHS officials stated that Health Affairs did not fully monitor and evaluate the progress of its governance initiatives using performance measures. Specifically, DOD leaders stated that specific measures to evaluate the outcomes of the different governance approaches taken in these two locations had not been established. Therefore, DOD lacked information that would be useful in deciding if governance changes are needed in other multiservice medical markets. Such measurable outcomes provide the information DOD needs to determine if it is meeting its goals, make informed decisions, and track the progress of the governance transformation activities.
The November 2006 memorandum provided a brief, initial 3-year timetable for the implementation of the governance transformation initiatives; however, this timetable was high level and did not contain interim dates indicating progress. Aside from meeting the approval date of the memorandum, MHS officials did not meet any of the other major dates that were set in the timetable. We have reported that establishing implementation goals and a timeline is critical to ensuring success, as well as pinpointing performance shortfalls and gaps and suggesting midcourse corrections. A transformation, such as changing DOD’s MHS governance, is a substantial commitment that could take years before it is completed and therefore must be carefully managed and monitored to achieve success. At a minimum, successful mergers and transformations should have careful and thorough interim plans in place well before the effective implementation date. However, the timetable lacked any interim goals. While DOD has made progress in implementing the three initiatives that were related to BRAC recommendations, this is most likely because DOD was required by law to complete most implementation of BRAC recommendations by September 15, 2011, and to have a monitoring process in place to support these efforts. These three initiatives are (1) create governance structures to command, control, and manage the combined operations at the military treatment facilities in the National Capital Area and in the San Antonio, Texas, area; (2) create a governance structure to command, control, and manage the Joint Medical Education and Training Campus in San Antonio, Texas; and (3) colocate Health Affairs, TMA, and the services’ medical headquarters staff. However, the latest completion date for the colocation of the Health Affairs, TMA, and the services’ medical headquarters staff is the summer of 2012. DOD’s governance initiatives might have been implemented more effectively if MHS officials had maintained a long-term focus on the transformation by setting both short- and long-term goals to show progress and developing a more complete and specific timetable to guide the efforts. DOD has not established an effective and ongoing communication strategy to allow MHS officials to distribute information about its governance changes early and often. Key practices suggest that a transforming organization develop a comprehensive communication strategy that reaches out to employees, customers, and stakeholders and seeks to genuinely engage them in the transformation process. This includes communicating early and often to build trust, ensuring consistency of message, encouraging two-way communication, and providing information to meet specific needs of employees. While MHS officials communicated their transformation initiatives in the 2007 TRICARE Stakeholders’ Report, subsequent reports did not contain any references to the governance initiatives. In addition, the 2008 Military Health System Strategic Plan references a goal to “improve governance by aligning authority and accountability” as a strategic priority; however, the plan does not elaborate on how this goal will be met, and it has not been reissued since. Furthermore, the lack of a communication strategy is evident from the fact that officials in San Antonio responsible for the initiatives related to establishing the Joint Medical Education and Training Campus and San Antonio Military Health System told us they were unaware of the approved governance initiatives.
DOD has not developed an approach to communicate its governance transformation initiatives with stakeholders to ensure that they have a basic understanding of their role and involvement. Without a comprehensive communication strategy, MHS officials will remain limited in their ability to gain support for the governance transformation. Further, this lack of communication can create confusion or a lack of awareness among stakeholders, which can place the success of DOD’s initiatives at risk. DOD did not form an overarching implementation team for all seven of its initiatives to direct their progress. Our prior work has shown that a dedicated team vested with necessary authority and resources to help set priorities, make timely decisions, and move quickly to implement decisions is critical for a successful transformation. As we have previously reported, a strong and stable implementation team responsible for day-to-day management is important to ensuring that a transformation effort receives the focused, full-time attention needed to be sustained and successful. The Deputy Secretary of Defense’s November 2006 memorandum directed DOD to build such a team by 2007. Instead, according to a DOD official, it initiated independent transition teams to guide the implementation of some of its initiatives, such as the Joint Task Force National Capital Region Medical and the colocation of the MHS’s and the services’ medical headquarters staff. The lack of an overarching implementation team likely hampered progress and contributed to the uneven implementation of the initiatives. Further, officials told us that the lack of Senate-confirmed, presidentially appointed leadership also presented challenges in moving forward with governance changes. For example, the position of the Under Secretary of Defense for Personnel and Readiness was vacant from January 2009 to February 2010, and the position of Assistant Secretary of Defense for Health Affairs was vacant from April 2009 to January 2011. According to officials, these vacancies hindered progress toward greater unification, as someone temporarily filling the position may be reluctant to make major decisions to change the strategic direction of the MHS. Without involved and sustained military and civilian leadership being held accountable to guide and sustain progress of its initiatives, it may be difficult for the department to fully and successfully achieve its governance transformation. Overall, DOD did not consistently employ key management practices to help improve the implementation of its MHS governance initiatives or to evaluate the extent to which it accomplished the initiatives’ cost savings and other performance goals. As a result, the gaps we identified may have created risks that undermined DOD’s efforts as it began to implement its plans. Specifically, without key management practices in place, DOD lacks both a day-to-day and long-term focus on achieving its goals and accountability to guide and sustain progress of its initiatives. If military health care costs continue to rise at their current rate, they will consume an increasingly large portion of the defense budget and potentially divert funding away from other critical DOD priorities.
MHS medical-related and governance-related initiatives represent potential opportunities to implement more efficient ways of doing business, reduce overhead, and slow the rate of cost growth while continuing to meet the needs of military personnel, retirees, and their dependents. While DOD has developed a number of medical initiatives aimed at slowing health care cost increases, successful implementation will depend upon incorporating characteristics of results-oriented management practices, sustaining top military and civilian leadership that holds officials accountable for achieving agency goals, and establishing clear cost savings targets where applicable. By fully employing the characteristics of results-oriented management with greater attention to its investments and resources and key external factors that could affect the achievement of its goals, DOD will gain more assurance that it is effectively managing its health care initiatives and saving money. Additionally, opportunities exist for an improved governance structure that can result in direct cost savings but also help to drive clinical savings. As DOD moves forward with its governance, clinical, and other initiatives, significant financial savings as well as other efficiencies may be possible with the appropriate level of management attention to ensure success. With sound decision making and analysis and by consistently employing key management practices throughout their implementation, DOD officials will be in a position to make informed decisions, to better measure DOD’s progress toward its cost and performance goals, and to be more assured that their efforts yield necessary improvements and achieve efficiencies within the MHS. In order to enhance DOD’s efforts to manage rising health care costs and demonstrate sustained leadership commitment for achieving the performance goals of the MHS’s strategic initiatives, we recommend that the Under Secretary of Defense for Personnel and Readiness direct the Assistant Secretary of Defense for Health Affairs, in conjunction with the service surgeons general, to take the following three actions:
1. Complete and fully implement, within an established time frame, the dashboards and detailed implementation plans for each of the approved health care initiatives in a manner that incorporates the desired characteristics of results-oriented management practices, such as the inclusion of performance metrics, investment costs, and cost savings estimates.
2. Complete the implementation of an overall monitoring process across DOD’s portfolio of initiatives for overseeing the initiatives’ progress and identifying accountable officials and their roles and responsibilities for all of its initiatives.
3. Complete the implementation of the governance initiatives that are already under way by employing key management practices in order to show financial and nonfinancial outcomes and to evaluate both interim and long-term progress of the initiatives.
In written comments provided in response to a draft of this report, DOD concurred with our findings and recommendations. Regarding our first recommendation to complete and fully implement, within an established time frame, the dashboards and detailed implementation plans for each of the approved health care initiatives in a manner that incorporates the desired characteristics of results-oriented management practices, DOD concurred and noted that it anticipates that these dashboards and detailed implementation plans will be fully implemented within a year.
Regarding our second recommendation to complete the implementation of an overall monitoring process across DOD’s portfolio of initiatives for overseeing the initiatives’ progress and identifying accountable officials and their roles and responsibilities, DOD concurred and noted that such a system is being implemented and it anticipates that the overall monitoring process will also be fully implemented within a year. Regarding our third recommendation to complete the implementation of the governance initiatives that are already under way by employing key management practices in order to show financial and nonfinancial outcomes, DOD concurred and noted that the department will take further action once the legislative requirements concerning its submitted task force report on MHS governance have been fulfilled. DOD noted that it will employ key management practices in order to identify financial and nonfinancial outcomes. DOD’s comments are reprinted in their entirety in appendix II. We are sending copies of this report to the Secretary of Defense, the Deputy Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Assistant Secretary of Defense (Health Affairs), the Surgeon General of the Air Force, the Surgeon General of the Army, the Surgeon General of the Navy, the Commander, Joint Task Force, National Capital Region Medical, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To obtain general background information, we reviewed various directives, instructions, and policies that defined the organization, structure, and roles and responsibilities of the Military Health System’s (MHS) key leaders. To determine the extent to which the Department of Defense (DOD) has identified initiatives to reduce health care costs and applied results-oriented management practices in developing plans to implement and monitor them, we interviewed DOD officials concerning their approach to this challenge and examined documentation of related plans and policies. Specifically, we interviewed DOD officials in the Health Budgets and Financial Policy Office and in the Office of Strategy Management, within the Office of the Assistant Secretary of Defense for Health Affairs (Health Affairs), as well as officials in the TRICARE Management Activity concerning their 11 health care initiatives and obtained and reviewed documentation concerning their efforts. We compared DOD’s efforts to our prior work on the desirable characteristics of comprehensive, results-oriented management and noted any differences. We compared DOD’s one available implementation plan, concerning the Patient Centered Medical Home initiative, to key practices that guide federal agencies’ approaches to strategic planning efforts by examining the extent to which the implementation plan contained the desirable characteristics of a comprehensive, results-oriented management framework.
To perform this comparison, we developed a data collection instrument that contained desirable characteristics and elements that help establish comprehensive strategies using information from prior GAO work examining national strategies and logistics issues. The data collection instrument included the following six desirable characteristics:
1. Mission statement: A comprehensive statement that summarizes the main purposes of the plan.
2. Problem definition, scope, and methodology: Presents the issues to be addressed by the plan, the scope of its coverage, the process by which it was developed, and key considerations and assumptions used in the development of the plan.
3. Goals, objectives, activities, milestones, and performance measures: The identification of goals and objectives to be achieved by the plan, activities or actions to achieve those results, as well as milestones and performance measures.
4. Resources and investments: The identification of costs to execute the plan and the sources and types of resources and investments, including skills and technology and the human, capital, information, and other resources required to meet the goals and objectives.
5. Organizational roles, responsibilities, and coordination: The development of roles and responsibilities in managing and overseeing the implementation of the plan and the establishment of mechanisms for multiple stakeholders to coordinate their efforts throughout implementation and make necessary adjustments to the plan based on performance.
6. Key external factors that could affect the achievement of goals: The identification of key factors external to the organization and beyond its control that could significantly affect the achievement of the long-term goals contained in the plan. These external factors can include economic, demographic, social, technological, or environmental factors, as well as conditions that would affect the ability of the agency to achieve the results desired.
We used the data collection instrument to determine whether each characteristic was addressed, partially addressed, or not addressed. Two GAO analysts independently assessed whether each element was addressed, partially addressed, or not addressed, and recorded their assessment and the basis for the assessment on the data collection instrument. The final assessment reflected the analysts’ consensus and was reviewed by a supervisor. We also obtained available documentation and interviewed DOD officials to determine DOD’s approach for monitoring the initiatives’ progress, identifying accountable officials, and defining their roles and responsibilities. We compared DOD’s efforts to our prior work on results-oriented management and noted any differences. We did not assess the reliability of any financial data associated with this objective since we used such data for illustrative purposes to provide context for DOD’s efforts and to make broad estimates about potential cost savings from these efforts. We determined that these data did not materially affect the nature of our findings. To determine the extent to which DOD implemented its seven medical governance initiatives approved in 2006, we first identified the governance initiatives approved by the Deputy Secretary of Defense, and then we visited locations where the initiatives were being implemented to review available documentation related to the status of the efforts and interviewed officials concerning any progress made.
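To illustrate the three-level rating and reconciliation logic described above, here is a minimal sketch in code (hypothetical; GAO’s actual instrument was a document-based tool, and the names below are our own invention, not GAO’s):

```python
from enum import Enum

class Rating(Enum):
    ADDRESSED = "addressed"
    PARTIALLY_ADDRESSED = "partially addressed"
    NOT_ADDRESSED = "not addressed"

# The six desirable characteristics used as assessment criteria (abbreviated).
CHARACTERISTICS = [
    "Mission statement",
    "Problem definition, scope, and methodology",
    "Goals, objectives, activities, milestones, and performance measures",
    "Resources and investments",
    "Organizational roles, responsibilities, and coordination",
    "Key external factors",
]

def disagreements(ratings_a: dict, ratings_b: dict) -> list:
    """Return the characteristics on which two independent analysts differ;
    these are the items that would be discussed to reach consensus and then
    reviewed by a supervisor."""
    return [c for c in CHARACTERISTICS if ratings_a[c] != ratings_b[c]]

# Hypothetical ratings consistent with the report's finding for the Patient
# Centered Medical Home plan: four characteristics addressed, two partial.
analyst_a = {c: Rating.ADDRESSED for c in CHARACTERISTICS}
analyst_b = dict(analyst_a)
for c in ("Resources and investments", "Key external factors"):
    analyst_a[c] = Rating.PARTIALLY_ADDRESSED
    analyst_b[c] = Rating.PARTIALLY_ADDRESSED
analyst_b["Key external factors"] = Rating.NOT_ADDRESSED  # one item to reconcile

print(disagreements(analyst_a, analyst_b))  # -> ['Key external factors']
```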
Specifically:
To determine the extent to which command and control structures in the National Capital Region and San Antonio areas had been established, we met with officials from the Joint Task Force National Capital Region Medical and officials from the 59th Medical Wing, Brooke Army Medical Center, and the Army Medical Command in San Antonio, Texas. We obtained and reviewed the charter establishing the Joint Task Force and the memorandum of agreement establishing the San Antonio Military Health System. Based on the interviews and the reviews of the charter, memorandum of agreement, and other documents provided by officials, we determined each organization’s staffing, management structure, responsibilities and authorities, and financing. We compared the resulting organization with the guidance contained in the approved governance initiative to determine if the organization complied with the intent of the approved governance initiative. Furthermore, we interviewed officials and obtained any information available to document and determine if any financial savings had been generated from the change in governance structure.
To determine the extent to which a command and control structure for the Joint Medical Education and Training Campus had been established, we met with officials from the Medical Education and Training Campus. We obtained and reviewed the memorandum of agreement establishing the Medical Education and Training Campus. Based on this interview and the reviews of the memorandum of agreement and other documents provided by officials, we determined the organization’s staffing, management structure, responsibilities and authorities, and financing. We compared the resulting organization with the guidance contained in the approved governance initiative to determine if the organization complied with the intent of the approved governance initiative. Furthermore, we interviewed officials and obtained any information available to document and determine if any financial savings had been generated from the change in governance structure.
To determine the extent to which the MHS’s and services’ medical headquarters staff had been colocated, we interviewed officials from Health Affairs, and we obtained briefings on the status of the colocation as well as the latest Base Realignment and Closure (BRAC) business plan developed for the colocation. Furthermore, we obtained and examined the recommendation from the 2005 BRAC Commission that mandated the colocation.
To determine the extent to which DOD consolidated all medical research and development under the Army Medical Research and Materiel Command, we interviewed Health Affairs officials responsible for medical research and development funded by the Defense Health Program appropriation to learn the extent to which these funds had been consolidated under the Army Medical Research and Materiel Command. We reviewed the interservice support agreement that documents how Health Affairs and the Army Medical Research and Materiel Command agreed to interact to manage the research funded by the Defense Health Program appropriation. We reviewed DOD’s 2008 assessment of medical research and development investments conducted for the Guidance for Development of the Force (fiscal years 2010–2015) for background on how DOD handled medical research and development funds in the past and to document the need for additional research and development funds.
To determine the extent to which DOD realigned the TRICARE Management Activity to establish a Joint Military Health Services Directorate and establish an agency to focus on health insurance plan management, we interviewed Health Affairs officials to determine what efforts had been made to accomplish these two initiatives and examined the proposed Military Health System Support Activity organization put forth in the Defense Health Program’s fiscal year 2012 budget request.
To assess the extent to which DOD created governance structures that consolidate command and control of the military treatment facilities in locations with more than one DOD component providing health care services, we interviewed officials at Health Affairs to determine what efforts had been made and what future plans they may have in this area.
To determine the extent to which DOD employed key management practices while implementing the medical governance initiatives, we compared DOD’s approach to implementing the approved governance initiatives with key management practices that GAO has found to be at the center of successful mergers, acquisitions, and transformations. Although the GAO report on key practices for transformation listed nine practices, we found that six of the nine had the most relevance to our review. The six key practices we used in our analysis were to (1) ensure top leadership drives the transformation, (2) establish a coherent mission and integrated strategic goals to guide the transformation, (3) focus on a key set of principles and priorities at the outset of the transformation, (4) set implementation goals and a timeline to build momentum and show progress from day one, (5) dedicate an implementation team to manage the transformation, and (6) establish a communication strategy to create shared expectations and report related progress. We decided to exclude the following three practices: (1) the use of the performance management system to define responsibility and assure accountability for change, (2) the involvement of employees to obtain their ideas and ownership for the transformation, and (3) the adaptation of leading practices to build a world-class organization. Rather, we assessed DOD’s use of each of the six practices because DOD either employed a practice to some degree or the practice was appropriate given DOD’s position in the transformational process. However, this exception on our part does not suggest that DOD should not employ these three practices in the future. As DOD progresses through the change process, DOD should consider employing all of the key practices to help ensure a successful transformation. We determined the extent to which DOD employed the above key management practices in implementing the medical governance initiatives by comparing them to the actions taken by MHS officials. Specifically, we reviewed the November 2006 action memorandum signed by the Deputy Secretary of Defense that laid out the way ahead, provided some initial guidance, and identified the seven next steps. We examined the 2008 Military Health System Strategic Plan, the Under Secretary of Defense for Personnel and Readiness Fiscal Year 2012-2016 Strategic Plan, MHS stakeholders’ reports, the MHS Strategic Imperatives Scorecard, Defense Health Program budget estimates, memorandums of agreement, an interservice support agreement, charters, BRAC business plans, and memorandums providing the status of implementation efforts.
To complete our understanding of DOD's approach in implementing the seven approved governance initiatives, we interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, Health Affairs, the TRICARE Management Activity, the Joint Task Force National Capital Region Medical, the Medical Education and Training Campus, Brooke Army Medical Center, Army Medical Command, and Air Force 59th Medical Wing. We compared this information to key management practices for successful mergers, acquisitions, and transformations and examined any differences. Finally, we also interviewed officials who participated in the Office of the Under Secretary of Defense for Personnel and Readiness' review of military health care and its impacts on the health of the force and the Deputy Secretary of Defense's review of MHS governance options. We also obtained the final report from the Task Force on MHS Governance, analyzed its methodology and findings, and discussed the results and its recommendations with DOD officials.

We conducted this performance audit from March 2011 through February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Lori Atkinson, Assistant Director; Rebecca Beale; Stacy Bennett; Grace Coleman; Elizabeth Curda; Kevin Keith; Charles Perdue; Adam Smith; Amie Steele; and Michael Willems made key contributions to this report.
DOD's health care costs have risen significantly, from $19 billion in fiscal year 2001 to $48.7 billion in its fiscal year 2013 budget request, and are projected to increase to $92 billion by 2030. GAO reviewed DOD's efforts to slow its rising health care costs by changing selected clinical, business, and management practices. Specifically, GAO determined the extent to which DOD has (1) identified initiatives to reduce health care costs and applied results-oriented management practices in developing plans for implementing and monitoring them and (2) implemented its seven medical governance initiatives approved in 2006 and employed key management practices. For this review, GAO analyzed policies, memorandums, directives, and cost documentation, and interviewed officials from the Office of the Secretary of Defense, from the three services, and at each of the sites where the governance initiatives were under way.

The Department of Defense (DOD) has identified 11 initiatives aimed at slowing its rising health care costs, but has not fully applied results-oriented management practices in developing plans to implement and monitor its initiatives. Results-oriented management practices include developing plans that identify goals, activities, and performance measures; resources and investments; organization roles, responsibilities, and coordination; and key external factors that could affect goals, such as a decrease of funding to a program. At the conclusion of GAO's review, DOD had completed and approved a detailed implementation plan, including a cost savings estimate, for just 1 of its 11 initiatives. Developing cost savings estimates is critical to successful management of the initiatives for achieving the 2010 Quadrennial Defense Review's call for reduced growth in medical costs. DOD also has not completed the implementation of an overall process for monitoring progress across its portfolio of health care initiatives and has not completed the process of identifying accountable officials and their roles and responsibilities for all of its initiatives. Without comprehensive, results-oriented plans, a monitoring process, and clear leadership accountability, DOD may be hindered in its ability to achieve a more cost-efficient Military Health System, address its medical readiness goals, improve its overall population health, and improve its patients' experience of care.

Additionally, DOD has another set of initiatives, which were approved in 2006 to change aspects of its medical governance structure. GAO found that DOD had implemented some of the initiatives but had not consistently employed several key management practices that would have helped it achieve its stated goals and sustain its efforts. DOD approved the implementation of the seven governance initiatives with the goal of achieving economies of scale and operational efficiencies, sharing common support functions, and eliminating administrative redundancies. Specifically, DOD expected the initiatives to save at least $200 million annually once implemented; however, to date, only one initiative has projected any estimated financial savings. DOD officials stated that the other governance initiatives have resulted in efficiencies and have significant potential for cost savings. Further, the governance initiatives that are further developed were driven primarily by requirements of Base Realignment and Closure Commission recommendations and their associated statutory deadlines for completion.
Additionally, GAO found that DOD had not consistently employed several key management practices, which likely hindered the full implementation of the initiatives. For example, the initiatives' initial timeline was high-level and generally not adhered to, a communication strategy was not prepared, an overall implementation team was never established, and performance measures to monitor the implementation process and achievement of the goals were not established. With more emphasis on the key practices of a successful transformation, DOD will be better positioned in the future to realize efficiencies and achieve its goals as it continues to implement the initiatives.

GAO recommends that DOD (1) complete and fully implement comprehensive results-oriented plans for each of its medical initiatives; (2) fully implement an overall monitoring process across the portfolio of initiatives and identify accountable officials and their roles and responsibilities; and (3) complete its governance initiatives and employ key management practices to show financial and nonfinancial outcomes and evaluate interim and long-term progress. In written comments on a draft of this report, DOD concurred with each of these three recommendations.
Ex-Im is an independent agency operating under the Export-Import Bank Act of 1945, as amended. Its mission is to support the export of U.S. goods and services overseas, through the provision of loans, loan guarantees, and insurance, thereby supporting U.S. jobs. Ex-Im is generally prohibited by law from financing any credit sale of defense articles and services for any country. However, in an exception to this rule, Ex-Im was granted authority to facilitate the financing of U.S. exports of defense articles and services, provided that it determines that these items are nonlethal and primarily meant for civilian end use. Such items are known as dual-use exports. Ex-Im's Engineering and Environment Division, with assistance from the General Counsel, Congressional and External Affairs Division, and the Policy and Planning Division, is responsible for implementing the dual-use authority.

According to Ex-Im's Military Policy, its definitions of "defense articles" and "defense services" are based on who the end user is, and then by the nature of the item and the use to which it will be put. In addition, if the item is designed primarily for military use, it is presumed to be a defense article. For example, according to Ex-Im, furniture sold to a military organization for military use (e.g., for offices or homes occupied by military personnel) is deemed a defense article. However, according to Ex-Im, helicopters sold to a private firm or civilian police force are not defense articles.

According to Ex-Im policy, an export is eligible for financing as a dual-use item if convincing evidence exists that the export is nonlethal in nature and will be used mainly for civilian activities. The determination of eligibility for dual-use financing may require applicants for Ex-Im financing to provide additional information beyond the contract and transaction data the bank normally requires for a loan or guarantee. For example, before approving the three fiscal year 2012 dual-use export transactions, Ex-Im obtained written certification from each borrower that the items to be exported were nonlethal and would be primarily for civilian use. Ex-Im may also seek to corroborate the information submitted by applicants by contacting other U.S. government agencies, such as the Department of State.

The three fiscal year 2012 dual-use export transactions were as follows:

1. A satellite for Eutelsat, a French company, primarily for telecommunications (television and internet service). Ex-Im is treating this item as dual-use because a small portion of the satellite is being used by the Qatari military for communications purposes and is not being financed by Ex-Im.

2. 150 pieces of new and used construction equipment from U.S. manufacturers exported by Hoffman International, Inc., for the government of Cameroon's military engineering corps for civilian infrastructure development and a small number of military construction projects.

3. Three satellites for the government of Mexico: one fixed service satellite and two mobile service satellites. Fixed service satellites provide service to a geographically fixed end user, such as a television station, and mobile service satellites provide service to mobile end users, such as personnel in vehicles. The satellites are for civilian communications and for humanitarian and drug interdiction operations by the Mexican military.

These three transactions accounted for $1.03 billion in Ex-Im financing, or just under 3 percent of Ex-Im's $35.8 billion in financing for fiscal year 2012. The Mexico satellite transaction is the bank's single largest dual-use financing transaction to date, with a loan guarantee of $922 million.
Prior to approving the fiscal year 2012 transactions, Ex-Im vetted the Eutelsat and Cameroon transactions with the Department of State, and vetted the Mexico transaction with the U.S. embassy military attaché in Mexico.

Ex-Im generally requires buyers of dual-use exports to submit an annual "end-use certification and report," which describes the civilian and military use of the exported item(s) and includes a certification by the buyer that the item(s) are being used primarily for civilian purposes. In addition, credit and guarantee agreements for dual-use transactions may include provisions for annual, semiannual, or periodic "progress reports" and "technical operating reports" to monitor the status and usage of the items. Such information helps inform bank officials and provides them greater assurance about the exports' end use. These reports are to be submitted to Ex-Im until the loan or guarantee is repaid. According to Ex-Im officials, the credit agreements for the three fiscal year 2012 dual-use transactions were negotiated on a case-by-case basis to incorporate the specific circumstances related to the buyer and the item(s) being exported. The agreements with the governments of Mexico and Cameroon require an annual end-use certification and report. The agreements for the two satellite transactions require the buyer (Eutelsat and the government of Mexico) to submit to Ex-Im (1) periodic progress reports covering the satellite(s)' construction, launch, and in-orbit testing, and (2) technical operating reports that include information concerning the operation and maintenance of the satellite(s) and related telemetry, tracking and command stations, and transponder capacity and use.

Ex-Im addressed weaknesses in monitoring the end use of dual-use items by revising and implementing its guidance for monitoring these items, consistent with our August 2014 recommendation. The revised guidance clarified the responsibilities of Ex-Im staff for monitoring the end use of exported dual-use items after the bank's board of directors authorizes an export transaction. As a result of implementing the revised guidance, Ex-Im has received in a timely manner all documents required through June 2015 since our last report, issued in August 2014.

Ex-Im revised its 1997 memorandum on the implementation of its dual-use policy for military applications—which had not been updated prior to our 2014 review—and disseminated the revised memorandum to relevant staff on March 11, 2015. The revised memorandum adds to the responsibilities of Ex-Im staff for monitoring the end use of exported dual-use items after the bank's board of directors has authorized an export transaction. Specifically, the new memorandum calls for the engineer assigned to monitor the transaction to take the following actions:

Notify buyers. The engineer is to communicate with the bank's Asset Management Division—Ex-Im's primary contact with the buyer after the bank approves the transaction—to ensure that a process is established to notify the buyer in advance of any reporting due to be submitted to the bank.
The memorandum specifies that if a dual-use report or related document becomes overdue, the assigned engineer, in conjunction with the asset management officer, will notify the buyer and alert the bank's Office of the General Counsel within 30 days of the date when the report or related information was due so that appropriate action can be taken to expedite the submission of the required information.

Document monitoring activities. The engineer is to keep a record of his or her activities in an electronic folder, which is to contain a number of specified documents, including requirements set forth in the bank's loan or guarantee agreement pertaining to the scope and frequency of post-authorization reporting, all post-authorization reports on end use, and any correspondence between Ex-Im and the buyer or end user relating to the end use of the exports.

Determine compliance. The engineer is to make an annual determination as to whether information received during the previous year was adequate to demonstrate that the transaction complied (or failed to comply) with the requirements of the bank's dual-use policy, as set forth in the financing agreement and the bank's charter. Should the engineer determine or become concerned that the dual-use transaction is or may be out of compliance with these requirements, the engineer should notify the Vice President of the Engineering and Environment Division, who oversees the monitoring of dual-use transactions, and other Ex-Im officials; these determinations and any such referrals and related correspondence should be documented in the electronic files.

We found that the engineers responsible for monitoring the three fiscal year 2012 dual-use transactions financed by Ex-Im had taken the actions called for in the revised dual-use policy memorandum. In accordance with its revised guidance, Ex-Im has established an internal e-mail reminder system that automatically alerts the appropriate Ex-Im officials to notify the buyer that the annual end-use certification and report is coming due. The engineers received an e-mail alert several months prior to the due date and communicated with the asset management officer, who then notified the governments of Cameroon and Mexico that the due date for this documentation was approaching. As a result, Ex-Im received the 2014 end-use certification and report from each of these governments in a timely manner. Ex-Im also received on time any applicable progress and technical operating reports for the fiscal year 2012 dual-use transactions. For a summary of the reports required in the financing agreements, their due dates, and when they were received, see figure 1.

While Ex-Im's financing agreement with the government of Mexico calls for separate progress and technical operating reports for each satellite, Ex-Im officials decided to allow the government of Mexico to combine the progress reports for the two Mexican satellites that are not yet operational with the technical operating report for the one satellite already in use. The Vice President for Engineering and Environment approved this decision, and the engineer responsible for monitoring the transaction documented it. The combined report adheres to the timing specified in the Mexico financing agreement for the progress reports and exceeds the timing specified for technical operating reports. Submitting one combined report means that the technical operating report is submitted more frequently than required—twice a year instead of annually, according to Ex-Im officials.
They also stated that combining these reports is more efficient because it allows Mexican officials to submit and Ex-Im officials to review a single document instead of three separate documents submitted at different times.

In accordance with Ex-Im's revised guidance, the engineers responsible for monitoring each dual-use transaction have created an electronic folder system to document their monitoring activities. This system contains separate folders for each of the three fiscal year 2012 dual-use transactions. We examined these folders and found that they contained the required information. Each transaction folder contained, among other things, information associated with monitoring the end use of the exported item(s), such as annual end-use certification and reports; the engineer's annual determination regarding compliance with the bank's dual-use policy; correspondence determined by the engineer to be key to monitoring end use, such as an e-mail from the buyer transmitting required reports; and any applicable progress or technical operating reports or documentation of trips to inspect end use.

In accordance with Ex-Im's revised guidance, the engineers responsible for monitoring the Cameroon and Mexico transactions have each made a determination that the information received during the previous calendar year (2014) was adequate to demonstrate that the transaction complied with the requirements of the bank's dual-use policy. The engineer responsible for monitoring the Eutelsat transaction did not make such a determination because none was required. Further details for each transaction are described below.

Cameroon. The engineer monitoring the Cameroon transaction conducted a detailed analysis of equipment usage data within the 2014 calendar year in Cameroon's end-use certification and report and determined that the use of the equipment was overwhelmingly civilian in nature and thus met the bank's dual-use requirement of being used primarily for civilian purposes. After meeting with military officials during a trip to Cameroon in early June 2015, inspecting their operations, visiting current projects, and assessing current and future needs, the engineer confirmed his determination that the use of the equipment was overwhelmingly civilian in nature.

Mexico. The engineer monitoring the Mexico transaction determined that, while all three satellites are not yet fully operational, the government of Mexico continues to project a ground terminal allocation of 40 percent military and 60 percent civilian as the basis for its dual-use compliance. The Mexican government submitted an annual end-use certification and report for 2014 listing the civilian and military entities to which a total of 115,424 terminals would be allocated and the number of terminals allocated to each entity; 46,324 (40 percent) were allocated to military entities. This information is identical to that submitted by the Mexican government for 2013. Ex-Im officials stated that any end-use monitoring the bank might conduct would likely involve meeting with the U.S. military attaché in Mexico and obtaining information from a small sample of terminals. The officials acknowledged that once all the satellites are operational, it will be very difficult to ascertain actual end use, since that would involve monitoring over 100,000 ground terminals, obtaining logs showing bandwidth usage, and examining the bandwidth use associated with each terminal.
They said the engineer responsible for monitoring this transaction would instead base his annual determination on the percentage of ground terminals allocated to military entities, although they identified limitations with using this percentage measure. However, the officials said that, based on pre-approval vetting with the U.S. military attaché in Mexico, the bank determined that it is highly unlikely that the satellites would be used for mostly military purposes. According to the bank's board memorandum authorizing the Mexico satellite transaction, and as we noted in our August 2014 report, the Mexican Secretariat of Communications and Transportation—and not the Mexican military—is operating the satellites. One of the mobile service satellites was destroyed during a failed launch in May 2015, leaving two satellites—the fixed service satellite launched in 2012 and already in use, and one more mobile service satellite with a planned launch in the fall of 2015. According to the engineer, the three-satellite system requires only one mobile service satellite to operate at a time, with the other functioning as a spare; the two mobile service satellites would thus be able to exchange operational and spare roles as needed. He said the government of Mexico has not yet determined whether to replace the destroyed satellite.

Eutelsat. No annual determination regarding end use is required for the Eutelsat transaction because, as we noted in our August 2014 report, once the satellite became airborne in 2013, the number and capacity (bandwidth) of the military transponders on the satellite could not be modified. As we previously reported, these transponders, which are dedicated to nonlethal military use by the government of Qatar, were not financed by Ex-Im and represent only 6 of the satellite's 46 transponders and less than half its total transponder capacity. A senior Ex-Im official stated that if Eutelsat were to sell any of its transponders on the satellite to the Qatari government—thereby increasing the number of transponders that could potentially be used by that government's military for military purposes—the French company would have to report this information in its technical operating report and in financial reports required in its credit agreement with the bank. According to Eutelsat's two most recent technical operating reports, submitted in July 2014 and February 2015, there has been no change in transponder ownership.

Ex-Im did not finance any new exports under its dual-use authority in fiscal year 2014, according to Ex-Im officials and our review of relevant data on Ex-Im authorizations. According to Ex-Im officials, each application for financing requires the entry of numerous data elements for the application record. Several of these elements relate to whether there are any military implications in the application, and one field relates to whether or not the application would go forward under the bank's dual-use authority. The Engineering and Environment Division, which administers the bank's military policy and consequently its dual-use policy, is responsible for filling in this data field.

We provided Ex-Im a draft of this report and Ex-Im provided comments on June 12, 2015 (see app. II). Ex-Im agreed with our findings and stated that it is the bank's understanding that it has implemented the recommendation in our August 2014 report. This understanding is correct and we will close that recommendation. We also received technical comments from Ex-Im officials.
The officials updated information on their monitoring of the end use of the Cameroon equipment and clarified information about progress and technical operating reports and the expected use of the Mexican satellites. We made changes to our report in response to these comments where appropriate.

We are sending copies of this report to interested congressional committees. We are also sending copies to the President and Chairman of Ex-Im, the Secretary of Defense, and the Secretary of State. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or [email protected].

To examine how the Export-Import Bank of the United States (Ex-Im) addressed weaknesses in monitoring the end uses of the dual-use exports it continued to finance in fiscal year 2013, and to identify what dual-use exports, if any, Ex-Im reported it financed in fiscal year 2014, we reviewed Ex-Im documentation regarding its dual-use policy, including a 1997 memorandum on implementing that policy; a revised 2015 memorandum; Ex-Im documentation associated with each of the three dual-use transactions Ex-Im financed in fiscal year 2012; and data on dual-use determinations. In addition, we examined Ex-Im's new electronic filing system for dual-use transactions, including the folder and subfolder structure and enclosed documents, so that we could determine what documents were filed in the system, and interviewed the official who created the system. We also observed a demonstration of the automatic e-mail reminders that prompt the appropriate Ex-Im official to notify buyers of upcoming due dates for end-use documents. We interviewed Ex-Im officials in Washington, D.C., who review applications for the financing of dual-use exports and monitor end-user compliance with dual-use requirements, including the Vice President of the Engineering and Environment Division. We did not independently verify the information provided to Ex-Im or assess the appropriateness of the metrics or the effectiveness of the controls Ex-Im was using to determine end use. Through interviews with cognizant agency officials about Ex-Im's procedures for identifying and categorizing dual-use transactions in its Application Processing System, we determined that Ex-Im data were sufficiently reliable for the purpose of identifying dual-use exports financed under Ex-Im's dual-use authority in fiscal year 2014.

We conducted this performance audit from February 2015 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Kimberly Gianopoulos, (202) 512-8612 or [email protected]. In addition to the contact named above, Adam Cowles, Assistant Director, and Kay Halpern made key contributions to this report. In addition, Ashley Alley, Nabajyoti Barkakati, Tina Cheng, Debbie Chung, Martin De Alteriis, Michael Kaeser, and Hai Tran provided technical assistance.

Export-Import Bank: Status of Actions to Address GAO Recommendations since the Bank's 2012 Reauthorization. GAO-15-557T. Washington, D.C.: April 15, 2015.
Export-Import Bank: Monitoring of Dual-Use Exports Should Be Improved. GAO-14-719. Washington, D.C.: August 28, 2014.

Export-Import Bank: Financing of Dual-Use Exports. GAO-13-628R. Washington, D.C.: May 29, 2013.

Export Promotion: The Export-Import Bank's Financing of Dual-Use Exports. GAO-12-628R. Washington, D.C.: April 12, 2012.

Export Promotion: The Export-Import Bank's Financing of Dual-Use Exports. GAO-10-1052R. Washington, D.C.: September 15, 2010.

Export Promotion: The Export-Import Bank's Financing of Dual-Use Exports. GAO-08-1182R. Washington, D.C.: September 30, 2008.

Ex-Im Bank: The U.S. Export-Import Bank's Financing of Dual-Use Exports. GAO-07-1234R. Washington, D.C.: September 27, 2007.

Export-Import Bank: The U.S. Export-Import Bank's Financing of Dual-Use Exports. GAO-01-1110R. Washington, D.C.: August 31, 2001.

Export-Import Bank: The U.S. Export-Import Bank's Financing of Dual-Use Exports. NSIAD-00-231R. Washington, D.C.: September 1, 2000.

International Affairs: U.S. Export-Import Bank's Financing of Dual-Use Exports. NSIAD-99-241R. Washington, D.C.: September 1, 1999.

International Affairs: U.S. Export-Import Bank's Financing of Dual-Use Exports. NSIAD-98-244R. Washington, D.C.: September 1, 1998.

U.S. Export-Import Bank: Process in Place to Ensure Compliance With Dual-Use Export Requirements. NSIAD-97-211. Washington, D.C.: July 17, 1997.
Since 1994, Ex-Im has had the authority to facilitate the financing of dual-use exports, which include construction equipment used by foreign militaries to build roads. After a 9-year hiatus, Ex-Im financed three dual-use exports in fiscal year 2012, accounting for $1.03 billion, or just under 3 percent of Ex-Im's $35.8 billion financing for that year. The Consolidated and Further Continuing Appropriations Act, 2015, extends a provision for GAO to report annually on the end uses of dual-use exports financed by Ex-Im during the second preceding fiscal year.

In August 2014, GAO reported that monitoring-related documents from borrowers required by the financing agreements were missing or late and that Ex-Im's dual-use monitoring policy did not specify what actions Ex-Im officials should take if the bank did not receive the required documents. GAO recommended that Ex-Im establish steps staff should take in cases where borrowers do not submit required end-use documentation within the time frames specified in their financing agreements and ensure that these efforts are well documented. Ex-Im agreed with GAO's recommendation and revised its guidance.

This report (1) examines how Ex-Im addressed weaknesses in monitoring the end uses of the dual-use exports it finances and (2) identifies what dual-use exports, if any, Ex-Im reported it financed in fiscal year 2014. GAO reviewed Ex-Im documents and interviewed Ex-Im officials.

The Export-Import Bank of the United States (Ex-Im) addressed weaknesses in monitoring the end use of exported "dual-use" items by revising and implementing its guidance for monitoring these items, as GAO recommended in August 2014. Dual-use items are defense articles and services that Ex-Im has determined are nonlethal and primarily meant for civilian use. Specifically, Ex-Im revised its 1997 memorandum on the implementation of its dual-use policy for military applications and disseminated it to relevant staff on March 11, 2015. The updated memorandum clarified the responsibilities of Ex-Im staff for monitoring end use, and GAO found that bank staff have now taken the following steps: notified buyers in advance of required end-use reporting due dates; documented their monitoring activities in electronic files; and made determinations, in what is to be an annual process, as to whether the information received was adequate to demonstrate that the transaction complied or failed to comply with the bank's dual-use policy. As a result, Ex-Im has received in a timely manner all documents required since GAO's last report, issued in August 2014.

Ex-Im did not finance any new exports under its dual-use authority in fiscal year 2014, according to Ex-Im authorizations data and Ex-Im officials.
For tax years beginning after 2000, the Economic Growth and Tax Relief Reconciliation Act of 2001 applied a new 10-percent income tax rate to a portion of an individual's income that was previously taxed at 15 percent. To stimulate the economy more rapidly than would be achieved if taxpayers had to wait until they filed their tax year 2001 tax returns to realize the full impact of this rate reduction, the Act provided for eligible taxpayers to receive an advance 2001 tax refund. To be eligible for an advance refund, taxpayers (1) had to have a federal income tax liability on their tax year 2000 return, (2) could not be claimed as a dependent on someone else's tax year 2000 return, and (3) could not be a nonresident alien.

The amount of advance tax refund that taxpayers could receive depended on the filing status and amount of taxable income shown on the taxpayer's tax year 2000 return. The maximum refund amount was $600 for a married couple filing jointly or a qualified widow(er), $500 for a head of household, and $300 for a single individual or married person filing separately.

Before issuance of the advance tax refund checks, IRS was to send every individual who filed a return for tax year 2000 a notice either informing them of the refund amount they were to receive and the week in which they were to receive it or telling them that they were ineligible for a refund and why. FMS was to issue the advance tax refund checks for IRS with assistance from the Defense Finance and Accounting Service (DFAS). Before issuing any check, IRS and FMS were to reduce the amount of the check by the amount of any delinquent federal tax or certain other debts, such as delinquent child support payments, owed by the taxpayer.

Most advance refund checks were to be issued over a 10-week period from the week of July 23, 2001, through the week of September 24, 2001, based, in general, on the last two digits of a taxpayer's Social Security number (SSN). For example, taxpayers with 00 through 09 as the last two digits of their SSN were to receive their checks the week of July 23, 2001, while taxpayers with 90 through 99 as the last two digits of their SSN were to receive their checks the week of September 24, 2001. (This mapping is illustrated in the sketch at the end of this passage.) Taxpayers who filed their tax year 2000 returns after April 16 were to receive their advance tax refund checks later in the fall. All checks were to be issued by December 31, 2001.

IRS, through FMS, mailed out advance tax refunds according to a schedule that called for taxpayers to begin receiving checks the week of July 23, 2001. As shown in table 1, from then through the end of September, about 84.1 million taxpayers were to have received about $35.5 billion in advance tax refunds.

According to IRS officials, it cost IRS about $104 million to administer the advance tax refund program through the end of fiscal year 2001. Included in these costs were $36 million for contract costs, $33 million for postage, $30 million for labor, and $5 million for printing. IRS expected to incur an additional $12 million in labor costs during fiscal year 2002 related to the advance tax refunds, because refund payments were to be made through the end of December 2001. FMS expected to incur about $34 million in total costs to issue the checks on behalf of IRS, including the assistance provided by DFAS.
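The SSN-based schedule amounts to a simple lookup. The sketch below is a minimal Python illustration with hypothetical function and variable names (this report does not describe IRS's or FMS's actual systems); it maps the last two digits of an SSN to the scheduled mailing week for taxpayers who filed by the April 16 deadline:

```python
from datetime import date, timedelta

# Week of July 23, 2001: first of the 10 weekly mailing batches.
FIRST_WEEK = date(2001, 7, 23)

def mailing_week(ssn: str) -> date:
    """Return the Monday of the week a check was scheduled to be mailed.

    Digits 00-09 map to the week of July 23, 2001; 10-19 to the following
    week; and so on, up to 90-99, the week of September 24, 2001. Applies
    only to taxpayers who filed their tax year 2000 returns by April 16.
    """
    last_two = int(ssn.replace("-", "")[-2:])  # e.g., "123-45-6789" -> 89
    batch = last_two // 10                     # 0..9, one batch per ten digits
    return FIRST_WEEK + timedelta(weeks=batch)

print(mailing_week("123-45-6789"))  # 2001-09-17 (batch 8, week of September 17)
```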
To administer the advance tax refund program, IRS, among other things, had to develop the computer programming necessary to determine taxpayer eligibility for the refund and the amount of each refund, including any related federal tax offset; arrange for printing and mailing of notices that informed taxpayers whether they would receive a refund; prepare adjustment notices for refunds that were offset due to federal tax debts; respond to telephone calls and correspondence from taxpayers concerning the refund; and resolve undelivered and returned advance refund checks.

According to an IRS official, it took about 3 months between March 2001 and June 2001 to develop the necessary computer programming and to arrange for printing and mailing of notices needed to implement the advance tax refund program. IRS temporarily reassigned staff from other functions to assist with taxpayer telephone calls and correspondence related to the advance tax refunds. For example, IRS recalled furloughed staff at its forms distribution centers to assist taxpayers who called IRS with questions about the advance refund that were relatively easy to answer. In addition, IRS used submission processing staff from its Philadelphia Service Center to help respond to over 90,000 written inquiries from taxpayers concerning the advance tax refunds.

For any taxpayer whose account involves a federal tax debt, IRS is to offset the advance tax refund due to the taxpayer, either in whole or in part, to collect the debt. In addition, FMS is to offset the advance tax refunds to collect other types of debt via the Treasury Offset Program. The notice IRS sent to taxpayers who were eligible to receive an advance tax refund included a statement that the amount of the refund could be reduced by any outstanding debt owed, such as past due federal and state taxes or child support. According to data obtained from IRS and FMS, the two agencies had offset the advance tax refunds by almost $2.7 billion because of taxpayer debt. As of September 30, 2001, IRS had offset about $2.1 billion to recover delinquent federal tax. As of October 31, 2001, FMS had offset about $469 million for the following reasons: $263 million for delinquent child support, $191 million for federal debts other than delinquent taxes, and $15 million for delinquent state taxes.

The following problems were encountered in implementing the advance tax refund program:

A computer programming problem resulted in about 523,000 taxpayers receiving inaccurate refund notices.

About 5.3 million taxpayers received untimely refund notices because of IRS' procedures for processing returns and the way programming was developed to generate advance refund notices.

About 2 million notices were returned to IRS due to incorrect addresses and, as of October 30, 2001, IRS had about 300,000 undeliverable checks for which it was seeking updated addresses.

Taxpayers who called IRS during the advance tax refund period had greater difficulty reaching IRS assistors than did taxpayers who called during the same timeframe in 2000 or during the 2001 tax filing season.

A small number of taxpayers received duplicate checks in the early stages of the program.

The Treasury Inspector General for Tax Administration (TIGTA) identified an IRS computer programming problem that resulted in about 523,000 taxpayers receiving inaccurate advance refund notices. As noted earlier, the maximum amount of a taxpayer's advance refund was to be $600, $500, or $300 depending on the taxpayer's filing status.
However, the actual amount of the advance refund was limited to the lesser of (1) 5 percent of the taxable income on the taxpayer's tax year 2000 return and (2) the net income tax from the tax year 2000 return after subtracting nonrefundable credits, such as the credit for child and dependent care expenses, child tax credit, credit for the elderly, and education credit. TIGTA found that IRS had erred in developing its computer program by not limiting advance refund amounts to the net income tax after credits, thus resulting in the inaccurate advance refund notices. (The intended computation is illustrated in the sketch at the end of this passage.) TIGTA informed IRS of this problem on July 3, 2001, and IRS was able to correct the problem before any advance refunds were issued—thus avoiding overpayments of about $118 million. IRS also sent corrected notices to the affected taxpayers.

TIGTA also determined that 5.3 million taxpayers who filed their tax returns by the April 16 filing deadline would have delays from 1 week to 9 weeks in receiving their advance refund notices. TIGTA attributed the delays to the following two reasons. First, IRS' normal procedure is to process income tax returns filed by taxpayers who are due to receive a tax refund before processing income tax returns filed by other taxpayers. Thus, many nonrefund returns filed by April 16 had not been processed by the time IRS prepared the list of taxpayers who would receive the first mailout of advance refund notices. Second, when IRS developed the programming to generate the advance refund notices for taxpayers affected by the above processing procedure, it decided to have the notices mailed to the taxpayers just before they were to receive their advance refund checks instead of having the notices mailed as soon as the tax return was processed. In response to a TIGTA recommendation, IRS issued a news release explaining that some taxpayers might experience a delay in receiving their advance tax refund notices.

One problem that IRS encountered throughout the implementation of the advance tax refund program involved undeliverable refund notices and checks due to incorrect addresses. Undeliverable advance refund notices were to be returned to IRS' Philadelphia Service Center, and undeliverable advance refund checks were to be returned to the FMS payment center from which they were issued. Through September 30, 2001, almost 2 million advance tax refund notices were returned to IRS as undeliverable, including about 1.1 million notices sent to taxpayers who were to receive a refund and about 900,000 notices sent to taxpayers who were ineligible for a refund. According to an IRS official, the undeliverable notices were sorted and counted by type of notice and then destroyed. Because these notices were sent to taxpayers via first class mail, the Postal Service was to forward notices for which taxpayers had provided an address change. Therefore, IRS decided that it would not be cost effective to follow up on the undeliverable notices. Even if a notice to a taxpayer who was to receive an advance tax refund was returned as undeliverable, a check would still have been sent to that taxpayer.

In a news release dated October 30, 2001, IRS indicated that there were almost 300,000 undeliverable advance refund checks valued at about $95 million for which they urged taxpayers to contact IRS so that the checks could be reissued to the correct address. According to an FMS official, undeliverable tax refund checks are cancelled and information concerning the cancelled checks is sent to IRS.
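To make the computation error concrete, the following minimal sketch applies all three limits described above; the names are hypothetical, and this is an illustration of the rule, not IRS's actual program. Omitting the third limit, the net income tax after nonrefundable credits, reproduces the kind of overstatement TIGTA identified:

```python
# Maximum advance refund by tax year 2000 filing status.
MAX_BY_STATUS = {
    "married_filing_jointly": 600,
    "qualifying_widower": 600,
    "head_of_household": 500,
    "single": 300,
    "married_filing_separately": 300,
}

def advance_refund(filing_status, taxable_income, net_tax_after_credits):
    """Advance refund for an eligible taxpayer: the lesser of the
    filing-status maximum, 5 percent of taxable income, and the net
    income tax after nonrefundable credits."""
    return min(MAX_BY_STATUS[filing_status],
               0.05 * taxable_income,
               max(net_tax_after_credits, 0))

# A single filer with $20,000 of taxable income but only $250 of net tax
# after credits is limited to $250. A program that skipped the credits
# limit would have computed min($300, $1,000) = $300 instead.
print(advance_refund("single", 20_000, 250))  # 250
```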
When a check is cancelled, IRS is to research the taxpayer's account to determine whether there is an updated address to which another check can be sent. IRS updates taxpayer addresses each week through a National Change of Address Database maintained by the Postal Service. Taxpayers can also update their addresses with IRS by submitting an IRS Form 8822, "Change of Address." In addition, IRS authorized its customer service representatives to accept change of address information over the telephone from taxpayers who call about their advance tax refund. According to IRS Philadelphia Service Center officials, much of the written correspondence they received involved address changes from taxpayers who wanted to ensure that they would receive their advance refunds. Although the number of undeliverable advance refund checks was substantial, the percentage of checks returned as undeliverable (less than 1 percent) was less than the approximate 4 percent rate that an FMS official indicated was normal for undeliverable tax refunds.

At the time we prepared this report, we had data on the accessibility of IRS' telephone assistance for the first 3 months of the advance tax refund period. The data showed that taxpayers calling IRS during those 3 months had problems reaching an assistor. Overall, when compared with the same 3-month period in 2000 and with the 2001 tax filing season, the accessibility of IRS' telephone assistance during the advance tax refund period generally declined.

IRS had a two-pronged approach for responding to the increased demand for telephone assistance that it expected during the advance tax refund period. The first prong of IRS' strategy was to handle as many calls as possible through automation, thereby freeing up assistors to handle calls that required live assistance. To accomplish this, IRS publicized its TeleTax phone number on notices sent to taxpayers and through an announcement played on IRS' main telephone assistance line. The TeleTax line had recorded information on the rebate program and an interactive service that told the taxpayer the expected date the check would be mailed based on the last two digits of the Social Security number entered by the taxpayer. The second prong of IRS' strategy was to increase the staffing devoted to telephone assistance. We are continuing to obtain information on the extent of IRS' efforts in this regard. Among other things, however, IRS' forms distribution centers recalled 450 employees from furlough and trained them to handle simpler calls related to the rebate.

Despite IRS' efforts to meet the increased demand for telephone assistance during the advance tax refund period, taxpayers had greater difficulty in accessing that assistance. IRS has four measures for judging its performance in providing access to telephone assistance. As shown in table 2, IRS' performance during the first 3 months of the advance tax refund period declined for all four measures compared with the same time period in 2000 and declined for three of the four measures compared with the 2001 filing season. We are inquiring into reasons for the decline in telephone accessibility during the advance tax refund period and will include that information in our final report. However, one possible explanation is that the demand for telephone assistance exceeded IRS' expectations. Although we did not have usable information on IRS' expectations when we wrote this report, we did have IRS data on actual demand.
IRS measures demand in two ways—total call attempts and unique telephone number attempts. According to IRS data for both of those measures, the demand for telephone assistance was about twice as high during the first 3 months of the advance tax refund period as it was during the same time period in 2000. In the first 3 months of the advance tax refund period, IRS received about 23.8 million total call attempts and about 13.3 million unique number attempts, compared to about 11.4 million and 7.1 million, respectively, during the same period in 2000. In commenting on a draft of this report, the Commissioner of Internal Revenue said that although IRS made "extraordinary efforts to handle advance refund calls," the high volume of telephone calls resulted in a reduced level of service. The Commissioner's letter, which is in appendix I, cites various statistics to document the increase in demand. Because those statistics include calls to TeleTax, they differ from the statistics cited in the preceding paragraph.

Another problem related to the advance tax refunds was identified within the first 2 weeks of the advance refund period and promptly corrected. The problem involved duplicate checks sent to taxpayers by one of the three DFAS centers that assisted FMS in issuing the advance tax refund checks. The problem surfaced when two taxpayers who had received duplicate checks tried to cash the second check and a third taxpayer notified IRS about receiving a duplicate check. Once the problem was identified, FMS decided to suspend use of the DFAS center from which the duplicate checks had emanated. According to FMS, due to significantly lower check volumes than originally anticipated, the DFAS center was subsequently retained as a contingency site, rather than being returned to full check production. According to an FMS official, as of November 2001, about 25 instances of duplicate checks had been identified. Of the 25 taxpayers who received duplicate checks, 14 taxpayers had either fully repaid the extra payment or had returned the duplicate check, and 2 taxpayers had partially repaid the extra payment. FMS was in the process of recovering the duplicate payments from the other 9 taxpayers.

In commenting on a draft of this report, the Commissioner of Internal Revenue and the Commissioner of FMS provided some clarifying information that we used to revise the report where appropriate. In commenting on the advance tax refund program in general, the Commissioner of Internal Revenue said that IRS was able to accomplish what it did by "applying the maximum resources possible and giving it top priority management attention." The Commissioner of FMS said that the program was "extremely successful particularly considering the time constraints placed upon us to plan and execute this critically important and highly visible program."

As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. This report was prepared under the direction of David J. Attianese, Assistant Director. If you have any questions about this report, please contact me or Mr. Attianese at (202) 512-9110. Key contributors to this assignment were Ronald W. Jones and Robert C. McKay.
The Economic Growth and Tax Relief Reconciliation Act of 2001 directed the Treasury to issue advance 2001 tax refunds to individual taxpayers who filed a tax year 2000 return. As a result, the Internal Revenue Service (IRS) had to identify eligible taxpayers so that checks could be sent to these taxpayers by December 31, 2001. The Department of the Treasury's Financial Management Service was to issue the checks on behalf of IRS, with the first checks to be received during the week of July 23, 2001. As of September 30, 2001, 84 million taxpayers were to have received $36 billion in advance tax refunds. IRS offset about $2.1 billion from these advance tax refunds to recover delinquent federal taxes. IRS spent $104 million to run the program through September 2001, which included IRS staffing costs as well as the costs associated with contracts, postage, and printing.

The Treasury Inspector General for Tax Administration identified two initial problems that affected either the accuracy or timeliness of the advance refund notices. One involved computer programming errors that resulted in 523,000 taxpayers receiving notices indicating that they would receive larger advance tax refunds than they were entitled to receive. The IG also determined that 5.3 million taxpayers who had filed their tax returns by the April 16 filing deadline would have delays of up to nine weeks in receiving their advance refund notices. Two problems that continued throughout the advance tax refund period involved (1) undeliverable refund notices and checks due to incorrect addresses and (2) taxpayer difficulties in reaching IRS telephone assistors. Another problem that was quickly identified and corrected during the early stages of the advance tax refund period involved duplicate checks sent to about 25 taxpayers.
The financial regulatory system consists of numerous regulators with varying missions and functions, which promulgate regulations via federal rulemakings. In particular, the Dodd-Frank Act includes specific rulemaking and coordination requirements.

In the banking industry, the specific regulatory configuration generally depends on the type of charter the banking institution chooses. Depository institution charter types include commercial banks, which originally focused on the banking needs of businesses but over time have broadened their services; savings associations (also known as thrifts), which include federal savings banks, certain state savings banks, and savings and loans, and which were originally created to serve the needs—particularly the mortgage needs—of those not served by commercial banks; and credit unions, which are member-owned cooperatives run by member-elected boards with a historical emphasis on serving people of modest means. All depository institutions that have federal deposit insurance have a federal prudential regulator, which generally may issue regulations and take enforcement actions against institutions within its jurisdiction. The prudential regulators are identified in table 1.

Holding companies that own or control a bank or thrift are subject to Federal Reserve supervision. The Bank Holding Company Act of 1956 and the Home Owners' Loan Act set forth the regulatory frameworks for bank holding companies and savings and loan holding companies, respectively. The Dodd-Frank Act made the Federal Reserve the regulator of savings and loan holding companies and amended the Home Owners' Loan Act and the Bank Holding Company Act to create certain similar requirements for bank and savings and loan holding companies.

The securities and futures markets are regulated under a combination of self-regulation (subject to oversight by the appropriate federal regulator) and direct oversight by SEC and CFTC, respectively. SEC regulates the securities markets, including participants such as securities exchanges, broker-dealers, investment companies, and certain investment advisers and municipal advisors. SEC's mission is to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation. SEC also oversees self-regulatory organizations—including securities exchanges, clearing agencies, and the Financial Industry Regulatory Authority—that have responsibility for overseeing securities markets and their members; establishing standards under which their members conduct business; monitoring business conduct; and bringing disciplinary actions against members for violating applicable federal statutes, SEC's rules, and their own rules.

CFTC is the primary regulator for futures markets, including futures exchanges and intermediaries, such as futures commission merchants. CFTC's mission is to protect market users and the public from fraud, manipulation, abusive practices, and systemic risk related to derivatives subject to the Commodity Exchange Act, and to foster open, competitive, and financially sound futures markets. CFTC oversees the registration of intermediaries and relies on self-regulatory organizations, including the futures exchanges and the National Futures Association, to establish and enforce rules governing member behavior. CFTC and SEC jointly regulate security futures, which generally refers to futures on single securities and narrow-based security indexes.
In addition, Title VII of the Dodd-Frank Act expands regulatory responsibilities for CFTC and SEC by establishing a new regulatory framework for over-the-counter swaps. The act authorizes CFTC to regulate swaps and SEC to regulate security-based swaps with the goals of reducing risk, increasing transparency, and promoting market integrity in the financial system. CFTC and SEC share authority over mixed swaps—that is, security-based swaps that have a commodity component.

The Dodd-Frank Act transferred consumer protection oversight and other authorities over certain consumer financial protection laws from multiple federal regulators to CFPB, creating a single federal entity to, among other things, help ensure consistent enforcement of federal consumer financial laws. The Dodd-Frank Act charged CFPB with the following responsibilities, among others: ensuring that consumers are provided with timely and understandable information to make responsible decisions about financial transactions; ensuring that consumers are protected from unfair, deceptive, or abusive acts and practices and from discrimination; monitoring compliance with federal consumer financial law and taking appropriate enforcement action to address violations; identifying and addressing outdated, unnecessary, or unduly burdensome regulations; ensuring that federal consumer financial law is enforced consistently, in order to promote fair competition; ensuring that markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation; and conducting financial education programs. Furthermore, the Dodd-Frank Act gave CFPB supervisory authority over certain nondepository institutions, including certain kinds of mortgage market participants, private student loan lenders, and payday lenders.

Several regulatory analyses may apply to independent regulators, including the financial regulators. The regulators are subject to compliance with various requirements as part of their rulemakings, such as those in the Paperwork Reduction Act (PRA); the Regulatory Flexibility Act (RFA), as amended by the Small Business Regulatory Enforcement Fairness Act of 1996; and the Congressional Review Act. PRA requires federal agencies to (1) seek public comment on proposed collections and (2) submit proposed collections for review and approval by OMB. According to the Office of Information and Regulatory Affairs' PRA guidance, these actions must occur before federal agencies require or request information from the public. RFA requires that federal agencies consider the impact of certain regulations they issue on small entities and, in some cases, alternatives to lessen the regulatory burden on these entities. In some cases, PRA and RFA also require agencies, including financial regulators, to assess various effects and costs, respectively, of their rules. However, RFA, like PRA, does not require the agencies to conduct formal benefit and cost analyses. The Small Business Regulatory Enforcement Fairness Act of 1996, which amended RFA, generally includes judicial review of compliance with certain provisions of RFA and requires agencies, including financial regulators, to develop one or more small entity compliance guides for each final rule or group of related final rules for which the agency must prepare a regulatory flexibility analysis.
In addition, the act requires CFPB to convene a small business review panel, when preparing an initial regulatory flexibility analysis in connection with a proposed rule, to gather recommendations and advice from representatives of small business entities about any projected increase in the cost of credit for small entities and any significant alternatives to the proposed rule. Under the Congressional Review Act, before rules can take effect, agencies (including financial regulators) must submit their rules to Congress and the Comptroller General, and rules deemed major by OMB generally may not become effective until 60 days after the rules are submitted. In addition to these requirements, authorizing or other statutes require certain financial regulators to consider specific benefits, costs, and effects of their rulemakings (see table 2). In contrast, E.O. 12,866, supplemented by E.O. 13,563, requires executive agencies (which do not include independent regulators such as financial regulators), to the extent permitted by law and where applicable, to provide more formal cost-benefit analyses that (1) assess costs and benefits of available regulatory alternatives and (2) include both quantifiable and qualitative measures of benefits and costs in their analysis, recognizing that some costs and benefits are difficult to quantify. Such analysis, according to OMB, can enable an agency to learn if the benefits of a rule are likely to justify the costs and discover which possible alternatives would yield the greatest net benefit or be most cost-effective. In 2003, OMB issued Circular A-4 to provide guidance to executive agencies on developing regulatory analysis as required by E.O. 12,866. The circular defines good regulatory analysis as including a statement of the need for the proposed regulation, an assessment of alternatives, and an evaluation of the costs and benefits of the proposed regulation and the alternatives. It also standardizes the way costs and benefits of regulatory actions should be measured and reported. FSOC and the Department of the Treasury (Treasury), which are not financial regulators, are subject to E.O. 12,866 and Circular A-4. However, as we have reported, some independent agencies consult Circular A-4. As we have noted in prior reports, effective coordination can help regulators minimize or eliminate staff and industry burden, administrative costs, conflicting regulations, unintended consequences, and uncertainty among consumers and markets. The Dodd-Frank Act imposes interagency coordination or consultation requirements and responsibilities on regulators or in connection with certain rules, including the following examples: Under Title VII, SEC and CFTC must coordinate and consult with each other and with prudential regulators (for the purposes of Title VII, these regulators are the Federal Reserve, OCC, FDIC, Farm Credit Administration, and Federal Housing Finance Agency), to the extent possible, before starting a rulemaking or issuing an order on swaps, security-based swaps, swap entities, or security-based swap entities. This requirement is designed to ensure regulatory consistency and comparability across the rules or orders, to the extent possible. Title VII also directs CFTC, SEC, and the prudential regulators, as appropriate, to coordinate with foreign regulators on establishing consistent international standards on the regulation of swaps, security-based swaps, swap entities, and security-based swap entities. 
In addition, the Dodd-Frank Act requires SEC and CFTC, in consultation with the Federal Reserve, to jointly adopt certain rules under Title VII, and if Title VII requires CFTC and SEC to issue joint regulations to implement a provision, any guidance on or interpretation of the provision is effective only if issued jointly and after consultation with the Federal Reserve. Under section 1022, before proposing a rule and during the comment process, CFPB must consult with the appropriate prudential regulators or other federal agencies on consistency with prudential, market, or systemic objectives administered by such agencies. We found that for rules that were issued and became effective between July 23, 2015, and July 22, 2016, agencies reported conducting PRA and RFA analyses where required. In addition, although not required to do so, financial regulators told us that they generally follow OMB’s guidance for developing regulatory analysis (Circular A-4). We found that the agencies included most of the key elements of OMB’s guidance in their analyses for select major rules during this review period. We recommended in our December 2011 report that federal financial regulators more fully incorporate OMB’s regulatory guidance into their rulemaking policies. Of the 30 Dodd-Frank Act rules within our scope, the agencies reported conducting regulatory analysis for PRA on 12 rules and conducted a regulatory analysis or provided a certification that such an analysis was not needed under RFA for 21 rules as part of their rulemaking process. These rules were issued individually or jointly by CFTC, CFPB, FDIC, the Federal Reserve, OCC, and SEC. (See app. II for a list of the regulations within the scope of our review.) In examining the regulatory analyses for the 12 rules, we found that the agencies reported conducting the regulatory analysis pursuant to PRA when required—that is, when they needed to minimize the paperwork burden of their rulemakings and evaluate whether a proposed collection was necessary for the proper performance of the agency’s functions. The PRA analyses for all 12 rules included a discussion of the analysis the agencies performed and provided estimates of the paperwork burden on affected entities. For instance, for the joint rule on the registration and supervision of appraisal management companies, the regulators provided estimates of the total number of states and appraisal management companies affected and estimated total burden hours for reporting and recordkeeping requirements for these entities. In another rule, CFPB determined that permitting electronic filing of reports would result in a minimal one-time burden associated with a new method of submission, but estimated savings over time due to the reduction of paper filings each year. The rule allows land developers to choose whether to submit certain filings, such as annual reports, either on paper or electronically. For another rule on business conduct standards for security-based swap dealers and participants, SEC performed a PRA analysis in its proposed rule and updated certain estimates for security-based swap market participants and other entities for the final rule to reflect the most recent data available. For the remaining 18 rules, the agencies determined that they were not required to conduct the regulatory analyses pursuant to PRA or that PRA was not applicable. 
In some cases, the agencies were not required to conduct regulatory analyses pursuant to PRA because they determined that no new collection of information would be required. For instance, CFTC’s rule on trade options stated that the final rule would not impose any new information collection requirements requiring OMB’s approval under PRA. In other cases, the agencies determined that the PRA was not applicable. For example, the Federal Reserve’s rule on unfair or deceptive acts or practices stated that the final rule contains no requirements subject to the PRA. Under the RFA, when an agency proposes a rule that would have a significant economic impact on a substantial number of small entities, the rule must be accompanied by an impact analysis, known as an initial regulatory flexibility analysis (IRFA) when it is published for public comment. The agency must publish a final regulatory flexibility analysis (FRFA) with the final rule. Alternatively, in the appropriate circumstances, an agency may certify that its rule will not have a significant economic impact on a substantial number of small entities. The certification must be published in the Federal Register “along with a statement providing the factual basis for such certification.” In one instance, a regulator—CFPB—determined that the final rule on integrated mortgage disclosures would have a significant impact on a substantial number of small entities. It conducted the regulatory flexibility analysis and estimated the number of affected entities in certain mortgage transactions and the benefits and costs to small entities. For 6 rules, the regulators conducted a FRFA and concluded that the rule would not have a significant economic impact on a substantial number of small entities. For example, the Federal Reserve, in a rule that established minimum margin and capital requirements for certain swap entities, considered the potential impact on small entities in accordance with a FRFA, and based on its analysis, believed that the rule would not have a significant economic impact on a substantial number of small entities. For 10 rules, the regulators stated that RFA was not applicable. For example, CFPB stated in its rule amending certain filing requirements under the Interstate Land Sales Full Disclosure Act that because no notice of proposed rulemaking is required, RFA does not require an initial or final regulatory flexibility analysis. In another example, FDIC determined that its rule on assessments relates directly to the rates imposed on insured depository institutions for deposit insurance. For this reason, it determined that the requirements of RFA do not apply. FDIC explained that certain types of rules, such as rules of particular applicability relating to rates or corporate or financial structures, or practices relating to such rates or structures, are expressly excluded from the definition of the term “rule” for purposes of RFA. In the remaining cases, the regulators certified that the regulations would not have a significant economic impact on a substantial number of small entities per section 605(b) of the RFA. In doing so, each regulator provided a basis supporting its certification. 
For example, SEC’s rule on business conduct standards for swap dealers and participants noted that because (1) large financial institutions generally were the entities engaged in the dealing activity involving security-based swaps, and (2) major security-based swap participants were not small entities, its security-based-swap entity registration rules and forms, as adopted, would not have a significant economic impact on a substantial number of small entities for purposes of RFA. Finally, of the 30 regulations that were issued and became effective between July 23, 2015, and July 22, 2016, the agencies identified 9 as being major rules. Pursuant to the Congressional Review Act, a major rule is one that results in or is likely to result in an annual impact on the economy of $100 million or more, a major increase in costs or prices, or significant adverse effects on competition, employment, investment, productivity, innovation, or on the ability of U.S.-based enterprises to compete with foreign-based enterprises in domestic or export markets. Specifically, CFTC issued 1 major rule; CFPB issued 1 major rule; FDIC issued 1 major rule; the Federal Reserve issued 1 major rule; SEC issued 4 major rules; and 1 major rule was issued jointly (Farm Credit Administration, FDIC, Federal Housing Finance Agency, Federal Reserve, and OCC). Independent federal financial regulators are not required to follow OMB’s Circular A-4 when developing regulations, but they told us that they try to follow this guidance in principle or spirit. Regulators generally included the key elements of OMB’s guidance in their regulatory analyses for these major rules. To assess the extent to which the regulators follow Circular A-4, we examined 5 major rules (see table 3 for a description of these rules). Specifically, we examined whether the regulators (1) identified the problem to be addressed by the regulation; (2) established the baseline for analysis; (3) considered alternatives reflecting the range of statutory discretion; and (4) assessed the costs and benefits of the regulation. We found that all five rules we reviewed were consistent with OMB Circular A-4, which states that a rule should clearly identify the specific problem that the proposed regulatory action is intended to address. For example, SEC stated in its rule on pay ratio disclosure that current disclosure rules required registrants to disclose compensation information for only certain employees in their SEC filings; as a result, shareholders could not calculate a company-specific metric that they could use to evaluate the chief executive officer’s compensation within the context of their own company. As another example, FDIC noted in its rule on assessments the need to reach the minimum reserve ratio to strengthen the fund, reduce the risk of the banking industry facing unexpected, large increases in assessment rates in a period of stress, and maintain stable and predictable bank assessments. Also, CFTC stated in its rule on margin requirements for uncleared swaps that the rule was intended to implement a specific provision of the Commodity Exchange Act, as amended by Title VII of the Dodd-Frank Act. As CFTC noted in the rule, Title VII was intended to establish a comprehensive regulatory framework to reduce risk, increase transparency, and promote market integrity in the derivatives market. In addition, all five rules identified the baseline for analysis. 
OMB Circular A-4 states that the baseline should be the best assessment of the way the world would look absent the proposed action. For example, CFTC stated in its rule on margin requirements for uncleared swaps that the baseline against which the costs and benefits associated with this rule will be compared is the uncleared swaps market as it existed at the time the rule was finalized. SEC stated in its rule on pay ratio disclosure that the baseline is the current state of the market without a requirement for registrants to disclose pay ratio information. Similarly, CFPB stated in its rule on integrated mortgage disclosures that the baseline considers economic attributes of the mortgage market and the existing regulatory structure. The regulators also provided alternative approaches to the proposed rules implementing the relevant provision of the Dodd-Frank Act and solicited comments. OMB Circular A-4 states that good regulatory analysis is designed to inform the public and other parts of the government of the effects of alternative actions. We found that all five rules that we assessed provided alternative approaches to the proposed rules. The agencies also asked for and received public comments, including possible alternatives to proposed requirements. For instance, in the joint rule on margin and capital requirements for covered swap entities, the prudential regulators identified and considered a number of alternatives raised by commenters and provided the rationale for their decisions on whether to adopt a suggested approach. SEC stated in its rule on pay ratio disclosure that after considering all of the comments received on the proposed rule—and in particular, after considering specific suggestions from commenters on alternatives that could help to mitigate compliance costs and practical difficulties associated with the proposed rule—it was adopting a number of revisions to the final rule. OMB Circular A-4 states that quantifying costs and benefits allows regulators to evaluate different regulatory options using a common measure. Additionally, OMB Circular A-4 recognizes that some important costs and benefits may be inherently too difficult to quantify given current data and methods and recommends a careful evaluation of qualitative costs and benefits. In prior work, we have noted some of the challenges to quantifying costs and benefits. For example, in our 2014 report, we found that federal financial regulators were constrained by several factors, such as limited or unavailable data and difficulties in modeling and quantifying costs and benefits. However, we also found that by drawing on several sources, such as public comments on proposed rulemakings or data from other regulators, regulators were able to consider the costs and benefits of their rulemakings more effectively. As shown in the following examples, the regulators generally quantified some costs in all five of their respective rules, and in four instances they discussed some costs qualitatively. The preamble to CFTC’s final rule on margin requirements for uncleared swaps stated that CFTC used industry data to construct its own estimates of costs, but noted that there were a number of challenges in conducting quantitative analysis of the costs associated with the rule. As a result, CFTC stated that the discussion of the costs and benefits is largely qualitative in nature since administrative costs are difficult to quantify. 
For example, the preamble stated that the higher degree of harmonization between various regulators and jurisdictions in the final rule should result in lower administrative costs. Additionally, CFTC stated that the longer lead times the final rule provides for industry to build compliance systems will result in fewer operational errors and lower costs. The joint rule on margin and capital requirements for uncleared swaps estimated that the annual cost associated with the initial margin that U.S. swap entities and their counterparties will be required to post, once the requirements are fully implemented, would range from $672 million to roughly $46 billion, depending on the specific initial margin estimate and incremental funding costs that are used to compute the estimate. The agencies noted the difficulty of estimating the costs associated with providing initial margin with any precision due to differences in marginal funding costs across different types of entities and over time, among other things. SEC’s rule on pay ratio disclosure provided both quantitative and qualitative costs. SEC discussed direct compliance costs paid by registrants that are subject to the pay ratio disclosure. For example, SEC estimated that the average initial cost of compliance for a registrant with foreign operations is expected to be approximately $700,000 and for a registrant with U.S.-based operations only is expected to be approximately $150,000. In its pay ratio disclosure rule, SEC allows a company, in identifying the median employee, to use a cost-of-living adjustment for employees living in a jurisdiction other than the jurisdiction in which the chief executive officer resides. Thus, where a company has employees in countries whose cost-of-living differs from the cost-of-living in the chief executive officer’s country of residence, the cost-of-living adjustment may have an effect on the determination of the median employee and on the calculation of the pay ratio. SEC noted that it was limited in its ability to quantify the impact of the adjustment on the pay ratio calculation by lack of data on the countries where employees are located, the actual distribution of employee pay, and the specific cost-of-living measure used. SEC stated that it qualitatively analyzed the main factors that may contribute to more significant effects of the cost-of-living adjustment on the determination of the median employee compensation and on the calculation of the pay ratio. It found that the effect of the cost-of-living adjustment could be potentially larger for registrants with a larger percentage of employees outside the chief executive officer’s country of residence and for registrants with employees in countries with a cost-of-living that differs significantly from the chief executive officer’s country of residence. In addition, two of the five rules quantified some benefits, and all of the rules included some qualitative information on benefits, such as their nature, timing, likelihood, location, and distribution. In one example, CFPB quantified some benefits in connection with its integrated mortgage disclosure rule: it estimated that the rule could result in savings of $130 million per year for employee time saved on mortgage transactions and stated that most of these savings are likely to be passed on to consumers. FDIC’s assessment rule also quantified some benefits. 
FDIC stated that it will collect approximately $10 billion in surcharges and award approximately $1 billion in credits to small banks, although actual amounts will vary from these estimates. The three remaining rules did not quantify benefits and cited data and other limitations as reasons for not doing so. The five rules provided a discussion of some qualitative benefits. FDIC’s rule on assessments stated that imposing surcharges on assessments so that the deposit insurance fund reaches its target reserve ratio promptly strengthens the fund more quickly so that it can better withstand an unanticipated spike in losses from bank failures or the failure of one or more large banks. FDIC stated that reaching the target ratio early also reduces the risk of the banking industry facing unexpected, large increases in assessment rates in a period of stress. In another example, SEC stated in its rule on pay ratio disclosures that providing additional executive compensation information to shareholders provides new data points that shareholders may find relevant and useful when exercising certain voting rights. However, SEC also stated that it could not quantify in monetary terms the benefit to shareholders. SEC stated that pay ratio disclosure is not tied to an immediate economic transaction, such as a sale of a security, and that the pay ratio disclosure is but one data point among many considerations that shareholders might find relevant when exercising their say-on-pay votes. The agencies reported coordinating as required or voluntarily on 19 of the 30 regulations that became effective between July 23, 2015, and July 22, 2016. The Dodd-Frank Act stipulated coordination for 17 regulations, and agencies reported coordinating on these rules. For example, in its rule on business conduct standards for security-based swap dealers and major security-based swap participants, SEC reported consulting and coordinating with CFTC and the prudential regulators in accordance with the consultation mandate in the Dodd-Frank Act. For 2 additional rules, the Dodd-Frank Act did not stipulate coordination, but the rules were jointly issued by two or more regulators and thus inherently required coordination. For most of the other 11 rules, agency officials told us that they did not voluntarily coordinate because the rules were technical amendments or focused on areas solely within the agency’s purview. For example, CFPB explained that it did not coordinate on several of its rules because they were threshold adjustments that were mechanical in nature and often tied to the Consumer Price Index. Similarly, FDIC did not coordinate on its rule on assessments because FDIC is solely responsible for deposit insurance assessments, so rules in this area are not promulgated in coordination with other entities. Appendix III provides a complete list of rulemakings, along with an explanation of whether coordination was required and the nature of any coordination. Of the 19 rules that we identified as having interagency coordination, we reviewed 3 rules in depth (see table 4). Specifically, we examined when, how, and to what extent federal financial regulators coordinated on CFTC’s and the prudential regulators’ respective rules on margin requirements for uncleared swaps, as well as CFPB’s rule on integrated mortgage disclosures. For the margin requirements for uncleared swaps rules, we also examined the efforts taken by the prudential regulators and CFTC to harmonize their respective versions of the rule. 
According to regulators, most coordination for the rulemakings occurred throughout the rulemaking process. Agencies described coordinating through regularly scheduled meetings and conference calls, as well as through e-mail, telephone conversations, and sharing copies of drafts for comment. In developing their respective rules on margin requirements for uncleared swaps, staff from the prudential regulators and CFTC engaged in coordination domestically, while staff from the banking regulators and CFTC engaged in coordination internationally. Staff from the banking regulators and CFTC said that throughout the rulemaking process, regulators scheduled recurring interagency meetings to coordinate their rules and engaged in additional coordination as needed. Staff from the banking regulators and CFTC also said that before proposing their respective rules, they began holding regular meetings to discuss their ideas. According to staff from the regulators, these meetings, which were typically held at least biweekly, continued throughout the rulemaking process, although regulatory staff from one agency said that the regulators would meet more frequently if there were issues that required more discussion. Federal Reserve staff created agendas for these recurring meetings. These agendas included discussion items such as revisions for specific sections and particular comments for the agencies to consider. Staff from CFTC and one banking regulator said that they continue to have biweekly conference calls to discuss the implementation of the rules and issues that may arise regarding them. According to staff from CFTC and the banking regulators, their efforts to coordinate throughout the rulemaking process led to rules that are largely harmonized, particularly in key areas such as the initial and variation margin requirements, the timing for posting margin, and the parties that are required to post the margin. CFTC staff said that one of the goals of coordinating with the other regulators was to harmonize the rules to the extent possible and avoid the potential for regulatory arbitrage. Staff from CFTC and the banking regulators noted that the coordination process allowed them to resolve several areas where they had differences. For example, CFTC staff said that initially the prudential regulators and CFTC were considering setting different thresholds for the size of an entity that would be subject to the rules. However, they said that CFTC conducted an analysis that helped the regulators achieve a consensus on the appropriate threshold. According to regulators, another area where the prudential regulators and CFTC initially differed was in their proposed margin requirements for the treatment of uncleared cross-border swap transactions—transactions involving swap entities operating in a foreign jurisdiction or organized as U.S. branches or agencies of foreign banks. In their respective final rules, CFTC and the prudential regulators came to a similar position regarding whether to allow entities to comply with comparable margin requirements in a foreign jurisdiction. The prudential regulators’ rule permits certain swap entities to comply with a foreign regulatory framework for non-cleared swaps if the regulators jointly determine that the foreign regulatory framework is comparable to the regulators’ rule. Similarly, CFTC allows entities, under certain circumstances, to rely on compliance with a foreign jurisdiction’s margin requirements if CFTC determines they are comparable to CFTC’s. 
OCC staff noted that through the coordination process, CFTC came to this determination, in part because much of the international swap dealer community is subject to the prudential regulators’ rule rather than CFTC’s rule. While regulators noted that coordination helped them achieve comparability between the final rules in many key areas, they identified one area where differences remain—that of margin requirements for uncleared swaps with affiliated entities (interaffiliate swaps). Both final rules require swap entities covered by the rules to collect and post variation margin for uncleared swaps with affiliates on the same basis as for nonaffiliated counterparties. However, the final rules are different with respect to the collection of initial margin for interaffiliate transactions. While the prudential regulators’ rule does require a swap entity to collect initial margin from an affiliate, subject to a threshold amount, CFTC generally does not impose a similar requirement to collect initial margin from an affiliate (although it stipulates that such swaps must be subject to a centralized risk-management program that is designed to monitor and to manage the risks associated with such transactions). CFTC’s Chairman said in his statement of record that interaffiliate transactions are transactions within the consolidated entity, and not with a third party. As such, they do not increase the overall risk exposure of the consolidated entity. In its final rule, CFTC noted that, among other contributing factors, it considered the difference in mission and overall regulatory framework between the prudential regulators and CFTC in determining its initial margin requirement for interaffiliate transactions. Staff from CFTC and two banking regulators noted that it was unclear whether this difference in the final rules would affect interaffiliate transactions. Staff from two regulators said that the regulators will need to monitor potential effects as the margin rules are implemented. However, in finalizing CFTC’s rule, one dissenting Commissioner said in her statement of record that CFTC’s treatment of interaffiliate initial margin places the swap dealers CFTC regulates and their customers at unnecessary risk in times of financial stress. The Dodd-Frank Act directs CFTC, SEC, and the prudential regulators to consult and coordinate, as appropriate, with foreign regulatory authorities on the establishment of consistent international standards for regulating swaps. Staff from CFTC, SEC, and several of the prudential regulators participated on the international working group that helped develop the international framework to regulate uncleared swaps, which was issued in September 2013 by the Basel Committee on Banking Supervision and the Board of the International Organization of Securities Commissions. According to CFTC staff, the working group coordinated on issues such as the logistics for the collection of margin and how to treat transactions in emerging markets. Staff from two banking regulators and CFTC said that after the international standards were established, the regulators coordinated through their standing, biweekly meetings to reconcile their initial proposed rules with the international framework. With respect to the integrated mortgage disclosure rule, CFPB followed its formal consultation process for working with agencies to develop Dodd-Frank Act rules. 
As previously discussed, section 1022 of the Dodd-Frank Act requires CFPB to consult with the appropriate prudential regulators or other federal agencies as part of the rulemaking process. In March 2012, CFPB developed internal guidelines that outline the minimum steps that it expects staff to follow during the consultation process. The guidelines state that while the process may vary depending on factors such as the nature, complexity, and deadlines of rulemakings, the process typically includes an opportunity for relevant agencies to coordinate with CFPB before it proposes its rule, after CFPB receives comments on its proposal, and before the final rule is issued. This coordination includes in-person briefings and solicitations for input on CFPB’s approach to the particular rule. In developing the integrated mortgage disclosure rule, CFPB staff said that they notified the prudential regulators of their desire to consult on the rule, offered four briefings during the rulemaking process, and held other consultations as needed in accordance with CFPB’s consultation process guidelines. While developing the proposed and final rules, CFPB staff provided outlines to the prudential regulators for their consultation and feedback. According to CFPB staff, when agencies provided comments in the proposal stage, CFPB staff sometimes updated the proposed rule to include a request for comment on their suggestions. For example, staff said that when FDIC suggested that CFPB improve the disclosures on annual percentage rates, CFPB included a request for comment in the proposed rule on ways to improve the disclosure. Staff from CFPB and the prudential regulators said that the prudential regulators participated in CFPB’s consultation process. For example, Federal Reserve staff said they participated in several interagency consultation meetings and calls that occurred throughout the proposed and final rulemakings. They said that CFPB staff consulted with them prior to proposing the rule and during the comment process on the rule’s consistency with prudential, market, and systemic objectives administered by the Federal Reserve. In addition, Federal Reserve staff provided informal feedback to enhance the clarity of the rule and facilitate compliance. FDIC staff described CFPB’s rulemaking process as flexible, saying that it allowed them to participate in and understand the process, which put them in a better position to explain the rule to FDIC-supervised banks. Financial regulators continue to implement reforms pursuant to the Dodd-Frank Act, but a number of factors make the full impact of the act uncertain. In particular, while many rules have been finalized, several rules have not been finalized or have not yet been started. As of December 2016, regulators had issued final rules for over 75 percent of the 236 provisions of the act that we are monitoring. Even when the act’s reforms are fully implemented, it can take time for the financial services industry to comply with the new regulations, which means additional time is needed to measure the impact of the rules. Moreover, isolating the Dodd-Frank Act’s effect on the financial marketplace is difficult. Many other factors that can affect the financial marketplace, such as monetary policy, could have an even greater impact than the act. Recognizing these limitations and difficulties, we developed an approach to analyze current data and trends that might indicate some of the Dodd-Frank Act’s initial impacts. 
First, using data through the second quarter of 2016, we updated the indicators developed in our December 2012 and 2015 reports to monitor changes in certain characteristics of bank SIFIs, which are subject to enhanced prudential standards and oversight under the act. Second, using data through the second quarter of 2016, we updated indicators of designated nonbanks that we developed in our December 2015 report that parallel our bank SIFI indicators. Third, using data through the second quarter of 2016, we updated indicators developed in our December 2013 report to monitor the extent to which certain of the act’s swap reforms are consistent with the act’s goals of reducing risk. These analyses have limitations, which we discuss in the following sections. According to the legislative history, the Dodd-Frank Act contains provisions intended to reduce the risk of failure of a large, complex financial institution and the damage that such a failure could do to the economy. Such provisions include (1) authorizing FSOC to designate a nonbank financial company for Federal Reserve supervision if FSOC determines its material distress or financial activities could pose a threat to U.S. financial stability and (2) directing the Federal Reserve to impose enhanced prudential standards on bank holding companies with $50 billion or more in total consolidated assets (bank SIFIs) and nonbank financial companies designated by FSOC (designated nonbanks). The Federal Reserve has finalized rules imposing enhanced prudential standards on bank SIFIs, including capital, leverage, and liquidity requirements, and rules that require these firms to conduct resolution planning and stress testing, and has proposed other rules. (See app. IV for a summary of provisions related to SIFIs and their rulemaking status.) As we first reported in December 2012, the Dodd-Frank Act and its implementing rules may result in adjustments to the size, interconnectedness, complexity, leverage, or liquidity of bank SIFIs over time. We updated the indicators we developed in our December 2012 and December 2015 reports to monitor changes in some of the characteristics of bank SIFIs. The size, interconnectedness, and complexity indicators reflect the potential for financial distress or activities of a single bank SIFI to affect the financial system and economy (spillover effects). The leverage and liquidity indicators reflect a SIFI’s resilience to shocks or its vulnerability to financial distress. It is important to note, however, that these indicators have limitations. For example, the indicators do not identify causal links between changes in SIFI characteristics and the act. Rather, the indicators track changes in the size, interconnectedness, complexity, leverage, and liquidity of SIFIs since the passage of the act to examine if the changes have been consistent with the goals of the act. However, other factors—including international banking standards agreed upon by the Basel Committee on Banking Supervision (Basel Committee) and monetary policy actions—also affect bank holding companies and, thus, the indicators. These factors may have a greater effect on SIFIs than the Dodd-Frank Act. Furthermore, because several rules implementing provisions related to SIFIs have not been finalized or have not yet been started, our indicators include the effects of these rules only insofar as SIFIs have modified their behavior in response to issued rules or in anticipation of expected rules (see app. IV). 
In this regard, our indicators provide baselines against which to compare future trends. See appendix V for additional limitations of our indicators. Table 5 summarizes the changes in our bank SIFI indicators from the second or third quarter of 2010 through the second quarter of 2016 (see app. V for more information). For example:
Changes in some size and complexity indicators are consistent with increased potential spillover effects for large bank SIFIs (which we define as bank holding companies with $500 billion or more in assets), while changes in interconnectedness and other size and complexity indicators are consistent with decreased or no change in potential spillover effects for large bank SIFIs.
Changes in size, interconnectedness, and complexity indicators are consistent with decreased or no change in potential spillover effects for other bank SIFIs (which we define as bank holding companies with at least $50 billion but less than $500 billion in assets).
Changes in all of our leverage and liquidity indicators are consistent with increased resilience for both large bank SIFIs and for other bank SIFIs.
We updated indicators associated with size, interconnectedness, leverage, and liquidity for institutions whose material financial distress or activities FSOC determines could pose a threat to U.S. financial stability and therefore should be subject to Federal Reserve supervision and enhanced prudential standards. During 2013 and 2014, FSOC designated four nonbank financial companies for Federal Reserve supervision pursuant to a determination that their material financial distress could pose a threat to U.S. financial stability. These included the American International Group, Inc. (AIG) in July 2013, General Electric Capital Corporation, Inc. (GECC) in July 2013, Prudential Financial, Inc. (Prudential) in September 2013, and MetLife, Inc. (MetLife) in December 2014. FSOC determined that each of these institutions was predominantly engaged in financial activities (that is, at least 85 percent of their revenues were derived from, or more than 85 percent of their assets were related to, activities that were financial in nature). According to FSOC, at the time of the designations, AIG was the third-largest insurance company in the United States and one of the largest insurers in the world; GECC was one of the largest holding companies in the United States and a significant source of credit to commercial and consumer customers; Prudential was one of the largest financial services companies in the United States providing a wide array of financial services, including group and individual life insurance, annuities, retirement-related products and services, and asset management; and MetLife was the largest publicly traded U.S. insurance organization and one of the largest financial services companies in the United States. However, in March 2016, the U.S. District Court for the District of Columbia invalidated FSOC’s designation of MetLife. Then, in June 2016, after the reorganization of GECC, FSOC rescinded the company’s designation, noting that divestitures and organizational changes significantly reduced the potential for any material financial distress to threaten financial stability. As we first reported in December 2012, the Dodd-Frank Act and its implementing rules may result in adjustments to size, interconnectedness, leverage, and liquidity characteristics of designated nonbanks over time. 
Size and interconnectedness reflect the potential for the financial distress of a single designated nonbank to affect the financial system and economy, while leverage and liquidity reflect a designated nonbank’s resilience to shocks or its vulnerability to financial distress. In our December 2015 report, we developed the following indicators based on the characteristics of companies that FSOC reviews as part of its process for designating nonbanks:
Size. Our indicator of size is total consolidated assets.
Interconnectedness. Our indicators of interconnectedness are gross notional amounts of credit default swaps outstanding for which the designated nonbank is the reference entity and total debt outstanding (excluding deposit liabilities).
Leverage. Our indicator of leverage is total equity as a percentage of total assets, except separate accounts.
Liquidity. Our indicator of liquidity is short-term debt (excluding deposit liabilities) as a percentage of total assets, except separate accounts.
We calculated each indicator, for each quarter, for each of the currently designated nonbanks from the second quarter of 2012 to the second quarter of 2016. We also calculated the medians of each indicator for publicly traded banks and insurance companies with total consolidated assets of $50 billion or more to provide a frame of reference. Like our indicators for bank SIFIs, our indicators for designated nonbanks have some limitations. For example, the indicators do not identify causal links between changes in designated nonbanks’ characteristics and the Dodd-Frank Act. Rather, the indicators track changes in the size, interconnectedness, leverage, and liquidity of designated nonbanks since the passage of the act to examine if the changes have been consistent with the goals of the act. However, other factors, such as capital standards for large, internationally active insurance companies, may also affect designated nonbanks and, thus, the indicators. Furthermore, to the extent that a number of rules implementing provisions related to designated nonbanks have not yet been finalized, our indicators include the effects of these rules only insofar as designated nonbanks have changed their behavior in anticipation of expected rules. In this regard, our indicators provide baselines against which to compare future trends. Figure 1 shows the indicators from the second quarter of 2012 through the second quarter of 2016. In November 2011 and October 2012, the Federal Reserve issued specific rules requiring designated nonbank financial companies to conduct resolution planning and stress testing, respectively, and in June 2016 proposed rules that would establish corporate governance, risk-management, and liquidity risk-management standards for these firms. Thus, the current values of our indicators are baselines against which to compare future trends as more rules for designated nonbanks are implemented. Our indicators allow for the following observations:
Based on their total assets, both designated nonbanks are relatively large. Both are larger than the median publicly traded bank or insurance company with assets of $50 billion or more.
Gross notional amounts of credit default swaps outstanding (for which designated nonbanks are the reference entities) have decreased since the second quarter of 2012, suggesting that the designated nonbanks are relatively less interconnected and thus have smaller potential spillover effects than in prior years by this measure, all else being equal. 
Total debt outstanding (excluding deposits) for the two designated nonbanks has decreased since the second quarter of 2012. These trends suggest that the designated nonbanks have become less interconnected and thus have smaller potential spillover effects than in prior years based on this indicator, all else being equal.
Total equity as a percentage of assets, except separate accounts, ranged from about 21 percent for AIG to about 11 percent for Prudential in the second quarter of 2016. This range in leverage suggests that the designated nonbanks have varying resilience to shocks and financial distress by this measure, all else being equal.
Short-term debt as a percentage of assets, except separate accounts, decreased from the second quarter of 2012 to the second quarter of 2016. Decreases in short-term debt as a percentage of assets, except separate accounts, ranged from about 71 percent for AIG to about 42 percent for Prudential. These trends suggest that the two designated nonbanks’ resilience to shocks and financial distress has improved by this measure, all else being equal.
As we reported in December 2013, once fully implemented, some provisions in Title VII of the Dodd-Frank Act may help reduce systemic risks to financial markets in part by increasing margins posted for over-the-counter swaps. In November 2015 and January 2016, respectively, the prudential regulators and CFTC published final rules on margin requirements for uncleared swaps, for swap dealers and major swap participants, pursuant to the Dodd-Frank Act. As discussed previously, the final rules establish minimum initial and variation margin requirements. Using data through the second quarter of 2016, we updated the set of indicators that we developed in our December 2013 report and updated in our December 2014 and December 2015 reports to measure changes in the use of margin collateral for over-the-counter derivatives. This set of indicators may shed light on changes in the use of margin collateral associated with Dodd-Frank Act swap reforms as they are implemented, but the indicators have several key limitations, as described later in this section. Our margin indicators measure the fair value of collateral pledged by counterparties to secure over-the-counter derivatives contracts as a percentage of bank holding companies’ net current credit exposure to those counterparties. To protect itself from the loss it would incur if a counterparty defaulted on a derivatives contract, a swap entity could require counterparties to post margin collateral in an amount equal to or greater than its exposure to the contracts. An increase in collateral as a percentage of credit exposure suggests that holding companies have required their counterparties to post a greater amount of collateral against their credit exposure due to derivatives contracts overall, which would be consistent with the purposes of the act’s swap reforms. Figure 2 shows trends in our margin indicators from the second quarter of 2009 through the second quarter of 2016. The rate of collateralization of net current credit exposure for all counterparties has increased from about 71 percent in the third quarter of 2010 to about 91 percent in the second quarter of 2016, suggesting that holding companies generally required their counterparties to post a greater amount of collateral against their derivatives contracts. 
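To make the construction of this indicator concrete, the following is a minimal sketch of the calculation in Python. The input figures and field names are hypothetical illustrations, not data from any holding company’s filings; an actual calculation would draw the fair value of collateral and net current credit exposure from regulatory filings.

```python
# Minimal sketch of the collateralization-rate indicator: the fair value of
# collateral pledged by counterparties, expressed as a percentage of a bank
# holding company's net current credit exposure from over-the-counter
# derivatives. All figures are hypothetical, in millions of dollars.

holding_companies = [
    {"name": "BHC A", "collateral": 9_100, "exposure": 10_000},
    {"name": "BHC B", "collateral": 4_300, "exposure": 5_000},
    {"name": "BHC C", "collateral": 1_400, "exposure": 1_500},
]

def collateralization_rate(collateral: float, exposure: float) -> float:
    """Collateral pledged as a percentage of net current credit exposure."""
    return 100.0 * collateral / exposure

for bhc in holding_companies:
    rate = collateralization_rate(bhc["collateral"], bhc["exposure"])
    print(f"{bhc['name']}: {rate:.1f} percent collateralized")

# The market-aggregate indicator divides total collateral by total exposure,
# so larger holding companies weigh more heavily than in a simple average.
total_collateral = sum(bhc["collateral"] for bhc in holding_companies)
total_exposure = sum(bhc["exposure"] for bhc in holding_companies)
print(f"Aggregate: {collateralization_rate(total_collateral, total_exposure):.1f} percent")
```

Because the aggregate is weighted by exposure, it can sit at a high level even while some counterparties or counterparty types remain undercollateralized, which relates to the limitations discussed below.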
However, as discussed later, aggregate measures of collateralization rates can mask differences in collateralization rates for different counterparty types. Collateral posted by type of counterparty—banks and securities firms, monoline financial guarantors, hedge funds, sovereign governments, and corporate and all other counterparties—increased (as a percentage of net credit exposure) between the second quarter of 2009 and the second quarter of 2016. The rate of collateralization nonetheless consistently differed by type of counterparty, with hedge funds posting more collateral as a percentage of credit exposure than other types of counterparties. As we reported in December 2013, according to OCC, the rates differ partly because swap dealers may require certain counterparties to post both initial and variation margin and other counterparties to post only variation margin. Under the prudential regulators’ 2015 final rule and CFTC’s 2016 final rule for uncleared swaps, minimum floors are set for both initial and variation margins, and as a result the final rules may further contribute to higher rates of collateralization. Our margin indicators, while suggestive, are subject to important limitations. First, they do not identify causal links between changes in collateralization and the Dodd-Frank Act, including its regulations. Rather, the set of indicators tracks changes in collateralization since the act’s passage to examine if the changes were consistent with the act’s goals for increasing collateralization. Second, both net current credit exposure and the fair value of collateral are as of a point in time because the fair values of derivatives contracts and collateral can fluctuate over time. Third, an average collateralization of 100 percent does not ensure that all current counterparty exposures have been eliminated because one counterparty’s credit exposure may be overcollateralized and another’s undercollateralized. Fourth, our indicators measure the fair value of the collateral held against net current credit exposures but do not necessarily measure the risk of uncollateralized losses. The fair value of net current credit exposure does not fully account for the riskiness of any single swap contract. If a party has entered into riskier swaps, it is possible for the rate of collateralization to increase while the risk of uncollateralized losses also increases. Fifth, our indicators are market aggregates that may not reflect the collateralization rate for any single company. Finally, these indicators do not reflect collateralization rates for companies, such as stand-alone broker-dealers, which have credit exposure to counterparties in over-the-counter derivatives contracts but are not affiliated with a bank holding company. We provided a draft of this report to CFPB, the Federal Reserve, FDIC, OCC, NCUA, SEC, and CFTC for review and comment. The regulators provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and members and federal financial regulators. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. 
Under the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), various federal agencies are directed or have the authority to issue hundreds of regulations to implement the act’s provisions. This report discusses the regulatory analyses conducted by federal financial regulators (financial regulators) in their Dodd-Frank Act rulemakings, including their assessments of which rules they considered to be major rules; coordination between and among federal regulators on these rulemakings; and indicators of the impact of selected Dodd-Frank Act provisions and their implementing regulations on financial market stability. The financial regulators are the Bureau of Consumer Financial Protection, also known as the Consumer Financial Protection Bureau (CFPB), the Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), Office of the Comptroller of the Currency (OCC), National Credit Union Administration (NCUA), Commodity Futures Trading Commission (CFTC), and the Securities and Exchange Commission (SEC). To examine the regulatory analyses conducted by the regulators, we focused our analysis on final rules issued pursuant to the Dodd-Frank Act that were effective between July 23, 2015, and July 22, 2016, a total of 30 rules (see app. II). We compiled these rules from a website maintained by the Federal Reserve Bank of St. Louis that tracks Dodd-Frank Act regulations, which we corroborated with officials from the agencies under review. In examining the regulatory analyses of the agencies in our review, we reviewed federal statutes, regulations, GAO studies, and other material to identify the regulatory analyses the agencies had to conduct as part of their Dodd-Frank rulemakings, focusing on those analyses required under the Paperwork Reduction Act (PRA) and the Regulatory Flexibility Act (RFA). We reviewed Federal Register notices of final rules for the agencies’ determinations of the applicability of PRA and RFA. In some instances, the regulators determined that the analysis was not required or not applicable and indicated this in their final rulemaking. Two analysts recorded in a spreadsheet the agencies’ determinations of whether PRA and RFA analyses were required. Using GAO’s Federal Rules database, we found that 9 of the 30 rules were identified as major rules, per the Office of Management and Budget (OMB) guidance, under the Congressional Review Act because they resulted in or are likely to result in an annual impact on the economy of $100 million or more; a major increase in costs or prices; or significant adverse effects on competition, employment, investment, productivity, innovation, or on the ability of U.S.-based enterprises to compete with foreign-based enterprises in domestic and export markets. For agencies subject to Executive Order (E.O.) 12,866, such major rules would be considered significant regulatory actions and subject to formal cost-benefit analysis. We also developed a data collection instrument to compare and assess the regulatory analysis conducted for the major rules against the principles outlined in OMB Circular A-4, which provides guidance to federal agencies on the development of regulatory analysis. To conduct our analyses, we reviewed Federal Register releases of the final rules and the cost-benefit analyses they included in the final rules, and we interviewed agency staff from CFPB, CFTC, SEC, the Federal Reserve, FDIC, NCUA, and OCC. 
We selected five rules for in-depth review, comparing the cost-benefit or similar analyses to specific principles in OMB Circular A-4. To narrow the list from 9 major rules to the 5 rules subject to in-depth review, we selected rules that were from a variety of agencies, including one joint rule, and that covered varied topics. In conducting each individual analysis, we reviewed Federal Register notices prepared by agencies during the course of the rulemaking. To examine interagency coordination among the regulators, we reviewed the Dodd-Frank Act, Federal Register releases, and GAO reports to identify the interagency coordination and consultation requirements for the 30 rules in our scope. As part of this review, analysts looked for key words relating to consultation and coordination in the Federal Register releases and recorded this information in a spreadsheet. An attorney then independently evaluated each determination documented in the spreadsheet to reach concurrence on the assessment. (See app. III for a list of rules and determinations of whether coordination was required.) We also interviewed officials or staff from CFPB, CFTC, SEC, FDIC, NCUA, the Federal Reserve, and OCC to identify changes in the nature of interagency coordination and consultation. We also asked the financial regulators’ staff to identify any instances of interagency coordination not specified in the Federal Register releases, and if they did not coordinate, to discuss the reasons why. We did not examine the effects of noncoordination on rulemakings, which was beyond the scope of our review. We also selected three rules for in-depth review of interagency coordination: CFTC’s and the prudential regulators’ respective rules on margin requirements for uncleared swaps, and CFPB’s rule on integrated mortgage disclosures. We selected these rules based on the opportunity for extensive interagency coordination. We selected the rules on margin requirements for uncleared swaps because the prudential regulators and CFTC issued rules that required coordination among the prudential regulators as well as between the prudential regulators and CFTC. We selected the integrated mortgage disclosure rule because of CFPB’s requirement to consult with the appropriate prudential regulators and other federal agencies on consistency with prudential, market, or systemic objectives administered by such agencies before proposing a rule. We interviewed the responsible agencies to discuss the outcomes of coordination and specific areas where coordination or harmonization of rules was a priority and obtained documentation of specific examples of interagency coordination and consultation. To analyze the impact of the Dodd-Frank Act on financial market stability, we updated several indicators developed in our prior reports with data through the second quarter of 2016. The indicators display trends in both banks that are systemically important financial institutions (bank SIFIs) and nonbank financial institutions designated by the Financial Stability Oversight Council (FSOC) for supervision by the Federal Reserve. We updated indicators monitoring changes in size, interconnectedness, complexity, leverage, and liquidity of bank SIFIs. Since we began developing and tracking indicators for bank SIFIs, FSOC has designated three nonbank institutions for enhanced supervision by the Federal Reserve. 
As such, we updated indicators developed in our December 2015 report that are associated with the size, interconnectedness, leverage, and liquidity of these institutions. Finally, we updated our indicators that monitor the extent to which certain swap reforms are consistent with the act's goal of reducing risk. For those parts of our methodology that involved the analysis of computer-processed data from Bloomberg, the Federal Reserve Bank of Chicago, the Federal Reserve, the National Information Center, and the Bureau of Economic Analysis, we assessed the reliability of these data by reviewing relevant documentation and electronically testing the data for missing values, outliers, and invalid values. We determined the data were sufficiently reliable for our purposes of monitoring changes in bank SIFIs and designated nonbanks and assessing the amount of margin collateral that over-the-counter derivatives counterparties used. We conducted this performance audit from June 2016 to December 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following table lists the 30 Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) rules that we identified as having effective dates during the scope of our review (from July 23, 2015, through July 22, 2016). Nine of these rules were major. The following table lists the 30 Dodd-Frank Act rules that we identified as having effective dates during the scope of our review (from July 23, 2015, through July 22, 2016), whether we found evidence of coordination during the rulemaking process, whether the Dodd-Frank Act required interagency or international coordination, and the nature of coordination, if any. The Dodd-Frank Act contains several provisions—including designation by the Financial Stability Oversight Council (FSOC) for supervision by the Board of Governors of the Federal Reserve System (Federal Reserve) and enhanced prudential standards—that apply to nonbank financial companies if FSOC determines that material financial distress at the company or the nature, scope, size, scale, concentration, interconnectedness, or mix of activities at the company could pose a threat to U.S. financial stability. Enhanced prudential standards also apply to bank holding companies with $50 billion or more in total consolidated assets. For this report, we refer to those nonbank financial companies as designated nonbanks and to those bank holding companies as systemically important banks (bank SIFIs), respectively. Table 8 summarizes some of the Dodd-Frank Act provisions and the rulemakings, including their status, to implement those provisions as of July 22, 2016. We updated indicators to monitor changes in the size, interconnectedness, complexity, leverage, and liquidity of bank holding companies with $50 billion or more in total consolidated assets (bank systemically important financial institutions, or bank SIFIs).
As we first reported in December 2012, some provisions of the Dodd-Frank Act and related rules may result in adjustments to these characteristics of bank SIFIs over time. The size, interconnectedness, and complexity indicators are intended to capture the potential for a bank SIFI's financial distress to affect the financial system and economy (spillover effects). The leverage and liquidity indicators are intended to capture a bank SIFI's resilience to shocks or its vulnerability to financial distress. We used the following data to construct our indicators:
- quarterly data on the price index for gross domestic product, which we obtained from the Bureau of Economic Analysis for the period from the second quarter of 2006 to the second quarter of 2016;
- annual data on numbers and locations of legal entities for holding companies, which we obtained from the Board of Governors of the Federal Reserve System (Federal Reserve) for the period from the second quarter of 2010 to the second quarter of 2016;
- quarterly data on second-tier bank holding companies, which we obtained from the Federal Reserve via the National Information Center for the period from the second quarter of 2009 to the second quarter of 2016;
- quarterly balance sheet and income statement data that bank holding companies report on Form FR Y-9C, which we obtained from the Federal Reserve Bank of Chicago for the period from the second quarter of 2009 to the second quarter of 2016; and
- quarterly data on gross notional amounts of credit default swaps outstanding by reference entity, which we obtained from Bloomberg for the period from the third quarter of 2010 to the second quarter of 2016.
Our analysis for our size, leverage, liquidity, and one of our interconnectedness indicators generally includes all top-tier U.S. bank holding companies, including any U.S.-based bank holding company subsidiaries of foreign banking organizations, with total consolidated assets of $1 billion or more that filed Form FR Y-9C for one or more quarters during the period from the first quarter of 2006 to the second quarter of 2016. We chose the threshold of $1 billion in assets to match the threshold for reporting Form FR Y-9C starting in the first quarter of 2015. For our complexity indicators and one interconnectedness indicator, we used data on top-tier U.S. bank holding companies with total consolidated assets of $50 billion or more. We defined bank SIFIs as bank holding companies with total assets of $50 billion or more. We defined large bank SIFIs as bank holding companies with total assets of $500 billion or more, and we defined other bank SIFIs as bank holding companies with total assets of at least $50 billion but less than $500 billion. We defined non-SIFI bank holding companies as bank holding companies with less than $50 billion in total assets. We calculate each of our indicators for each bank holding company in our sample for each quarter from the first quarter of 2006 to the second quarter of 2016, with the exceptions of our complexity indicators, which we calculate only for bank SIFIs as of the second quarter of each year from 2006 to 2016, and one of our interconnectedness indicators, which we calculate only for bank SIFIs for the period from the third quarter of 2010 to the second quarter of 2016.
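To make the asset-based grouping concrete, the following minimal sketch (in Python) applies the thresholds stated above to hypothetical asset figures; the function name and example values are illustrative only and are not drawn from our data.

```python
def classify_bhc(total_assets_billions: float) -> str:
    """Group a bank holding company using the asset thresholds described above."""
    if total_assets_billions >= 500:
        return "large bank SIFI"   # $500 billion or more
    if total_assets_billions >= 50:
        return "other bank SIFI"   # at least $50 billion but less than $500 billion
    return "non-SIFI bank holding company"  # less than $50 billion

# Hypothetical holding companies, not actual institutions:
for assets in (2100.0, 180.0, 12.5):
    print(f"${assets:,.1f} billion -> {classify_bhc(assets)}")
```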
We then calculate the median value of each indicator for each group of bank holding companies—large bank SIFIs, other bank SIFIs, all bank SIFIs, non-SIFI bank holding companies, and all bank holding companies, to the extent possible—and track the median values over time. Finally, we assess the changes in the median values of the indicators for large bank SIFIs and other bank SIFIs between the second or third quarter of 2010 and the second quarter of 2016, depending on the indicator. We say that an indicator has increased or decreased if it has changed by 5 percent or more, depending on the direction of the change, and we say that an indicator has remained about the same if it has changed by less than 5 percent. When stating the implications of an indicator for potential spillover effects, we assume all other things are held equal. Our indicators analysis has limitations. For example, the indicators do not identify causal links between changes in bank SIFI characteristics and the act. Rather, the indicators track changes in the size, interconnectedness, complexity, leverage, and liquidity of bank SIFIs since the Dodd-Frank Act was passed to examine whether the changes were consistent with the act's goals. However, other factors—including the economic downturn, international banking standards agreed upon by the Basel Committee on Banking Supervision (Basel Committee), and monetary policy actions—also affect bank holding companies and, thus, the indicators. These factors may have a greater effect on bank SIFIs than the Dodd-Frank Act. In addition, some rules implementing provisions related to bank SIFIs have not yet been finalized or fully implemented. Thus, changes in our indicators include the effects of these rules only insofar as bank SIFIs have changed their behavior in response to issued rules and in anticipation of expected rules. In this sense, our indicators provide baselines against which to compare future trends. Furthermore, each indicator has its own specific limitations, which we expand on in the following sections. An institution's size is associated with the potential for its financial distress to affect the financial system and the broader economy (spillover effects). We developed three indicators of size: (1) the number of bank holding companies with assets of $50 billion or more, (2) total assets of the consolidated bank holding company as reported on its balance sheet (adjusted for inflation and measured in billions of second quarter 2016 dollars), and (3) the market share of the bank holding company (equal to its total assets as a percentage of the total assets of all of the holding companies we analyzed). These indicators do not include an institution's off-balance-sheet activities and thus may understate the amount of financial services or intermediation an institution provides. Also, asset size alone is not an accurate determinant of systemic significance because an institution's systemic significance also depends on other factors, such as its complexity and interconnectedness. Furthermore, some bank SIFIs are U.S.-based bank holding company subsidiaries of foreign banking organizations, and the size of these bank SIFIs may not reflect the potential for the parent company's financial distress to affect the financial system and the economy.
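As an illustration of how the market-share indicator, the group medians, and the 5 percent rule described above fit together, the following sketch uses hypothetical asset figures (in billions of second quarter 2016 dollars); the numbers are illustrative only.

```python
from statistics import median

def change_category(baseline: float, ending: float) -> str:
    """Apply the 5 percent rule described above to a median indicator value."""
    pct_change = (ending - baseline) / baseline * 100.0
    if pct_change >= 5.0:
        return "increased"
    if pct_change <= -5.0:
        return "decreased"
    return "remained about the same"

# Hypothetical total assets for one group of bank SIFIs in the baseline
# and ending quarters (illustrative values only):
assets_2010q3 = [820.0, 260.0, 140.0, 95.0]
assets_2016q2 = [790.0, 220.0, 118.0, 84.0]

# Market share of the first company in the baseline quarter:
share = assets_2010q3[0] / sum(assets_2010q3) * 100.0
print(f"market share: {share:.1f} percent")          # about 62 percent

# Track the group median and classify the change:
m0, m1 = median(assets_2010q3), median(assets_2016q2)  # 200.0 and 169.0
print(f"median change: {change_category(m0, m1)}")     # about -16 percent: "decreased"
```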
We observed the following changes in our size indicators over the period from the third quarter of 2010 to the second quarter of 2016 (see table 9): The number of bank SIFIs decreased by one between the third quarter of 2010 and the second quarter of 2016. The number of large bank SIFIs decreased by one, and the number of other bank SIFIs was the same. Median assets of bank SIFIs decreased by about 16 percent. Median assets of large bank SIFIs increased by about 38 percent, while median assets of other bank SIFIs decreased by about 10 percent. Median market shares of bank SIFIs decreased by about 13 percent. Median market shares of large bank SIFIs increased by about 42 percent, while median market shares of other bank SIFIs decreased by about 7 percent. Interconnectedness reflects direct or indirect linkages between financial institutions that may transmit distress from one financial institution to another (spillover effects). We developed two indicators of interconnectedness based on those that the Financial Stability Oversight Council uses in the first stage of its process for designating nonbank SIFIs: (1) the gross notional amount of credit default swaps outstanding for which the institution is the reference entity (adjusted for inflation and measured in millions of second quarter 2016 dollars) and (2) total debt outstanding (adjusted for inflation and measured in second quarter 2016 dollars). We measure total debt outstanding as the difference between total liabilities and total deposits. We observed the following changes in our interconnectedness indicators over the period from the third quarter of 2010 to the second quarter of 2016 (see table 10): Median credit default swap gross notional amounts among bank SIFIs that are reference entities decreased by about 65 percent. Median credit default swap gross notional amounts for large bank SIFIs that are reference entities decreased by about 62 percent, while median credit default swap gross notional amounts for other bank SIFIs that are reference entities decreased by about 80 percent. We note that few bank SIFIs are reference entities—only six or seven large bank SIFIs and only three or four other bank SIFIs are reference entities in any one quarter. Median total debt outstanding for bank SIFIs decreased by about 19 percent. Median debt outstanding for large bank SIFIs decreased by about 23 percent, while median debt outstanding for other bank SIFIs remained about the same. Institutions that are more complex are likely to be more difficult to resolve and therefore to cause significantly greater disruption to the wider financial system and economic activity if they fail (spillover effects). Resolution via a bankruptcy or under the backstop orderly liquidation authority in Title II of the Dodd-Frank Act may be more difficult if a large number of legal entities or legal systems are involved. For example, a SIFI with a large number of legal entities—particularly foreign ones operating in different countries under different regulatory regimes—may be more difficult to resolve than a SIFI with fewer legal entities in fewer countries. We developed three indicators of this type of complexity: (1) the number of a bank SIFI's legal entities, (2) the number of a bank SIFI's foreign legal entities, and (3) the number of countries in which a bank SIFI's foreign legal entities are located.
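The three complexity counts just defined can be illustrated with a minimal sketch; the entity records below are hypothetical, not actual subsidiaries.

```python
# Hypothetical legal-entity records for one bank SIFI: (entity name, country code).
entities = [
    ("Example Bank NA", "US"),
    ("Example Securities LLC", "US"),
    ("Example Finance Ltd", "GB"),
    ("Example Leasing KK", "JP"),
    ("Example Holdings BV", "NL"),
]

foreign = [country for _, country in entities if country != "US"]
num_entities = len(entities)        # indicator (1): all legal entities
num_foreign = len(foreign)          # indicator (2): foreign legal entities
num_countries = len(set(foreign))   # indicator (3): countries with foreign entities

print(num_entities, num_foreign, num_countries)  # 5, 3, 3
```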
A key limitation of our indicators is that they may not capture all relevant aspects of the complexity of a SIFI, such as complexity that could result from being a subsidiary of a foreign company. We observed the following changes in our complexity indicators over the period from the second quarter of 2010 to the second quarter of 2016 (see table 11): Median numbers of legal entities for bank SIFIs decreased by 37, or about 28 percent. Median numbers of legal entities for large bank SIFIs decreased by 1,016, or about 37 percent, and median numbers of legal entities for other bank SIFIs decreased by 26, or about 24 percent. Median numbers of foreign legal entities for bank SIFIs decreased by 1, or about 11 percent. Median numbers of foreign legal entities for large bank SIFIs increased by 131, or about 20 percent, and median numbers of foreign legal entities for other bank SIFIs decreased by 2, or about 33 percent. Median numbers of countries in which foreign legal entities are located for bank SIFIs decreased by 1, or about 20 percent. Median numbers of countries in which foreign legal entities are located for large bank SIFIs remained about the same (increased by 1, or about 2 percent), and median numbers of countries in which foreign legal entities are located for other bank SIFIs decreased by 1, or about 25 percent. Leverage generally captures the relationship between an institution's exposure to risk and the capital that can be used to absorb losses from that exposure (resilience). Institutions with more capital to absorb losses are less likely to fail, all else being equal. We track two indicators of leverage: (1) a bank SIFI's tangible common equity as a percentage of total assets and (2) a bank SIFI's total bank holding company equity as a percentage of total assets. Tangible common equity is calculated by subtracting the sum of intangible assets and perpetual preferred stock (net of related Treasury stock) from the company's equity capital. A limitation of both indicators is that they may not fully reflect an institution's exposure to risk because total assets do not reflect an institution's risk exposure from off-balance-sheet activities and generally treat all assets as equally risky. We observed the following changes in our leverage indicators over the period from the third quarter of 2010 to the second quarter of 2016 (see table 12): Median tangible common equity as a percentage of assets for bank SIFIs increased by about 34 percent. Median tangible common equity as a percentage of assets for large bank SIFIs increased by about 23 percent, and median tangible common equity as a percentage of assets for other bank SIFIs increased by about 32 percent. Median total equity as a percentage of assets for bank SIFIs increased by about 15 percent. Median total equity as a percentage of assets for large bank SIFIs increased by about 27 percent, and median total equity as a percentage of assets for other bank SIFIs increased by about 11 percent. Liquidity represents the ability to fund assets and meet obligations as they become due, and liquidity risk is the risk of not being able to obtain funds at a reasonable price within a reasonable time period to meet obligations as they become due. Institutions with more liquidity (and less liquidity risk) are less likely to fail, all else being equal (resilience). We developed two indicators of liquidity: (1) short-term liabilities as a percentage of total liabilities and (2) liquid assets as a percentage of short-term liabilities.
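Before turning to the liquidity measures, the two leverage ratios just described can be sketched as follows; the FR Y-9C-style figures are hypothetical.

```python
def leverage_indicators(equity_capital: float, intangible_assets: float,
                        perpetual_preferred_net: float, total_assets: float):
    """Compute the two leverage ratios described above, in percent."""
    tangible_common_equity = equity_capital - (intangible_assets + perpetual_preferred_net)
    return (tangible_common_equity / total_assets * 100.0,  # (1) TCE / total assets
            equity_capital / total_assets * 100.0)          # (2) total equity / total assets

# Hypothetical balance sheet figures, in billions of dollars:
tce_ratio, equity_ratio = leverage_indicators(
    equity_capital=24.0, intangible_assets=4.0,
    perpetual_preferred_net=2.0, total_assets=240.0)
print(f"{tce_ratio:.1f}% tangible common equity, {equity_ratio:.1f}% total equity")
# 7.5% tangible common equity, 10.0% total equity
```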
Short-term liabilities reflect an institution's potential need for liquidity in the immediate future. We measure short-term liabilities as the sum of federal funds purchased and repurchase agreements, trading liabilities (less derivatives with negative fair value), other borrowed funds, deposits held in foreign offices, and jumbo time deposits (deposits of $100,000 or more) held in domestic offices. Liquid assets are assets that can be sold easily without affecting their price and, thus, can be converted easily to cash to cover debts that come due. Accordingly, liquid assets as a percentage of an institution's short-term liabilities are a measure of an institution's capacity to meet potential upcoming obligations. We measure liquid assets as the sum of cash and balances due from depository institutions, securities (less pledged securities), federal funds sold and reverse repurchases, and trading assets. A limitation of both indicators is that they do not include off-balance-sheet liabilities, such as callable derivatives or other potential derivatives-related obligations. The second indicator also does not include off-balance-sheet liquid assets, such as short-term income from derivative contracts. Because these limitations affect both the numerator and the denominator of our indicators, we cannot determine whether the exclusion of off-balance-sheet items results in an under- or overstatement of an institution's liquidity need and access. We observed the following changes in our liquidity indicators over the period from the third quarter of 2010 to the second quarter of 2016 (see table 13): Median short-term liabilities as a percentage of total liabilities for bank SIFIs decreased by about 12 percent. Median short-term liabilities as a percentage of total liabilities for large bank SIFIs decreased by about 26 percent, and median short-term liabilities as a percentage of total liabilities for other bank SIFIs decreased by about 20 percent. Median liquid assets as a percentage of short-term liabilities for bank SIFIs increased by about 66 percent. Median liquid assets as a percentage of short-term liabilities for large bank SIFIs increased by about 54 percent, and median liquid assets as a percentage of short-term liabilities for other bank SIFIs increased by about 61 percent. The following tables list select rules that implement sections of Title VII of the Dodd-Frank Act related to central clearing requirements for swaps and security-based swaps, and margin and capital requirements for swap entities, as of July 22, 2016. In addition to the contact named above, Stefanie Jonkman (Assistant Director), Janet Fong (Analyst-in-Charge), Farrah Graham, Donald Hirasuna, Courtney LaFountain, John McGrail, Marc Molino, Jennifer Schwartz, and Shannon Smith made key contributions to this report.
The Dodd-Frank Act requires or authorizes various federal agencies to issue hundreds of rules to implement reforms intended to strengthen the financial services industry. Congress included a provision in statute for GAO to study these financial services regulations annually. This sixth annual report discusses (1) the regulatory analyses federal agencies conducted for the 30 rules issued pursuant to the Dodd-Frank Act that became effective between July 2015 and July 2016, (2) coordination among the regulators on these rules, and (3) indicators of the impact of select Dodd-Frank Act rules on financial market stability. GAO assessed the extent to which regulators followed OMB's cost-benefit guidance for five major rules selected because they covered a variety of agencies and topics. GAO also examined coordination for three rules selected because they involved extensive interagency coordination and covered many regulators required to coordinate under the Dodd-Frank Act. GAO also reviewed documentation and interviewed regulatory staff. Federal financial regulators reported conducting the required regulatory analyses for rules issued pursuant to the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) as part of the rulemaking process. For example, of the 30 rules GAO reviewed, which became effective between July 2015 and July 2016, the regulators analyzed the paperwork burden imposed for 12 rules for which they determined this analysis was required. For the remaining 18 rules, they determined that this analysis was not required or applicable. For instance, in some cases they determined that no new collection of information was required. As independent regulatory agencies, the federal financial regulators are not subject to executive orders requiring federal agencies to conduct detailed cost-benefit analysis in accordance with Office of Management and Budget (OMB) guidance, but regulators told GAO that they generally follow this guidance in spirit. GAO reviewed five of the nine rules considered major—that is, rules likely to result in an annual impact on the economy of $100 million or more, among other things—and found that regulators addressed most key elements of OMB guidance in their regulatory analyses. For instance, these agencies generally quantified some costs related to these rules. However, they did not quantify benefits in each rule and noted data and other limitations to doing so. In 2011, GAO recommended that the regulators more fully incorporate OMB's regulatory guidance into their written rulemaking policies, but not all regulators have implemented this recommendation. Regulators reported coordinating, as required or voluntarily, on 19 of the 30 rules GAO reviewed. The Dodd-Frank Act and the rulemaking process did not require regulators to coordinate on the remaining 11 rules. GAO focused in particular on coordination efforts involving three rulemakings: the Commodity Futures Trading Commission's and the prudential regulators' rules on margin requirements for uncleared over-the-counter swaps, and the Bureau of Consumer Financial Protection's (CFPB) rule on integrated mortgage disclosures. For the swaps rules, regulators coordinated domestically and internationally and, according to regulators, they largely harmonized their respective rules. For the integrated mortgage disclosure rule, CFPB followed its internal guidance for coordinating with relevant agencies throughout the rulemaking process.
The full impact of the Dodd-Frank Act remains uncertain because some of its rules have not been finalized and insufficient time has passed to evaluate others. As of December 2016, regulators had issued final rules for about 75 percent of the 236 provisions of the act that GAO is monitoring. Using recently released data, GAO updated indicators from its prior reports, including those that monitor systemic risk characteristics of large U.S. bank holding companies. These indicators track changes in characteristics of these companies, such as size, interconnectedness, leverage, and liquidity, since the passage of the act to examine whether the changes have been consistent with the goals of the act. While changes in the indicators are not necessarily evidence of the impacts of the act's provisions, trends in the indicators suggested that large bank holding companies have become larger but less vulnerable to financial distress. GAO makes no new recommendations but continues to monitor the implementation of five prior recommendations intended to improve, among other things, financial regulators' cost-benefit analysis, interagency coordination, and impact analysis associated with Dodd-Frank regulations. Not all regulators have implemented these recommendations.
The telephone remains an essential communication tool for business, government, and the general public. The public switched telephone network (PSTN), an interconnected network of telephone exchanges over which telephone calls travel from person to person, is the backbone of the communications architecture that enables the transmission of voice and data communications. In general terms, the PSTN is the public communications system that includes the networks of local and long-distance telephone carriers, as well as cellular networks and satellite systems. To connect one wireline (also known as landline) telephone to another, the telephone call is routed through various switches at telephone exchanges that are operated by local and long-distance telephone carriers. When a caller dials another party's number, the call is routed through a telephone company's facility, known as the central office, and carried over copper wires or fiber-optic cables to the called party's telephone. Over time, the PSTN has evolved from an analog system to one that is almost entirely digital and able to support voice and data transmissions made from wireline and wireless devices. Wireless networks, which include cellular and satellite-based systems, among others, are an important and growing element of the communications infrastructure. Cellular and satellite-based systems and networks provide an alternative to wireline networks because they are potentially accessible from any point on the globe without the cost of installing a wire or cable. Rather than relying on wired connections, wireless devices (such as cellular telephones) are essentially sophisticated radio devices that send and receive radio signals. These devices connect to a wireless network—which may also interact with the PSTN, depending on the type of connection—that enables the wireless telephone to connect to another wireless or wireline telephone. Wireless networks operate on a grid that divides large geographical areas (such as cities) into smaller cells that can range from a few city blocks to several miles. Each cell contains or is adjacent to a base station equipped with one or more antennas to receive and send radio signals to wireless devices within its coverage area, which can range from less than a mile to 20 miles from the base station. When a caller turns on a wireless device, the device searches for a signal on an available channel from a nearby base station to confirm that service is available. At that time, the base station assigns a radio frequency (also known as a radio channel) to the wireless device from among the group of frequencies that the base station controls. Each base station is wirelessly linked to a mobile switching office, as well as a local wireline telephone network. The mobile switching office directs calls to the desired locations, whether to another wireless device or a traditional wireline telephone. If a wireless caller is connecting with another wireless telephone, the call may go through the wireline network to the recipient's wireless carrier, or it may be routed wholly within the wireless network to the base station that is nearest the called party. On the other hand, when the wireless caller is connecting to a wireline phone, the call travels to the nearest base station and is switched by the caller's wireless carrier to a wireline telephone network. The call then becomes like any other phone call and is directed over the PSTN to the destination number.
Because both voice and data transmissions have become common functions in daily life, an effective communications infrastructure that includes voice and data networks is essential to the nation's ability to maintain communications that enable public health and safety during a natural disaster, such as a hurricane, or a man-made disaster, such as a terrorist attack. Over the years, voice and data networks have evolved separately, with voice networks relying on circuit-switching methods while data networks largely use packet-switching techniques. Thus, a user requiring voice, data, and videoconferencing services may have to use three separate networks—a voice network, a data network, and a videoconferencing network. The telecommunications industry has begun to address the limitations of legacy communications infrastructure (such as the PSTN) in providing integrated voice, data, and video services. Technological advances in these networks have led to a convergence of the previously separate networks used to transmit voice and data communications. These new converged networks—commonly referred to as next-generation networks—are capable of transmitting both voice and data on a single network and eventually are to be the primary means for voice and data transmissions. Converged voice and data networks use technology that is based on packet switching, which involves breaking a message (such as an ongoing videoconference, images, or a voice conversation) into packets, or small chunks of data. Using each packet's destination address, computer systems called routers determine the optimal path for the packets to reach their destination, where they are recombined to form the original message. As a result, packets can be transmitted over multiple routes rather than via a predetermined circuit, which, in turn, can help them avoid areas that may be congested or damaged, among other things. For example, information sent over the Internet is packet-switched, and its transmission is defined by the Internet protocol (IP). Wireline and wireless carriers have begun transforming their networks to route voice traffic this way, an approach called Voice over Internet Protocol (VoIP), rather than by circuit-switched methods. The adoption of VoIP and other technological advances is changing the way in which people communicate, and these advances are likely to become central to the future of NS/EP communications. Figure 1 compares how information is transmitted via packet switching and circuit switching. Industry analysts have said that although the transition to converged networks is well underway, they expect the process to take many years. Furthermore, NCS projects that half of the existing circuit-switched network will be transitioned to a packet-based network by 2015, with the remainder reaching full transition by 2025. Despite the evolution in telecommunications technology, congestion in the wireline and wireless telephone networks still occurs. Damage or destruction of infrastructure, or extreme demand for service, can result in outages or congestion on the wireline and wireless networks, which can impede or obstruct successful communications. During periods of congestion, the caller may encounter signs that the network is congested, such as (1) a fast busy signal or (2) a prerecorded message alerting the caller that all circuits are busy.
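The packet-switching approach described above can be illustrated with a toy sketch that breaks a message into sequence-numbered packets, simulates out-of-order arrival, and reassembles the original; the message and packet size are arbitrary choices for illustration.

```python
import random

def packetize(message: str, size: int):
    """Break a message into (sequence number, chunk) packets of a fixed size."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

packets = packetize("All circuits are busy; please try again.", size=8)
random.shuffle(packets)  # packets may take different routes and arrive out of order
reassembled = "".join(chunk for _, chunk in sorted(packets))  # reorder by sequence number
assert reassembled == "All circuits are busy; please try again."
```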
Given the importance of telecommunications to coordinating response and recovery efforts, it is essential that NS/EP officials successfully complete their calls even when there is damaged infrastructure or network congestion. For example, nationwide telecommunications congestion and failures during the September 11, 2001, attacks and Hurricane Katrina in 2005 were due, in part, to both damaged infrastructure and high call volume. Additionally, high call volume that has the potential to create network congestion can occur independent of emergencies. For example, Mother's Day has historically generated the highest volume of telephone calls of any day of the year. This increased call volume can create network congestion and cause call delay or disruption during normal operations; this congestion would also reduce the likelihood that NS/EP personnel would be able to successfully place calls in the event of an emergency during this period. A similar issue exists for text messaging, wherein high volumes of text transmissions can create network congestion. For instance, on New Year's Eve, a spike in the number of text messages transmitted in the minutes immediately preceding and following midnight could overload cellular networks. The effects of this congestion could be severe for emergency responders in the event they needed to coordinate planning for or response to an emergency at that time. As part of the creation of DHS under the Homeland Security Act of 2002, NCS was transferred to DHS from the Department of Defense. Within DHS, NCS is organized as part of the Office of Cyber Security and Communications and has a fiscal year 2009 budget of $141 million. While the Secretary of Homeland Security has overall responsibility for the broader NCS organization, the duties are delegated to the NCS Manager, who has primary responsibility for the day-to-day activities of NCS, including coordinating the planning and provisioning of communications services that support NS/EP needs. Central to its functions are the partnerships that NCS has established with federal, state, and local government entities, and with the service providers and equipment vendors that provide wireline and wireless communications services to support NS/EP communications. For example, NCS has long-standing relationships with industry groups such as the National Security Telecommunications Advisory Committee (NSTAC)—a presidentially appointed committee of industry leaders—that help keep it abreast of changes in the commercial telecommunications marketplace. The committee provides industry-based analyses and recommendations to the President and the executive branch regarding telecommunications policy and proposals for enhancing national security and emergency preparedness. Since NCS joined DHS when the department became operational in March 2003, federal policies have provided that NCS's responsibilities include, among other things, serving as the lead coordinating agency for communications issues (defined as emergency support function no. 2, or ESF-2) under the National Response Framework. As part of this responsibility, when significant impact to the communications infrastructure occurs or is expected, NCS is to serve as one of the primary agencies to (1) support the restoration of the communications infrastructure and (2) coordinate the deployment of federal communications support to response efforts.
As part of its ESF-2 role, NCS conducts and/or supports training and exercises intended to test and improve the response and recovery capabilities needed in the event of an emergency or disaster. For example, NCS has supported exercises that model emergency scenarios involving potential and actual impacts to the communications infrastructure. In addition to its ESF-2 responsibilities, NCS serves as the Sector-Specific Agency that leads the federal government's efforts to protect critical communications infrastructure. In this regard, NCS works with industry, which owns and operates the vast majority of communications infrastructure, to develop strategies to protect against and mitigate the effects of natural disasters or man-made attacks against critical communications infrastructure. As part of this function, NCS is working with industry to develop a risk assessment methodology for use in assessing the communications sector's overall exposure, including the threats, vulnerabilities, and consequences of an incident such as a natural disaster or man-made attack. Within NCS, the National Coordinating Center for Telecommunications (NCC), which serves as the operational component, is an industry-government collaborative body that coordinates the restoration and provisioning of NS/EP communications services during crises or emergencies. The NCC consists of officials from 24 government agencies and 49 companies, including eight industry members that are co-located at the center (such as AT&T, Sprint, and Verizon) as well as nonresident members that comprise the telecommunications sector—wireless companies, cable companies, Internet service providers, satellite providers, and communications equipment manufacturers and suppliers, among others. Since January 2000, the center has also functioned as the Telecommunications Information Sharing and Analysis Center to allow information sharing between representatives of the telecommunications companies. During a disruption to telecommunications services, NCS, through the NCC, coordinates with both resident and nonresident members with the goal of restoring service as soon as possible. According to NCS, this partnership allows both industry and government to work in close proximity, helping to ensure that NCS successfully executes its mission. For example, during the 2008 hurricane season, the NCC worked with its government and industry partners to identify communications assets and infrastructure in the impacted areas and develop pre- and post-landfall strategies and response activities to help ensure the availability of communications. To overcome network congestion, NCS has implemented priority calling programs that provide NS/EP personnel at all levels of government, as well as in the private and nonprofit sectors, with communications services during national security incidents or emergencies that can overwhelm the telecommunications network. The two primary programs NCS provides to deliver priority calling are the Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS). NCS has undertaken a number of outreach efforts to help increase participation in these priority calling programs and has designed controls to help ensure that these programs are used only by authorized personnel and for authorized purposes.
NCS has implemented two main programs intended to overcome busy networks during periods of congestion or network failure due to abnormally high usage or infrastructure damage: the GETS program provides wireline priority calling, and WPS provides wireless priority calling for authorized NS/EP officials. According to NCS, it established GETS in conjunction with the nation's telecommunications industry to meet White House requirements for a nationwide voice and limited data service intended for authorized personnel engaged in NS/EP missions. GETS is designed to provide priority treatment in the wireline portions of the PSTN during an emergency or crisis situation when the PSTN is congested and the probability of completing a call by normal means has been significantly decreased. For example, during the 1995 Oklahoma City bombing—one of the earliest uses of GETS in an emergency event—call volume roughly three times the usual level overloaded the telephone network in the Oklahoma City area, according to NCS. During this emergency event, officials from the federal government and the private sector were able to successfully complete about 300 calls using the GETS service. According to a senior official from the Florida Division of Emergency Management, GETS was also used in Florida during Hurricane Katrina. Prior to hitting the Gulf Coast, the hurricane made landfall in South Florida, damaging the communications infrastructure and resulting in network congestion that prevented Florida emergency management officials from completing calls. According to this official, GETS allowed Florida emergency management officials to circumvent the congested lines and successfully complete calls. To place a GETS call, subscribers follow a three-step process similar to that of using a traditional calling card. First, subscribers must dial the universal access number by using equipment such as a standard desk phone, payphone, secure telephone, cellular phone, VoIP telephone, or facsimile. Next, a tone prompts the subscriber to enter the GETS personal identification number (PIN) found on the calling card distributed to the subscriber. (Figure 2 shows the GETS calling card that is provided to each authorized NS/EP subscriber.) Finally, the subscriber is prompted to enter a destination telephone number. Once the calling party's identity is authenticated (via the PIN), the call receives priority treatment that increases the probability of call completion in damaged or congested networks. GETS is designed to achieve a 90 percent probability that calls made via the PSTN will be successfully completed—that is, will establish a connection with the intended called party—during periods of network congestion or outage. The service achieves a high probability of call completion through a combination of features such as rerouting GETS calls around network blockage areas, routing calls to a second or third carrier if the first carrier's network is congested, and queuing pending GETS calls for up to 30 seconds, among other things. Subscribers can place local, long-distance, and international calls; however, it is not possible to use GETS to dial a toll-free destination number. When using GETS, subscribers are billed by the wireline carrier at a rate of $0.07 to $0.10 per minute for calls within the United States and its territories. As of April 2009, the program had grown to more than 227,000 subscribers, according to NCS.
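The multi-carrier routing feature suggests a simple way to see how fallback raises the completion probability. The sketch below assumes, purely for illustration, that each carrier attempt completes independently with the same probability; these assumptions and the 50 percent figure are ours, not NCS's.

```python
def completion_probability(per_carrier_p: float, carriers: int = 3) -> float:
    """Probability that at least one of several independent carrier attempts
    succeeds -- a simplified model of the fallback routing described above."""
    return 1.0 - (1.0 - per_carrier_p) ** carriers

# If each carrier alone completed only half of all calls under heavy
# congestion (an illustrative assumption), trying up to three carriers gives:
print(f"{completion_probability(0.5):.0%}")  # about 88%
```

Under this toy model, even severely degraded individual networks combine to approach the program's 90 percent design goal.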
According to NCS, the concept of a wireless priority capability first emerged in the mid-1990s, as the number of wireless telephone subscribers increased significantly; however, it was in the wake of the events of Tuesday, September 11, 2001, that the Executive Office of the President, through the National Security Council, directed NCS to implement such a capability. According to NCS, in the aftermath of the terrorist attacks, wireless carriers experienced significant difficulties trying to cope with the unprecedented call volume. The reported number of phone calls made in the Washington, D.C., New Jersey, and New York City areas between 9:00 a.m. and 12:00 p.m. was 2 to 10 times the number on an average Tuesday. The resulting effort became WPS, a subscription-based service designed to help increase the probability of call completion for NS/EP personnel who rely on wireless devices—typically, a cell phone—while performing duties related to emergency response and recovery. To that end, WPS provides nationwide wireless priority calling capabilities, from call initiation until a connection is established with the called party, to NS/EP personnel during natural or man-made disasters or emergencies that result in network congestion or outages in the nation's wireless networks. Like the average U.S. consumer, NS/EP personnel have great flexibility in choosing a wireless carrier for wireless communications services. To help ensure that WPS capabilities are accessible through the majority of wireless services that could be used by NS/EP personnel, NCS has taken steps to ensure that the nationwide and regional wireless carriers that provide services to the greatest number of wireless customers upgrade their networks to support WPS functionalities. As a result, authorized WPS subscribers are able to access WPS in nearly all the major wireless markets in the continental United States and its territories. Currently, WPS is supported by all the nationwide wireless carriers (AT&T, Sprint Nextel, T-Mobile, and Verizon Wireless). Additionally, regional carriers (such as Cellcom and Cellular South) that can help to provide WPS coverage in geographically remote or sparsely populated areas are at varying stages of updating their networks to support WPS. To initiate a WPS call, authorized subscribers must dial *272 plus the destination number from their WPS-enabled cell phone. If all radio channels in the caller's area are busy, the call will be placed in a queue for up to 28 seconds for access to the next available local radio channel. WPS subscribers receive additional priority based on their office or position to ensure that communications are first available for senior leadership (see app. V for a description of how this priority is determined). While WPS provides priority access to the next available radio channel, it does not guarantee call completion, as a WPS call may encounter further congestion while being routed through the wireline or wireless portions of the PSTN. Therefore, according to NCS, WPS is most effective when used in conjunction with GETS, because GETS is also designed to help activate priority calling features in the wireless network in addition to the wireline network. Thus, using a GETS calling card after activating WPS can help to ensure a higher probability of call completion for calls placed from a cellular telephone to another cellular or wireline telephone number.
As with GETS, WPS subscribers incur expenses as part of their subscription; however, the WPS fee structure is more expensive. In addition to wireless calling plan fees, WPS subscribers must pay (1) a one-time activation fee of up to $10.00, (2) a monthly service fee of up to $4.50, and (3) a $0.75-per-minute fee when WPS is invoked by dialing the WPS code, *272. These fees help the wireless carriers recoup the costs associated with providing NS/EP calling features in their respective wireless networks, according to NCS. As of April 2009, there were approximately 93,000 WPS subscribers, according to NCS. NCS priority calling programs are primarily intended for officials with responsibilities for coordinating functions critical to the planning of, management of, and response to national security and emergency situations—particularly during the first 24 to 72 hours following an emergency. According to NCS, participants in its priority programs come from federal, state, local, or tribal government and from private industry or nonprofit organizations. To subscribe to GETS and WPS, applicants must prove that their organization is engaged in activities essential to NS/EP functions, including (1) national security leadership; (2) national security posture and U.S. population attack warning; (3) public health, safety, and maintenance of law and order; (4) public welfare and maintenance of national economic posture; and (5) disaster recovery. Furthermore, these individuals must demonstrate that they perform a function that is critical to the planning, management, and response to national security and emergency situations. At the federal government level, personnel who qualify to subscribe to the GETS and WPS services range from staff in the Executive Office of the President to members of Congress and officials in federal departments and agencies. Nonfederal representatives such as state governors, mayors, and police and fire chiefs, as well as personnel engaged in the restoration of services such as telecommunications and electricity, are among those who can qualify to use the priority calling programs. Appendix V provides further details about the types of positions and functions that generally qualify for access to the GETS and WPS programs. According to NCS, the number of personnel in the public and private sectors who perform functions critical to national security and emergency preparedness ranges from about 2 million to 10 million people. In planning for future growth in its programs, NCS estimates that the communications network can successfully support up to 2 million priority subscribers. To that end, NCS has plans underway to achieve up to 2 million GETS subscribers. NCS officials have not yet finalized this goal or a goal for WPS subscribers but indicated that the WPS goal may be about 225,000 subscribers. As of April 2009, NCS had 227,614 active subscribers in the GETS program. For WPS, there were 92,820 active subscribers. As table 1 shows, the federal government accounts for about 46 percent of active GETS subscribers and 72 percent of active WPS subscribers. NCS has undertaken several outreach efforts to help increase awareness of and participation in its priority calling programs across essential NS/EP personnel. These efforts include, for example, attending emergency management conferences, writing articles for emergency management and telecommunications publications, and deploying outreach coordinators to promote NCS's priority calling programs.
For example, since 1995, NCS has participated in various conferences hosted by the National Emergency Management Association (NEMA) and the International Association of Emergency Managers to facilitate its outreach and marketing efforts. At these conferences, NCS operates display booths, distributes marketing materials, and may conduct presentations to help increase awareness about the benefits of its priority calling programs. NCS officials and/or contract personnel attend approximately 30 conferences annually that target federal, state, local, and industry NS/EP members. NCS officials told us that NCS has enlisted all but 1 of the 50 state emergency operations centers to participate in GETS and/or WPS because of initial contacts made at events hosted by NEMA. Similarly, to expand its outreach to other essential emergency personnel who also rely on wireline and wireless communications services during emergencies, such as those from water, gas, and electric companies, NCS has attended conferences and other events that attract this target audience. In addition to attending conferences to reach general NS/EP personnel, NCS has implemented targeted outreach efforts to groups such as governors and state homeland security advisors; critical infrastructure facilities, such as nuclear power plant operations centers and national and regional airport traffic control centers; and federal officials who serve as the designated continuity coordinators within their respective agencies. NCS officials report that they have generally made progress in enlisting these groups in its priority calling programs. For example, in 2008 NCS enlisted 56 of 71 federal continuity coordinators in the GETS program. NCS also worked with the Nuclear Regulatory Commission and the Federal Aviation Administration to ensure that GETS cards are available at all nuclear facilities and at all national and regional airports, respectively. In 2005, NCS began deploying regional outreach coordinators to promote NCS's priority calling programs to emergency management officials and other key decision makers (such as governors) that coordinate emergency response and recovery and continuity of government in state and local government. NCS credits the addition of the regional outreach coordinators as a key reason for significant growth in enrollment rates across all NS/EP categories since 2005. Despite the outreach efforts NCS has undertaken to increase participation in its priority calling programs, WPS fees are a barrier to participation in the program, according to NCS. For example, as of October 2008, while the majority of federal continuity coordinators were enrolled in the GETS program, only 44 percent (31 of 71) were WPS subscribers. Additionally, while 24 of 56 state homeland security advisors subscribe to GETS, only 10 subscribe to WPS; and only 8 governors subscribe to WPS, while 43 subscribe to GETS. The subscriber levels for the GETS program are more than twice those of the WPS program, as shown in table 2. For each WPS-activated device, subscribers pay an initial activation fee of $10, a monthly fee of $4.50, and a usage fee of $0.75 per minute. In 2006, NCS commissioned a study to examine barriers to WPS participation, among other things. According to NCS, the survey found that program cost was the single largest impediment to participating in WPS.
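To illustrate how these fees accumulate, the following sketch computes a first-year cost using the fee schedule above; the device count and minutes of use are hypothetical, and wireless calling-plan charges are excluded.

```python
def annual_wps_cost(devices: int, wps_minutes_per_device: float,
                    activation: float = 10.00, monthly: float = 4.50,
                    per_minute: float = 0.75) -> float:
    """First-year WPS cost for a fleet of devices under the fee schedule above."""
    per_device = activation + 12 * monthly + per_minute * wps_minutes_per_device
    return devices * per_device

# A hypothetical agency with 100 WPS-enabled devices, each invoking WPS
# for 30 minutes over the year:
print(f"${annual_wps_cost(100, 30):,.2f}")  # $8,650.00
```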
Similarly, our work showed that WPS fees can be a burden, particularly for NS/EP members at the state and local government level, due to limited financial resources. At least one-third of the 37 state and local government entities that we spoke with—including some that subscribe to WPS—stated that WPS fees affected the extent to which they participate in the program. For example, an official from the Oregon Emergency Management Division stated that his organization's participation in WPS is relatively low because the overall WPS costs can become very expensive when calculated across all subscribers in a particular agency. Another official, from the Ohio Emergency Management Division, stated that his organization does not participate in the program due to budget constraints, even though it considers WPS to be more beneficial than GETS because the wireless component is more widely used among staff performing emergency management functions. In light of concerns about WPS subscription costs, NCS has been exploring ways to minimize the burden of program fees for its intended subscribers. For example, NCS examined the feasibility of the federal government subsidizing all or part of the WPS fees; however, DHS and OMB determined that this might not be feasible because of questions about the federal government's ability to sustain these costs in the future. Further, NCS has had discussions with the wireless carriers to explore ways to eliminate or defray the costs; however, the wireless carriers maintain that the fees are necessary to operate and maintain WPS capabilities in their networks in order to comply with NCS requirements. Nevertheless, some carriers have made arrangements with WPS subscribers to provide WPS as part of a bundled telecommunications service package, which, according to NCS, can defray the costs. NCS officials have stated that they plan to continue to explore ways to address the WPS cost issue, as they believe doing so can help increase participation in the WPS program. Federal internal control standards state that documented policies and procedures limiting access to agency resources and records to authorized individuals are essential to accountability and to safeguarding assets, and NCS has developed and implemented policies and procedures to help ensure that access to its programs is limited to authorized subscribers. NCS has standard operating procedures that document how potential subscribers can gain access to its priority calling programs. For a GETS card and/or WPS service request to be approved, the NCS contractor must be able to confirm that the request is from an organization that performs any of the five NS/EP functions mentioned earlier in this report. If the organization's NS/EP status is unclear (as may be the case for chemical suppliers, radio and TV stations, or housing shelters), the organization must obtain sponsorship from NCS, from 1 of the 24 NCS member agencies, or through the emergency management agency in the state or locality in which it operates. Once approved, the organization must identify a primary point of contact (POC) and an alternate POC, if available. Within each organization, the POC is the primary liaison between NCS and individual GETS and WPS subscribers.
The POC is responsible for (1) determining who should have access to the GETS and WPS services within their organization; (2) processing all GETS and WPS service requests; (3) notifying NCS of changes to subscriber account data, such as changes in name, telephone number, or eligibility status; (4) reviewing and certifying monthly subscriber calling data; (5) familiarizing subscribers with GETS and WPS functionalities; and (6) verifying subscriber eligibility annually. As evidenced by these responsibilities, NCS relies on the POCs to manage almost all aspects of subscriber accounts. However, through an annual verification process, NCS seeks to ensure that POCs provide a current account of subscribers who meet the eligibility requirements. According to NCS officials, NCS will make multiple attempts over a 90-day period to ensure that the POC responds to its request to validate subscriber information, and failure to respond can result in cancellation of the subscribers' accounts. NCS officials told us that they designed these verification procedures to help ensure that only eligible subscribers have access to NCS's priority programs. From our review of selected GETS and WPS records as a limited check on whether current positions meet eligibility criteria, we found that the GETS and/or WPS accounts for former members and delegates of the U.S. House of Representatives and the U.S. Senate in the 109th Congress were terminated in accordance with NCS's procedures. However, when we reviewed accounts for 15 immediate past heads of federal departments and agencies as of August 2008, we found 4 instances in which these officials' GETS and/or WPS accounts had not been terminated. We brought this to NCS's attention, and officials told us that these accounts were terminated effective July 2009. Further, NCS plans to institute new processes that are to include more frequent monitoring of GETS and WPS accounts, coinciding with administration changes, to ensure that subscribers' account status is appropriately updated. In addition to verifying whether a subscriber is authorized to enroll in NCS's programs, telephone carriers, as well as NCS and its contractors, have applied fraud detection mechanisms intended to protect against fraudulent calls in their networks, including mechanisms unique to the GETS and WPS services. For example, carriers have fraud detection for general telephone use that also detects fraud for the GETS and WPS services. These detection mechanisms include detecting a single PIN being used simultaneously from multiple originating phone numbers and calls of unusually long duration, among other things. NCS and its contractor said that they have also instituted procedures to determine the legitimacy of calls and to take corrective action, which may include disabling the GETS and WPS account in question. According to NCS, it has rarely found actual cases of fraud and abuse. For example, although there were 45 reported cases of potentially fraudulent calls in 2008, through further investigation NCS determined that the calls were legitimate; the reports typically resulted from calls placed by authorized subscribers conducting test calls or participating in preparedness exercises. Even if fraudulent calls were made using the GETS and WPS services, the implications would likely be minimal due to two factors. First, the subscriber levels for GETS and WPS, which currently stand at more than 227,000 and about 93,000, respectively, are well below the capacity of the system.
Even if fraudulent calls were made using GETS and WPS services, the implications would likely be minimal for two reasons. First, the subscriber levels for GETS and WPS—currently more than 227,000 and about 93,000, respectively—are well below the capacity of the system; according to NCS, GETS was designed to support up to 2 million subscribers. Second, the potential financial implications for the federal government would be nominal because NCS does not bear GETS and WPS charges for nonfederal subscribers. State and local governments, as well as private and nonprofit organizations, bear all of the costs related to their usage of the GETS and WPS programs. In general, NCS may cover GETS charges for federal departments and agencies up to an annual budget threshold; however, federal agencies may be responsible for these costs in the event of fraudulent or abusive calling activity. Federal and nonfederal WPS subscribers alike are responsible for all associated costs.
The delivery of NCS's priority calling services faces challenges related to the inherent vulnerabilities of the communication infrastructure, such as downed phone lines, damaged cell towers, and broken circuits and switches. Therefore, NCS seeks to build redundancy into the communication capabilities and services it provides and has explored satellite technology to overcome such challenges. However, NCS's methods for implementing and evaluating its related satellite pilot were unclear, and NCS subsequently terminated the pilot. In addition, NCS faces the challenge of keeping pace with the rapid evolution of telecommunications technology, and it is working with the telecommunications industry to ensure that NS/EP communications requirements are integrated into next-generation communications networks. However, NCS's planning efforts to update its programs as technology evolves could be strengthened.
In December 2007, NCS launched a satellite pilot program to provide an alternative means of supporting NS/EP communications that could circumvent network congestion or outages in the PSTN. According to NCS, because GETS and WPS leverage PSTN-based infrastructure to enable communications for NS/EP personnel, these programs can be limited in their ability to provide services when damage renders the PSTN infrastructure inoperable, as occurred in certain regions affected by Hurricane Katrina. In February 2004, the National Security Telecommunications Advisory Committee (NSTAC) issued a report to the Executive Office of the President recommending that NCS develop a satellite capability to facilitate NS/EP communications. The communications challenges that arose during the 2005 Gulf Coast hurricanes because of flooding and loss of power, among other things, underscored the need for a communications capability that could transcend these infrastructure issues, and NCS observed that satellite networks appeared to be the least disrupted communications service during this event. To that end, 3 years after the 2005 Gulf Coast hurricanes, NCS launched the first of two phases of the satellite pilot program, which was intended to enable unclassified voice connectivity during emergencies by leveraging satellite infrastructure independent of the PSTN. As part of the pilot, according to NCS officials, NCS was to provide participants with a wall-mounted unit consisting of battery backup, surge protection, and a satellite phone. According to NCS officials, one objective of the pilot was to evaluate two voice communications capabilities via satellite technologies: push-to-talk communication functions and GETS priority calling using a satellite phone.
Push-to-talk is a radio-like function, similar to that of a walkie-talkie or three-way radio, with which a group of users can communicate back and forth with one another from their individual satellite phones at the push of a button, without having to place individual calls. NCS also planned to use the pilot to test the ability to use GETS priority calling features to reach a wireline or cellular telephone number from a satellite phone. According to NCS, calls made from a satellite phone to a cellular or wireline telephone can bypass congested or damaged areas of the PSTN, as such calls can be routed via satellite networks to a less congested area of the PSTN, thus increasing the likelihood of call completion. However, because these calls are still expected to travel through the wireline and wireless portions of the PSTN to reach their destination, they could face congestion when connecting to the PSTN. To bypass such congestion, NCS officials stated, the GETS priority calling features must be supported on the satellite networks, which they currently are not. By inserting priority calling functionality in satellite networks, GETS calls that originate from a satellite phone would have a greater likelihood of being successfully routed through the PSTN in times of network congestion. NCS officials also told us that other objectives for the pilot included determining the extent to which satellite communications meet NS/EP needs and educating NS/EP personnel about the availability of satellite communications for use in emergency situations.
Although the pilot began in December 2007 and was estimated to last 3 years and cost $1.9 million, as of May 2009 NCS could provide little documentation explaining its objectives for the pilot and how it planned to meet them. For example, while NCS officials provided briefing slides that elaborated on the pilot program and described some high-level program objectives, these slides lacked key program information, such as a methodology for evaluating pilot results to determine whether the intended pilot objectives were met and milestones for pilot implementation. Specifically, although the briefing slides noted the planned number of sites to be included in the pilot, they did not specify when site selection would be completed, when sites would begin participating in the pilot, or the data that would be collected and analyzed to evaluate pilot performance. According to NCS, the pilot was to include up to 65 participating sites comprising emergency operations centers supporting federal and state governments, and NCS officials stated they had initially identified six sites and conducted an evaluation of additional candidate sites. However, NCS officials could not provide any detailed information about what criteria or rationale were used to determine which sites to include in the pilot. For instance, while NCS officials told us they evaluated sites based on two factors—the effects of disaster scenarios and the population served by the respective location—they did not provide any documentation that outlined these details or demonstrated how these two factors would help NCS determine whether the pilot objectives were met.
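Documenting even a simple scoring rule would have made it possible to verify how the two cited factors drove site selection. The sketch below is purely illustrative of what such documentation might contain; NCS provided no such method, so every site name, weight, and score is a hypothetical placeholder, not NCS data.

```python
# Hypothetical two-factor site evaluation: disaster-scenario exposure and
# population served, the two factors NCS officials cited.
CANDIDATE_SITES = [
    # (site name, disaster-exposure score on a 0-10 scale, population served in millions)
    ("State EOC A", 9, 4.2),
    ("State EOC B", 6, 1.1),
    ("Federal EOC C", 8, 2.5),
]

DISASTER_WEIGHT = 0.6    # assumed weight on disaster-scenario exposure
POPULATION_WEIGHT = 0.4  # assumed weight on population served

def site_score(disaster_exposure, population_millions):
    """Combine the two cited factors into a single comparable score."""
    # Normalize population to a 0-10 scale, capping at 10 million served.
    population_score = min(population_millions, 10.0)
    return DISASTER_WEIGHT * disaster_exposure + POPULATION_WEIGHT * population_score

ranked = sorted(CANDIDATE_SITES, key=lambda s: site_score(s[1], s[2]), reverse=True)
for name, exposure, population in ranked:
    print(f"{name}: {site_score(exposure, population):.1f}")
```

A written rule of this kind, however simple, would have allowed NCS to show how candidate sites were ranked and whether the selection supported the pilot's objectives.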
In addition, as part of phase two of the satellite pilot, NCS officials said they intended to use lessons learned from phase one to migrate the satellite capability to another NCS technology initiative already under way; however, NCS launched the pilot program without first completing a methodology to evaluate it. In addition, NCS could not provide documentation as to how the results of the pilot would be evaluated and used to inform future program decisions, such as a broader rollout. Exacerbating the absence of program planning documents, key staff originally involved in the pilot have since left NCS, resulting in the loss of institutional knowledge about the original decisions and planning for the pilot. In April 2009, officials told us that the pilot had been placed on hold while they reassessed various aspects of it, such as by conducting a cost-benefit analysis to determine which satellite provider and equipment to use. After reassessing the pilot, NCS terminated it in May 2009, according to NCS officials. NCS officials acknowledged that the pilot program needed improved planning and metrics documentation and noted that NCS took a number of issues into consideration—including the current availability of push-to-talk capability among existing satellite service providers—in making the decision to end the pilot.
NCS is mandated by presidential directive to support the use of technological advances and evolutionary communications networks for the NS/EP communications functions assigned to NCS, including the programs it provides to maintain continuity of communications. GETS and WPS are designed to operate on the circuit-based PSTN platform, while packet-based IP networks are increasingly used and, according to representatives from the telecommunications industry, are expected to eclipse the use of circuits in telecommunications. As a result, NCS and its GETS and WPS subscribers face the risk that these services will not work within these next-generation networks. To avoid disruption or degradation of service, NCS plans to migrate existing GETS and WPS priority calling features from circuit-based networks to public telephone packet-based networks to ensure that the programs will be operable on new technologies available from wireline and wireless carriers. NCS's efforts to integrate new and existing NS/EP services into next-generation networks (NS/EP NGN) consist of two primary components: (1) priority voice communications and (2) priority data communications, which includes priority treatment for the transmission of e-mail, streaming video, text messaging, and Internet access, among other things. NCS has taken steps to assess how the evolution of technology will affect the provision of its priority calling services and to plan for these changes. In addition, because NCS's programs are largely dependent on the telecommunications industry, which owns and operates most of the communications infrastructure on which GETS and WPS operate, NCS has partnered with industry to inform and implement these changes. According to NCS, adding the priority voice communications component of NS/EP NGN is less challenging than adding data services because priority calling programs (GETS and WPS) already exist, while priority data programs do not. NCS officials estimate that at least one of the three major carriers (AT&T) will begin supporting priority communications via VoIP by 2010 and that the remaining carriers (Sprint and Verizon) will do so by 2014.
However, less is known about supporting priority data communications, and, consequently, this effort is more challenging, according to NCS officials. The challenge of developing priority data services is not a new issue; in 2006 we reported that the obstacles to offering the service include both technical and financial challenges. For example, the commonly used version of the Internet protocol (known as IPv4) does not guarantee priority delivery and has certain security limitations that may not adequately protect information from being monitored or modified while in transit via the Internet (see the illustrative sketch following this discussion). Though the next version (IPv6) has features that may help prioritize the delivery of data in the future and provide enhanced security, it is not yet widely adopted. Also, in March 2006, the NSTAC reported that while the NS/EP NGN initiative is expected to offer improvements for NS/EP communications, the security challenges are likely to have an operational impact on the transmission of NS/EP communications if not adequately addressed. Specifically, it noted that robust user authentication methods are needed to enable NS/EP personnel to share information in a secure manner. While these authentication methods are to be available through IPv6, they are not available through IPv4, which is the more widely used version. In April 2009, NCS officials told us they had not yet finalized what types of authentication methods or which IP version would support the NS/EP NGN, though they planned to request additional information from industry experts about how to address authentication issues. In our 2006 report, we noted that NCS had previously requested information from private companies on the potential for prioritizing services and found that there was no offering for a priority service, nor was there any consensus on a standard approach to prioritization. Although NCS, in conjunction with international standards bodies, completed the first set of engineering standards for priority VoIP in December 2007, as of May 2009, standards had not yet been established to support prioritized NS/EP NGN data communications. Moreover, NCS could not provide further detail as to how its planning efforts account for the different capabilities of the available technology and the associated challenges. In addition to not fully detailing how it plans to mitigate existing challenges, NCS also could not provide details about key program elements, such as the estimated total costs and a timeline for implementation of the NS/EP NGN initiative. Officials said the information was not yet finalized. Our previous work on acquisition and technology investment management has shown that such efforts are strengthened by first ensuring that (1) the acquisition approach, such as the one for NS/EP NGN, is based on available technologies that support the intended capability; (2) cost estimates are realistic; and (3) risks have been identified and analyzed, and corresponding mitigation plans have been developed. NCS officials told us they planned to develop program plans that included this information, but as of May 2009 these documents were in the early stages of development, and officials stated they were finalizing cost and schedule estimates for the initiative, which may exceed previous projections.
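To illustrate why a sender cannot simply assert end-to-end priority on today's IPv4 networks—one reason carrier-supported standards matter—the sketch below marks a datagram's Differentiated Services field with the standard "Expedited Forwarding" code point. This is a general IP mechanism offered for context only, not NCS's or any carrier's method, and the destination address and port are placeholders.

```python
import socket

# Illustrative only: request (but not guarantee) priority treatment for an
# IPv4 datagram by setting the Differentiated Services code point (DSCP) in
# the legacy TOS byte. Routers are free to ignore or rewrite this marking.
# IP_TOS is exposed on most Unix-like platforms.
EF_DSCP = 46  # "Expedited Forwarding," a standard low-latency traffic class

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The TOS byte carries the 6-bit DSCP in its upper bits.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
# Placeholder destination: a documentation-range address and arbitrary port.
sock.sendto(b"illustrative NS/EP test datagram", ("198.51.100.10", 9999))
```

Because any intermediate network may ignore or rewrite these bits, a marking of this kind illustrates the gap the report describes: without standards adopted and enforced by the carriers, IPv4 offers no assured priority delivery.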
In addition, for the last 2 years, Congress has raised questions about the absence of detailed program information, such as the costs of planned investments, for some of NCS's programs, and NCS has faced difficulties in justifying its budget requests. For example, during the appropriations process for fiscal years 2008 and 2009, the House and Senate Committees on Appropriations raised questions about the intended investments in NS/EP NGN. Because of the lack of explanation for the significant increase in funds requested for fiscal year 2008 compared with the previous year, the House and Senate Committees on Appropriations stated that NCS had not adequately justified funding for the NS/EP NGN effort. Consequently, Congress appropriated $21 million—about 60 percent less than requested—to DHS for NS/EP NGN. In addition, the House Committee on Appropriations directed DHS to brief it on the planned expenditures for NS/EP NGN in fiscal year 2008. Again, for the fiscal year 2009 budget request for NS/EP NGN, the House Committee on Appropriations raised questions about the lack of (1) information about planned investments, (2) clarity about how the initiative aligns with DHS's homeland security goals, and (3) information about the total costs to complete the initiative. As a result, Congress withheld half of the fiscal year 2009 funding for NS/EP NGN until NCS completes an expenditure plan—to be approved by the House and Senate Committees on Appropriations—that identifies the strategic context, specific goals and milestones, and planned investments. Although NCS had planned to submit the expenditure plan to the Committees on Appropriations in January 2009, it had not done so, and as of May 2009 the plan was still being reviewed internally.
Based on technological and planning challenges, NCS officials told us that in 2008 NCS began taking steps to restructure its acquisition approach to focus first on voice, with data to follow much later. However, as Congress noted in its response to NCS's fiscal year 2009 budget request, little is known about this restructuring, including key program information such as what capabilities will be delivered, total costs, and milestones. Moreover, despite requirements from Congress to articulate its strategy for the NS/EP NGN initiative, as of May 2009 NCS had not yet clearly defined program objectives and total costs, among other things. While NCS officials told us that they expect increased costs and schedule delays, they have not provided any further details or plans to mitigate these challenges, and it is unclear when important technological and program details of the restructuring will be finalized. In February 2009, NCS hired a new manager with responsibility for NS/EP NGN, who stated the need to plan for these issues and to develop corresponding program plans that outline the NS/EP NGN acquisition approach, including costs, milestones, and risk mitigation plans. GAO and commercial best practices show that incorporating cost information and strategies to mitigate program and technical challenges is essential to successfully meeting program objectives and minimizing the risk of cost overruns, schedule delays, and less-than-expected performance.
As NCS moves forward with the NS/EP NGN effort, clearly defining and documenting its technical approach to achieving program objectives within the constraints imposed by known challenges—such as the limitations of available technologies and NCS's dependence on the telecommunications industry—could help provide reasonable assurance that an executable approach is in place to meet current and future NS/EP communications needs. Furthermore, such planning could provide a sound basis for determining realistic cost and schedule estimates and provide key stakeholders, such as Congress, with the information they need to make funding decisions over time.
NCS has been developing its strategic plan since 2007, and although officials have stated that a strategic plan could help inform their efforts, it has not been finalized. In addition, while NCS has generally linked the performance of its programs to broader agency and department goals, the performance of two of NCS's core responsibilities is not measured. Finally, NCS could improve the usefulness of its performance measures by focusing program evaluation efforts on outcomes, better gauging progress, incorporating past performance, and improving the measures' clarity. NCS has undertaken strategic planning for its programs and documented some key elements of strategic planning—such as a statement of the agency's mission, strategic goals, and objectives—across a range of documents and sources. For example, the mission statement is documented in program documents such as NCS's annual reports, and NCS officials told us they have identified 21 strategic objectives that align with its three strategic goals (information on the three strategic goals and some of the related objectives is shown in table 3). However, this information has not been incorporated into a strategic plan. Furthermore, NCS officials stated that these goals and objectives are being revised, but they did not provide a date by which the revisions would be finalized. Additionally, NCS's congressional budget justification documents for fiscal years 2007 through 2009 contain planned milestones and spending for various program initiatives. In June 2008, we reported that efforts were under way to draft a strategic plan for the NCS and recommended that DHS establish milestones for completing the development and implementation of the plan. DHS agreed with our recommendation and stated that it was taking steps toward finalizing the strategic plan. However, as of April 2009, the plan, which had been in draft since mid-2007, had not yet been finalized, and NCS officials could not provide a date for when this would occur. The draft strategic plan for fiscal years 2007 to 2013 did not include some of the key elements associated with effective strategic plans. For example, while the plan included NCS's mission, strategic goals, and high-level objectives, it did not include a discussion of the resources needed to achieve those goals and objectives. Although NCS intends to enhance its priority communications offerings to keep pace with emerging technology (such as priority data in an IP environment), it has not yet finalized the total costs of doing so. In addition, the draft plan did not identify external factors that could affect achievement of strategic goals (such as management or technological challenges). Moreover, the plan did not articulate how current and planned initiatives such as the NS/EP NGN and the satellite pilot program fit into the broader agency goals.
Our past work has discussed the importance of strategic planning as the starting point for results-oriented management. Strategic plans are to articulate the mission of an organization or program and lay out its long-term goals and objectives for implementing that mission, including the resources needed to reach those goals. Leading management practices state that federal strategic plans include six key elements: (1) a comprehensive mission statement; (2) strategic goals and objectives; (3) strategies and the various resources needed to achieve the goals and objectives; (4) a description of the relationship between the strategic goals and objectives and performance goals; (5) an identification of key external factors that could significantly affect the achievement of strategic goals; and (6) a description of how program evaluations were used to develop or revise the goals, along with a schedule for future evaluations. As we have previously reported, strategic plans are strengthened when they include a discussion of management challenges facing the program that may threaten its ability to meet long-term, strategic goals. While NCS has completed some key aspects of strategic planning, critical elements such as the key external factors that could affect achievement of its mission—for example, challenges affecting the NS/EP NGN initiative—have not yet been documented, and NCS has not committed to incorporating these elements in its strategic plan. A strategic plan that captures these key elements in a centralized way would help inform stakeholders—such as departmental leadership, Congress, and the administration—about NCS's priorities and plans and assist stakeholders in making efficient and effective program, resource, and policy decisions. In addition, because NCS has experienced frequent turnover in leadership, such a plan would be beneficial for new agency management during transition periods. For example, since January 2007, there have been two directors and one acting director, as well as three different staff serving in the capacity of Chief of the Technology and Programs Branch—a position that oversees day-to-day operations regarding NS/EP NGN, among other initiatives.
NCS has five performance measures, which relate to three aspects of GETS and WPS—the number of subscribers, priority call completion rates in emergencies, and the cost to support GETS and WPS subscribers. While NCS has not documented how its performance measures link to NCS's and DHS's strategic goals and objectives, we used various documents, such as DHS's fiscal year 2008 to 2013 strategic plan, to determine that NCS's five performance measures link to agency and department strategic goals and objectives (see figure 3, which illustrates the connection between DHS's mission and NCS's performance measures). For example, NCS's performance measure tracking the call completion rate of priority calls is linked to its strategic goal of ensuring availability of communications as well as to DHS's strategic objective of ensuring continuity of government communications. Consistent with our past work on performance management, linking performance measures with strategic goals and objectives in this way provides managers and staff with a roadmap that shows how their day-to-day activities contribute to achieving broader DHS and NCS goals.
While NCS’s performance measures generally link to overall goals and objectives, NCS’s performance measures focus exclusively on its priority calling programs, and NCS does not have measures to assess the performance of its other two primary responsibilities—serving as the ESF- 2 coordinator and the lead federal agency for critical infrastructure protection for the communications sector. Although NCS officials acknowledged that they do not have such measures and noted that they could be helpful, these officials did not commit to developing such measures. While we have previously reported that agencies do not need to develop performance measures that cover all of their activities, OMB requires that performance measures reflect a program’s mission and priorities. Furthermore, we have also reported that an agency’s performance measurement efforts are strengthened when they sufficiently cover its core activities. NCS’s critical infrastructure protection and ESF- 2 responsibilities are key components of the agency’s mission to help ensure that NS/EP communications are available during disasters or emergencies, and are articulated in NCS’s strategic goals (see table 3). For example, NCS, in conjunction with the telecommunication industry is responsible for conducting risk assessments of the nation’s critical communication infrastructure; according to Executive Order 13,231, as amended, communications infrastructure is critical not only to emergency preparedness, but all aspects of U.S. national security and economy. Without the benefit of performance measures that cover these functions, NCS may be limited in its ability to assess its overall effectiveness in meeting all three of its strategic goals. Moreover, developing performance measures for these mission-critical functions would help strengthen and inform future program and budget decisions, improve critical program activities, and as we have previously reported, help verify that NCS’s resources are being used responsibly. Of its five performance measures, NCS has identified two as outcome measures, two as output measures, and one as an efficiency measure (see table 4 for more information on each of these measures). While OMB guidance defines output measures (such as the number of products or services delivered) as a description of the level of activity provided over a period of time, it asserts program performance is most effectively measured by focusing on how those outputs support the achievement of desired outcomes—the intended results of carrying out a program or activity. NCS’s two output measures—the number of GETS subscribers and the number of WPS subscribers—could be strengthened to focus on outcomes, more effectively gauge progress toward achieving results, and set more reliable targets. In addition, one of NCS’s outcome measures, the call completion rate, does not clearly illustrate the measures’ intended purpose. OMB guidance emphasizes the use of outcome measures as a more meaningful indicator of performance and encourages agencies to translate existing measures that focus on outputs into outcome measures, or at least demonstrate that measured outputs would logically lead to intended outcomes. Currently, neither of NCS’s output measures fully demonstrates how it supports NCS in the achievement of the intended outcomes of the GETS and WPS programs, which, as articulated in one of NCS’s strategic goal, is to ensure the availability of communications capabilities for all NS/EP officials. 
For example, NCS told us that the long-term goal for the GETS program may be to reach 2 million subscribers; however, NCS has not demonstrated how reaching 2 million subscribers achieves the result of ensuring the availability of communications capabilities for NS/EP officials who could benefit from the use of the GETS service. According to NCS officials, NCS based this number on an internal study that identified 2 million subscribers as the capacity level that the PSTN can support. However, NCS could not provide a rationale as to how 2 million subscribers appropriately quantifies the population of NS/EP personnel critical to NCS achieving its desired results. Therefore, it is unclear whether achieving 2 million GETS subscribers would mean that all the NS/EP personnel who have the greatest need for access to priority calling capabilities are enlisted in the program, thereby enabling them to make calls that can help to coordinate planning for national security incidents and emergencies and facilitate continuity of government under these conditions—a key function of the GETS program. In addition, NCS officials told us that the agency has an unofficial long-term goal of 225,000 subscribers for the WPS program. Although NCS officials noted that this number has not been finalized, the measure likewise does not portray how well, or whether, WPS is achieving its desired program outcome. Furthermore, NCS has not been able to provide information regarding how it developed this WPS subscriber goal or to describe how it will do so in the future. Our past work, along with federal guidance, has discussed the importance of using a series of output and outcome goals and measures to depict the complexity of the results that agencies seek to achieve. We recognize that it can be difficult to develop outcome goals and corresponding measures. Nonetheless, by further articulating how its measures support the intended outcome articulated in its strategic goal—ensuring availability of communications for NS/EP functions—NCS and its stakeholders could more effectively gauge the extent to which subscriber levels in GETS and WPS reflect whether communications capabilities are available to all critical NS/EP personnel as intended. NCS's progress could be better measured through annual performance targets that track subscriber levels and demonstrate how overall subscriber goals for GETS and WPS lead to program outcomes. This would help to better illustrate NCS's annual progress toward achieving its desired results. Furthermore, although both of NCS's output measures reflect the number of subscribers in each program for a given year, the measures do not reflect whether NCS's annual achievements demonstrate significant or marginal progress toward reaching 2 million subscribers, and NCS has not defined a time by which it hopes to achieve this goal. In its GETS and WPS performance measures, NCS states annual results as an output of the number of subscribers in a particular year—for example, 208,600 GETS subscribers in fiscal year 2008. These output measures do not capture percentage increases in the number of subscribers from year to year to help measure performance changes in achieving any long-term goal for subscribers. According to OMB guidance, performance over time is to be expressed as a tangible, measurable objective against which actual achievement can be compared, such as a quantitative standard, value, or rate.
For example, for NCS’s performance measure related to the percent of federal continuity coordinators with access to priority calling programs—NCS tracks change over time by showing a rate of annual progress toward enlisting these particular officials in the GETS and WPS programs. In doing so, NCS can provide insight as to the extent to which this group can successfully place calls to help facilitate continuity of government at the federal level—particularly in the event of network congestion during emergencies. Although NCS has reported ongoing or planned targeted outreach efforts to similar groups that play a leadership role in coordinating emergency response and continuity of government such as governors or mayors, they have not developed similar performance measures to track their annual progress in enlisting and maintaining these subscribers. NCS has not finalized its overall goal for the number of GETS and WPS subscribers or set a timeline for when it plans to achieve its unofficial goals for the number of GETS and WPS subscribers. Based on GETS enrollment levels over the last 3 fiscal years, at current rates NCS may not achieve its unofficial subscriber goals until somewhere between 2015 and 2047. OMB guidance states that performance goals are to be comprised not only of performance measures and targets, but also include time frames for achieving these goals. In addition, OMB guidance states that targets are to consider past performance, adjusted annually as conditions change, such as funding levels and legislative constraints. However, NCS did not consider past performance when setting annual performance targets for several of its performance measures. As a result, the targets are not ambitious or based on reliable baselines. For example, NCS did not modify its targets for the number of GETS subscribers for fiscal years 2007 and 2009 based on actual results achieved in the previous fiscal year. According to OMB performance guidance, baselines are the starting point from which gains are measured and targets set; and performance targets are to be ambitious. Our past work has also emphasized the importance of baselines and multiyear goals particularly when results are expected to take several years to achieve. As detailed in table 4, for fiscal year 2006, NCS reported a target of 118,000 GETS subscribers and achieved 158,669, which also surpassed its 2007 goal. However, NCS did not update its fiscal year 2007 goal of 155,000 when it was achieved in 2006. Similarly, in fiscal year 2008, NCS set a target of 185,000 subscribers and achieved 208,600 subscribers, which surpassed the fiscal year 2009 goal. However, as of April 2009, the goal remained at 204,000 subscribers even though NCS exceeded this level in the previous fiscal year. Similarly, the target level for another measure—the average cost to maintain a priority telecommunications service subscriber—has not been modified to reflect the actual results of the prior year. NCS began using this measure in fiscal year 2007 and has exceeded its target reductions in cost for the 2 years that the measure has been in place. For fiscal years 2008 and 2009, the average cost targets were $15.63 and $14.22, respectively; however, NCS reported that the average cost to maintain a priority service subscriber in 2008 was $13.70, surpassing targeted reductions for both 2008 and 2009. As with the target for the subscriber measures, the average cost target was not modified to build upon actual results of the prior fiscal year. 
Furthermore, the baseline upon which each annual average cost goal is determined is the number of GETS and WPS subscribers. While officials cited reductions in operating costs as one reason for exceeding the target, they also stated that the achievement was largely a function of exceeding the projected number of GETS subscribers. Because the annual GETS subscriber performance measure does not set ambitious targets from year to year, the baseline it provides for determining the average cost target is unreliable. Without considering changes in this baseline information—in this case, the number of subscribers—valid comparisons to measure improvement over time cannot be made. Considering past performance in setting targets could help NCS develop a true sense of continued improvement in enlisting priority service subscribers and in reducing the costs of serving them.
Finally, while NCS has implemented an outcome-oriented measure to assess the effectiveness of its priority calling programs during periods of congestion, the information the measure intends to convey—the priority service call completion rate—is not consistent with the methodology used to calculate the results. Specifically, the measure is intended to capture combined call completion rates for GETS and WPS. However, wireless carriers collect the relevant information that NCS reports via this measure, and under current processes for capturing attempted WPS calls, wireless carriers are unable to identify all attempted WPS calls that are not completed. Because uncompleted attempts are undercounted, the measure's denominator is understated; to use purely illustrative figures, if 900 of 1,000 attempted WPS calls were completed but carriers could identify only 920 of the attempts, the reported completion rate would be about 98 percent rather than the actual 90 percent. Our previous work holds that performance measures should be clearly stated in order to ensure that the name and definition of the measure are consistent with the methodology used to calculate it. Furthermore, OMB guidance states that agencies are required to discuss the completeness and reliability of their performance data and any limitations on the reliability of the data. Because the call completion measure does not provide clear information about program performance and its limitations, NCS risks overstating the completion rate for WPS, and the use of this measure may affect the validity of managers' and stakeholders' assessments of WPS performance in comparison to the intended result. NCS officials agreed that opportunities exist to strengthen this measure to ensure that it accurately reflects the activity being measured, and they stated that they are taking steps to work with the carriers that support WPS services to develop a solution that would allow them to track the full range of WPS calls. In the meantime, however, NCS has not committed to revising the measure to accurately reflect the activity being monitored.
The events of September 11, 2001, and the 2005 hurricane season dramatically demonstrated how catastrophic man-made and natural disasters can disrupt communication capabilities, and they highlighted the need for essential NS/EP officials to be able to communicate during and in the aftermath of such events. NCS continues to recognize the need to keep pace with technological changes and to look for ways to better meet NS/EP personnel's current and future communications needs, as evidenced by the development of its NGN initiative. Information such as the costs, available technology, and future capabilities of these types of initiatives is not yet known, and such initiatives therefore require thoughtful planning to most effectively allocate current and future resources.
NCS's efforts to ensure that the communication capabilities it provides to NS/EP personnel will be operable on, and will leverage, next-generation networks could benefit from better planning. By clearly defining its acquisition approach for the initiative and developing mitigation plans to address known risks and technical challenges, NCS can help minimize cost overruns and schedule delays and, more importantly, help ensure that it is developing services that meet the emerging communication needs of the NS/EP community. Strategic plans are an essential element of results-oriented program management and provide agencies and stakeholders a common set of operational principles with which to guide actions and decisions. Although DHS stated that it was taking steps to finalize its strategic plan in response to our June 2008 recommendation, it has not yet finalized the plan, which has been in draft since mid-2007, or committed to incorporating key elements of a strategic plan. We continue to believe that our prior recommendation has merit and that NCS could benefit from completing a strategic plan. A strategic plan that identifies strategic goals and objectives and the resources needed to achieve them, and that describes the relationship between planned initiatives and strategic goals, could serve as the foundation to help NCS align its daily activities, operations, program development, and resource allocation to support its mission and achieve its goals. As NCS undertakes a variety of new initiatives and attempts to strengthen existing programs, finalizing its strategic plan will also help strengthen NCS's ability to efficiently and effectively allocate resources, inform key stakeholders, and provide agency and congressional decision makers the ability to assess NCS's programs and initiatives. As part of strategic planning, it is important that related performance measures link to and support NCS's strategic goals, as well as DHS's strategic goal of ensuring continuity of communications. In the absence of performance measures for the key functions NCS performs as the lead for the federal government's efforts to protect critical communications and as the coordinator for ESF-2, NCS cannot reasonably measure or demonstrate how these core program activities contribute to achieving all three of its strategic goals and DHS's overall mission of providing continuity of communications. For a performance measure to be used effectively, it is essential that the measure's definition and intended use be consistent with the methodology used to calculate it. NCS acknowledges that its primary performance measure for its priority calling programs—the call completion rate—does not capture all uncompleted WPS calls, and it is exploring ways to capture the full spectrum of such calls; however, by not revising the measure in the meantime to accurately portray what is being measured, NCS continues to measure performance inaccurately and to provide potentially misleading information to decision makers. Similarly, by not adjusting the performance targets intended to measure the number of subscribers and average costs so that they build upon and reflect previous years' results, NCS cannot make valid comparisons to measure improvement over time and cannot ensure that its performance goals are reasonable and appropriate.
Beyond adjusting targets for the number of subscribers, opportunities exist to make these measures more outcome oriented so that they reflect progress toward NCS's ultimate goals for the number of subscribers to its GETS and WPS programs. However, without clearly defining or demonstrating how its ultimate subscriber goals achieve the result of ensuring the availability of communications capabilities for the NS/EP personnel who need these services, it will remain difficult to measure progress. To its credit, NCS has identified federal continuity coordinators as critical NS/EP personnel needing access to its programs and has developed an outcome measure to track progress in enlisting and retaining this group of subscribers. However, without similar measures for other groups that play a significant role in coordinating emergency response and continuity of government, NCS will not be in a position to evaluate its efforts to reach out to, target, and ultimately provide priority calling programs to these groups.
To help ensure that NCS management has the information needed to assess and improve NCS's programs and new initiatives and to effectively support budget decisions, we recommend that the Secretary of DHS direct the Manager of the NCS to take the following three actions: Develop program plans for the NS/EP NGN initiative that outline an acquisition approach based on available technologies and realistic cost estimates and that include mitigation plans to address identified challenges and risks. Follow best practices for strategic planning in finalizing the NCS strategic plan, including identifying the resources needed to achieve its strategic goals and objectives and providing a description of the relationship between planned initiatives, such as the NS/EP NGN, and strategic goals. Strengthen NCS's performance measurement efforts by (1) developing measures to cover all core program activities, (2) exploring opportunities to develop more outcome-oriented measures, (3) ensuring that performance measure baselines are reliable and based upon past performance, and (4) improving the clarity of its call completion measure.
We provided DHS a draft of this report for review and comment. DHS provided written comments on August 7, 2009, which are summarized below and presented in their entirety in appendix VI. DHS also provided technical comments, which we incorporated where appropriate. DHS disagreed with the recommendation in our draft report that it develop an evaluation plan for its satellite program that includes milestones for continued implementation and a methodology for assessing the results of the pilot before moving forward with the program. Specifically, DHS noted that the pilot program, which was on hold at the time of our review, was now complete. However, at the conclusion of our fieldwork, our understanding from the NCS Director was that the pilot was on hold and that NCS was reassessing various aspects of it, such as by conducting a cost-benefit analysis to determine which satellite provider and equipment to use. In light of this discrepancy, we subsequently obtained clarification on the status of the pilot. Our discussion with DHS revealed that the pilot program was terminated rather than completed.
In providing clarification, DHS stated that it agreed with our assessment that the pilot program needed improved planning and metrics documentation and that NCS took a number of issues into consideration, including the current availability of push-to-talk capability among existing satellite service providers, in determining whether the pilot should be continued. Given these considerations, as well as the issues that we identified, such as the lack of program objectives, documentation, and metrics, NCS terminated the pilot. According to NCS, about $900,000 had already been spent or obligated to support various activities for the pilot program. According to NCS officials, the remaining $1 million for the pilot will be reprogrammed, and any funds that had already been obligated but not yet spent will be deobligated and also reprogrammed for other priority communications services. Thus, based on the termination of the pilot, we withdrew our recommendation and have modified our report to reflect the current status of the pilot. DHS concurred with our recommendation that it develop program plans for the NS/EP NGN initiative that outline an acquisition approach based on available technologies and realistic cost estimates and that include mitigation plans to address identified challenges and risks. Although it concurred with our recommendation, DHS also reported that NCS currently follows a structured approach in the design and implementation of program plans and that it assesses industry trends to help determine program enhancements and mitigation plans. Developing program plans for the NS/EP NGN initiative as we recommended can help NCS minimize cost overruns and schedule delays and help ensure that it is developing services that meet the needs of the NS/EP community. DHS concurred with our recommendation that NCS follow best practices for strategic planning in finalizing the NCS strategic plan, including identifying the resources needed to achieve its strategic goals and objectives and providing a description of the relationship between planned initiatives, such as the NS/EP NGN, and strategic goals. DHS stated that all NCS activities are directly linked to its mission and associated performance measures. Finalizing its strategic plan as we have recommended will help provide decision makers with information to help them assess NCS's programs and initiatives. With regard to our recommendation that NCS strengthen its performance measurement efforts by (1) developing measures to cover all core program activities, (2) exploring opportunities to develop more outcome-oriented measures, (3) ensuring that performance measure baselines are reliable and based upon past performance, and (4) improving the clarity of its call completion measure, DHS concurred. Specifically, DHS reported that NCS will continue to develop performance measures. Taking action to strengthen its performance measures as we recommended should help NCS improve its ability to evaluate its efforts to reach out to, target, and provide priority calling programs to critical NS/EP groups. DHS also commented on the report's discussion of subscriber database accuracy, stating that it disagreed with what it viewed as our assertion that NCS should be able to easily determine whether certain individuals serving in public positions were still entitled to be GETS subscribers, as well as with our expectation that NCS terminate access for individuals regardless of whether the subscriber's organization has notified NCS to do so.
DHS also highlighted the steps that NCS takes to help ensure that agency points of contact keep NCS's subscriber database updated. We modified the report to better recognize the role agency points of contact play in updating NCS's database. DHS also commented that the report suggested NCS's outreach efforts are limited to a select number of activities and noted that NCS also meets with other governmental bodies. We have modified our report to clarify that the activities discussed are examples of NCS's outreach efforts and are not intended to be an exhaustive accounting of all of NCS's efforts. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security and other interested parties. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII.
William O. Jenkins, Jr.
Director, Homeland Security and Justice Issues
The National Communications System (NCS) was established by a memorandum signed by President Kennedy in 1963 in the wake of the communications challenges that arose during the Cuban Missile Crisis when, according to NCS, delays in sending and receiving communications between the United States and the foreign governments involved threatened to further complicate the crisis. The original memorandum, which has been amended and superseded over time, called for establishing a national communications system by linking together, and improving, the communications assets of various federal agencies. Such a system is to provide the necessary communications for the federal government under all conditions, ranging from normal conditions to domestic emergencies and international crises. Today, Executive Order 12,472 is the primary federal guidance in force that dictates the composition and functions of the NCS. Executive Order 12,472 defined the NCS as those telecommunications assets owned or leased by the federal departments, agencies, or entities that comprise the NCS that can meet the national security and emergency preparedness (NS/EP) needs of the federal government, together with a management structure that can ensure that a national telecommunications infrastructure is developed that is responsive to NS/EP needs, among other things. Executive Order 12,472, which was amended by Executive Order 13,286 on February 28, 2003, provided that NCS's mission is to assist the President, the National Security Council, the Homeland Security Council, and the Directors of the Office of Science and Technology Policy and the Office of Management and Budget in, among other responsibilities, "the coordination of the planning for and provision of NS/EP communications for the Federal government under all circumstances, including crisis or emergency, attack, recovery, and reconstitution." The NCS organization structure largely consists of federal entities; however, the telecommunications industry serves in an advisory capacity to the federal government on matters regarding NS/EP communications. A description of the roles and responsibilities of the entities that comprise the NCS organization follows.
See figure 4 for an illustration of the current NCS management structure. Executive Office of the President (EOP). Within the EOP, the National Security Council (NSC), the Homeland Security Council (HSC), the Office of Science and Technology Policy (OSTP), and the Office of Management and Budget (OMB) have varying responsibilities for setting the policy direction for NS/EP communications and providing oversight of the NCS. For example, in consultation with the Executive Agent and a group of federal telecommunications officers (known as the NCS Committee of Principals), the EOP helps to determine NS/EP telecommunications requirements. NCS Executive Agent. Pursuant to the Homeland Security Act of 2002, the functions and responsibilities of the NCS Executive Agent were transferred to the Secretary of Homeland Security. Among other things, the Executive Agent is responsible for ensuring that the NCS conducts unified planning and operations in order to coordinate the development and maintenance of an effective and responsive capability for meeting the domestic and international NS/EP telecommunications needs of the federal government, as well as for ensuring coordination with the emergency management activities of the Department of Homeland Security (DHS). Additionally, the Executive Agent designates the NCS Manager and oversees related activities, including the delivery of priority communications programs such as the Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS). Office of the Manager, NCS. The Office of the Manager, NCS (OMNCS) falls under the Office of Cyber Security and Communications, which is part of the National Protection and Programs Directorate within DHS. The responsibilities of the NCS Manager include, among other things, preparing for consideration by the NCS Committee of Principals and the Executive Agent recommendations on an evolutionary telecommunications architecture to meet current and future NS/EP needs, as well as plans and procedures for the management, allocation, and use—including the establishment of priorities or preferences—of federally owned or leased telecommunications assets under all conditions of crisis or emergency. Additionally, the NCS Manager is responsible for implementing and administering any approved plans or programs as assigned, including any system of priorities and preferences for the provision of communications service, in consultation with the NCS Committee of Principals and the Federal Communications Commission (FCC), to the extent practicable or otherwise required by law or regulation. Further, the NCS Manager is to conduct technical studies or analyses for the purpose of identifying improved approaches that may assist in fulfilling NS/EP telecommunications objectives, among other things. Additionally, in consultation with the NCS Committee of Principals and other appropriate entities of the federal government, the NCS Manager is to ensure that, where feasible, existing and evolutionary industry, national, and international standards are used as the basis for federal telecommunications standards. The OMNCS also includes the National Coordinating Center—a joint industry-government entity—which assists in coordinating the initiation and restoration of NS/EP communications services and is involved in critical infrastructure protection of telecommunications assets. NCS Committee of Principals.
According to NCS, this collaborative body, chaired by the NCS Manager, comprises the key telecommunications officers of those agencies designated by the President that own or lease telecommunications assets of significance to national security or emergency preparedness, as well as other executive entities that bear policy, regulatory, or enforcement responsibilities of importance to NS/EP telecommunications capabilities. Currently, the NCS Committee of Principals includes representatives from 24 federal departments and agencies—known as the NCS member agencies. In accordance with Executive Order 12,472, the NCS Committee of Principals, among other things, provides comments and recommendations to the National Security Council, the Director of OSTP, the OMB Director, the NCS Executive Agent, or the NCS Manager regarding ongoing or prospective activities of the NCS. According to NCS, the NCS Committee of Principals, in accordance with its bylaws, has established subgroups such as the NCS Council of Representatives to help support the work activities of the NCS. Further, the NCS Committee of Principals established other groups, such as the Priority Services Working Group, to analyze the potential impact of future technologies on priority services programs and to examine the outreach efforts for the GETS and WPS programs, among other things. The National Security Telecommunications Advisory Committee (NSTAC). The NSTAC was established in 1982 by Executive Order 12,382 to serve as an advisory committee to the President on matters related to NS/EP communications and may comprise no more than 30 industry leaders appointed by the President. The NSTAC members are usually chief executive officers from telecommunications companies, network service providers, information technology firms, and finance and aerospace companies. As we previously reported, over the course of its longstanding relationship with the NSTAC, the NCS has worked closely with NSTAC member companies during emergency response and recovery activities following terrorist attacks and natural disasters. For example, after the September 11, 2001, terrorist attacks, NSTAC member companies immediately coordinated with NCS to assist with communication restoration efforts even though some of their own network infrastructure had been among the most severely damaged. As we have previously reported, the NCS and NSTAC share information on a variety of issues, including federal policies related to NS/EP communications and changes in the telecommunications marketplace. The NSTAC has also issued multiple reports addressing a wide range of policy and technical issues regarding communications, information systems, information assurance, critical infrastructure protection, and other NS/EP communications concerns. For example, in 2006, NSTAC issued a report that identified challenges related to NS/EP communications and provided recommendations to the President intended to help ensure that next-generation network initiatives meet NS/EP users' needs, among other things. As provided under Executive Order 12,382, the NSTAC has established subgroups, such as the Industry Executive Committee, to help it carry out its functions. These subgroups may be composed, in whole or in part, of individuals who are not members of the NSTAC.
To analyze the extent to which the National Communications System (NCS) provides priority communications programs, we reviewed relevant legislation, regulations, and other documentation that outline NCS responsibilities in ensuring the continuity of communications, including the Homeland Security Act of 2002, Executive Orders 12,472 and 13,231, and NCS Directive 3-10. We also reviewed budget requests, annual reports, the Performance Assessment Rating Tool (PART) reports submitted to the Office of Management and Budget (OMB), and other documentation related to NCS activities. We also obtained and reviewed relevant agency documents, such as internal briefings, program planning documents, and standard operating procedures, that describe how the Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS) operate and the capabilities that each program delivers. We obtained information on the mechanisms NCS utilizes to collect, track, and analyze the performance of GETS and WPS. In addition, we obtained and analyzed data on the performance of GETS and WPS during select emergencies or national special security events, such as the 1995 Oklahoma City bombing, the September 11, 2001, attacks, Hurricane Katrina in 2005, and the 2009 Presidential Inauguration, among others. We also interviewed NCS officials to obtain information on the agency's role in ensuring continuity of communications, the types of priority communications capabilities it provides to the national security and emergency preparedness (NS/EP) community—specifically through the GETS, WPS, and Telecommunications Service Priority (TSP) programs—as well as the types of challenges, if any, the agency may face in providing these services. We interviewed officials from the Federal Communications Commission (FCC) to obtain information on the agency's role in providing emergency communications, including how it works with NCS in providing priority communications capabilities. Furthermore, we interviewed telecommunications industry representatives from AT&T, Qwest Communications, and Verizon, which are among the U.S. telephone carriers that provide NS/EP communications services. Although their views cannot be generalized to all telecommunications companies that provide NS/EP communications, the information we obtained enhanced our understanding of their role in providing emergency communications and their views on the impact the next generation network (NGN) technology transition may have on NCS's priority communications programs. We also interviewed NS/EP officials from a nonprobability sample of 15 states and 13 localities to obtain their perspectives and views on the NCS and its priority communications programs. Specifically, we obtained information from these officials regarding (1) their awareness of the NCS and the GETS, WPS, and TSP programs; (2) the extent to which they had utilized these programs in responding to an emergency situation and/or in their training and exercise activities; and (3) their perspectives on the benefits of these priority calling programs and potential barriers to participation. In selecting these states and localities, we considered a variety of factors, including (1) the frequency and types of disasters declared by the Federal Emergency Management Agency (FEMA), (2) geographic dispersion, and (3) topographical factors that could affect the functionality of communications.
The selected states and localities represent a range of natural disasters, terrains, climates, and population densities and also include areas that have recently experienced high-profile natural disasters or man-made attacks. While the perspectives of the officials we interviewed cannot be generalized to reflect the views of NS/EP emergency management officials in all states and localities, we believe the perspectives of the officials in these locations provided us with an overview of, and useful information on, the NCS and the priority communications programs it provides. To determine how NCS enlists subscribers and controls access to its priority programs, we collected and analyzed documentation and interviewed NCS officials (1) on subscriber eligibility criteria, (2) on NCS's outreach efforts to enlist new subscribers for its priority calling programs, and (3) to identify its internal controls over access to these programs. With regard to NCS's outreach efforts, we obtained and reviewed documentation, such as brochures, newsletters, and conference schedules, on NCS outreach efforts, including its use of regional outreach coordinators and its awareness booth deployments at various emergency management conferences. We also attended several NCS user-focused meetings and obtained documentation that detailed NCS efforts to attract new subscribers and provide support to current subscribers. To determine what internal controls NCS utilizes to grant and control access to its priority calling programs, we obtained the NCS standard operating procedures for the GETS and WPS programs, which outlined the procedures and processes to participate in the programs, including the eligibility criteria, the approval process, and the re-validation process. We then compared these standard operating procedures with criteria in Standards for Internal Control in the Federal Government. To determine whether NCS adhered to its procedures for terminating access for subscribers who no longer meet the programs' eligibility criteria, we reviewed a nonprobability sample of records for 76 former federal and 9 former state government officials, including former members of the U.S. Senate as well as members and delegates of the U.S. House of Representatives for the 109th Congress; immediate past heads of federal departments and agencies as of August 2008; and immediate past governors of U.S. states and territories as of August 2008, which is when we obtained the subscriber data. We selected these groups because they served in public positions that would allow NCS to easily determine that their positions had ended and, in turn, work with the subscriber's organization to update account status, as appropriate. Although the results of our work cannot be generalized to evaluate the effectiveness of controls used for all NCS program subscribers, the information obtained provided us with useful information about the extent to which subscriber records for these groups were terminated following a change in the subscriber's eligibility status. Because the subscriber database, in its entirety, is classified, we have limited our reporting of the results of our analysis to nonclassified information; however, this does not affect our findings.
To assess the reliability of these data, we reviewed the data for obvious problems with completeness or accuracy, interviewed knowledgeable agency officials and contract support staff about the data quality control processes, and reviewed relevant documentation, such as the database dictionary that describes the data fields in the subscriber database. When we found discrepancies (such as duplicate records), we brought them to the attention of NCS officials and its contract support staff to better understand the nature of the discrepancies and the resulting impact on our work. We performed electronic testing on the data and found the data to be sufficiently reliable for the purposes of this report. To determine what challenges can affect NCS's delivery of its priority communications programs, we interviewed relevant NCS officials who have responsibilities for these programs. We also obtained information and reviewed documentation from the agency regarding its efforts to implement the Satellite Priority Service pilot program, as well as its efforts to leverage NGN technology in its priority communications programs. We compared this information with our previous work on pilot program planning and technology acquisition. To assess NCS's overall planning and evaluation efforts, we interviewed NCS officials and reviewed relevant documentation regarding the agency's strategic planning efforts and the mechanisms it uses to evaluate its services. Specifically, we reviewed and analyzed NCS's draft strategic plan to determine the extent to which the plan outlined the agency's short- and long-term strategic goals and objectives, the time frames associated with those goals and objectives, the current status of the goals and objectives, and the internal and external factors that may affect the agency's ability to achieve them. We also obtained and reviewed the OMB Performance Assessment Rating Tool, NCS's Congressional Budget Justifications, and other documents that outlined the performance measures NCS utilizes to assess the extent to which it is achieving its goals and objectives, as well as planned milestones and spending for its priority calling programs. To assess the effectiveness of NCS planning efforts, we compared those efforts with federal best practices contained in our past reports that discussed the importance of strategic planning. We also utilized guidance from OMB Circular A-11 and related federal legislation, such as the Government Performance and Results Act of 1993, which identifies the six key elements of a strategic plan. In addition, we interviewed NCS officials about their strategic planning efforts and the mechanisms they use to monitor and evaluate their services. While NCS is not required to explicitly follow these guidelines, the guidelines do provide a framework for effectively developing a strategic plan and a basis for program accountability. We conducted this performance audit from June 2007 through August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
The Telecommunications Service Priority (TSP) program provides priority provisioning and restoration of telecommunications services that support emergency operations facilities for certain federal, state, and local governments and other entities. Such services include equipment used to transmit voice and data communications by wire, cable, and satellite, among other things. During and following an emergency event, wireless and wireline carriers may receive numerous requests for new telecommunications service as well as for the restoration of existing services. Under this program, telecommunications carriers and their partners (collectively referred to as service vendors) are required to restore national security and emergency preparedness (NS/EP) telecommunications services that suffer outages, or are reported as unusable or otherwise in need of restoration, before non-NS/EP services. As with the Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS), certain government agencies and other groups are identified as having specific NS/EP responsibilities that qualify them for priority provisioning and restoration of services. However, unlike GETS and WPS, for which new subscriptions can be requested and approved during emergency response and recovery activities, authorization to receive TSP priority services must be in place before it is needed. Although the federal government does not charge a fee, telecommunications service providers (such as wireless carriers and cable and satellite providers) may charge an initial startup fee of up to $100 per circuit and a monthly fee of up to $10 per circuit. The National Communications System (NCS) reported that, as of fiscal year 2008, over 1,000 organizations had registered more than 191,000 circuits under the TSP program. Telecommunications personnel have traditionally faced difficulties in accessing disaster areas in order to make TSP repairs to communications assets. According to telecommunications representatives that are part of the National Coordinating Center for Telecommunications (NCC) within NCS, access to disaster areas for repair crews has been an issue dating back to Hurricane Hugo in 1989 and recurred during the aftermath of Hurricane Katrina. For example, an independent panel formed to examine the telecommunications challenges during Hurricane Katrina reported that inconsistent and unclear requirements for repair crews and their subcontractors to gain access to the affected area impeded their efforts to make necessary repairs, including those that they were required to complete under the TSP program. The panel reported that there were no mechanisms in place to issue credentials to those who needed them prior to Hurricane Katrina making landfall. Consequently, personnel from telecommunications companies were unable to gain access to repair some communications assets in the disaster area because they lacked the necessary credentials. For example, during Hurricane Katrina, Louisiana authorities, among others, provided credentials to telecommunications repair crews to permit them access to certain affected areas; however, telecommunications personnel reported that, within disaster areas, credentials that permitted access through one checkpoint would not be honored at another. In addition, these personnel reported that in some cases the checkpoints required different documentation and credentialing before granting access to repair personnel.
As a result, repair personnel had to carry multiple credentials and letters from various federal, state, and local officials authorizing their access to the disaster area. Furthermore, telecommunications personnel were unclear about which government agency had the authority to issue the necessary credentials. Similarly, repair crews reported that other factors delayed or interrupted the delivery of TSP services, such as the enforcement of curfews and other security procedures intended to maintain law and order. Although the full scope of these credentialing issues is outside NCS's jurisdiction, under the communications annex of the revised 2008 National Response Framework, NCS is to coordinate with other emergency support function 2 (ESF-2) support agencies, among others, to ensure that telecommunications repair personnel have access to restore communications infrastructure in the incident area. To help facilitate this, NCS has taken steps to work with federal, state, and local government agencies as well as the private sector to identify solutions. For instance, NCS has coordinated with emergency management officials in Georgia and Louisiana to develop standard operating procedures to ensure access for critical infrastructure workers during emergencies or disasters. NCS officials also told us that they have begun to catalog the access procedures for various states and localities that could be provided to telecommunications personnel in order to facilitate access to damaged infrastructure in the aftermath of an emergency or disaster. In addition, other federal agencies, such as the Federal Emergency Management Agency (FEMA), have also taken steps to address this issue. For example, in November 2008, FEMA released for comment credentialing guidelines for essential personnel who need access to disaster areas in order to facilitate response, recovery, and restoration efforts. The guidelines are intended to provide a uniform approach at the state and local level to provide telecommunications repair personnel, among others, with the access and credentials needed to enter a disaster area in order to expedite the restoration of communications capabilities. The Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS) are designed to achieve a 90 percent probability that calls made using these services will successfully connect. The ability to communicate is critical to coordinating emergency response and recovery efforts during the first 72 hours following an emergency; however, the availability of communications can be disrupted by increased call volume or outages that occur in wireline and wireless networks. According to NCS, telephone calls made without the use of GETS or WPS during nonemergency periods generally have a 99 percent likelihood of successful completion—that is, (1) the called party answers the call, (2) the called number rings but is not answered, or (3) the called number responds with a busy signal. However, NCS officials stated that during a disaster or emergency event the public switched telephone network (PSTN) can experience up to 10 times the normal call volume. During such periods of heavy congestion, approximately 9 out of every 10 calls made without GETS or WPS would not be completed. NCS's priority calling services have been used to facilitate communications across the spectrum of emergencies and other major events, dating back to the 1995 Oklahoma City bombing and continuing through the 2009 Presidential Inauguration.
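To make the completion-rate arithmetic above concrete, the following is a minimal sketch, assuming a hypothetical volume of 1,000 call attempts. The 99 percent (normal period) and roughly 10 percent (congested, non-priority) figures come from the discussion above; the function itself is a generic expected-value calculation, not NCS's actual measurement methodology.

```python
# Illustrative arithmetic only: expected call completions under the completion
# probabilities cited above. Not NCS's actual measurement methodology.

def expected_completions(attempted: int, completion_probability: float) -> int:
    """Expected number of successfully completed calls."""
    return round(attempted * completion_probability)

attempts = 1_000  # hypothetical number of call attempts

print(expected_completions(attempts, 0.99))  # normal period: ~990 complete
print(expected_completions(attempts, 0.10))  # congested, without GETS/WPS:
                                             # ~100 complete (9 of 10 fail)
```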
GETS and WPS usage has varied greatly during disasters or emergencies as the programs have evolved, and the programs have generally achieved call completion rates ranging from 68 percent to 99 percent. For example, during the 1995 Oklahoma City bombing, 291 of the 429 GETS calls attempted—calls that might not otherwise have been completed because of network overload—reached their intended destination numbers, a call completion rate of about 68 percent. In contrast, during Hurricane Katrina in 2005, 28,556 GETS calls were attempted, of which 27,058 (or 95 percent) were successfully completed (see table 5). Additionally, GETS and WPS capabilities were used during the 2003 power outage that affected New York City and other areas. During this event, fewer GETS and WPS calls were made in comparison to other events; however, the call completion rates for the duration of the event were 92 percent and 82 percent, respectively. The National Communications System (NCS) uses five broad categories to determine who may be eligible to participate in its priority calling programs, such as the Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS). Eligible subscribers may include personnel from federal, state, local, or tribal governments, as well as from private industry and nonprofit organizations (see table 6 below for further detail on each of these categories). In addition, these categories are used to prioritize WPS calls in order to further ensure that communications are first available for senior executive leaders and policy makers at the federal, state, and local government levels. The Federal Communications Commission (FCC), in response to NCS's request, established these priority levels, which are used to determine which WPS calls are to receive the first available channel, with level five receiving the lowest priority (though all levels receive priority over non-WPS callers). In the event of an emergency and network congestion, the mobile switching center queues each call according to the subscriber's priority level and call initiation time (a simplified sketch of this queueing logic appears below). For example, authorized staff from the Executive Office of the President would receive priority over national security and emergency preparedness (NS/EP) officials who have responsibility for public health and law enforcement if both placed calls at the same time. NCS has not determined whether a similar approach is required for the GETS program; however, if a similar approach is determined to be needed, NCS believes it can apply the WPS approach to GETS. Table 6 also shows the priority level for each user category.

In addition to the contact named above, Kirk Kiester, Assistant Director, and Candice Wright, Analyst-in-Charge, managed this review. Mark Abraham, Flavio Martinez, and Daniel Paepke made significant contributions to the work. David Alexander and Arthur James assisted with design, methodology, and data analysis. Sally Williamson provided assistance in report preparation. Pille Anvelt provided assistance with the report's graphics.
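The sketch below illustrates the WPS queueing concept described above, using Python's standard heapq module: pending calls are ordered first by priority level (level one highest, level five lowest, consistent with the FCC-established scheme) and then by call initiation time. The specific level assignments for the example callers are illustrative assumptions; this is a conceptual sketch, not actual mobile switching center code.

```python
# A minimal sketch of priority queueing by (priority level, initiation time).
# Level assignments below are assumed for illustration only.
import heapq

queue = []  # min-heap of (priority_level, initiation_time, caller)

def enqueue(priority_level: int, initiation_time: float, caller: str) -> None:
    heapq.heappush(queue, (priority_level, initiation_time, caller))

def next_call():
    """Return the queued call that should receive the next available channel."""
    return heapq.heappop(queue)

# Hypothetical callers; the first two place calls at the same time (t = 0.0).
enqueue(1, 0.0, "Executive Office of the President staff")  # assumed level 1
enqueue(3, 0.0, "public health official")                   # assumed level 3
enqueue(3, 5.0, "law enforcement official")                 # same level, later

print(next_call())  # the level 1 caller is served first
print(next_call())  # then the earlier of the two level-3 calls
```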
Government functions and effective disaster response and management rely on the ability of national security and emergency preparedness (NS/EP) personnel to communicate. The Department of Homeland Security's (DHS) National Communications System (NCS) is responsible for ensuring continuity of NS/EP communications when network congestion or damage occurs. As requested, GAO assessed (1) the priority communications programs NCS provides, how it enlists subscribers, and the extent to which NCS controls access to these programs; (2) the challenges that can affect delivery of these programs; and (3) the extent to which NCS plans for and evaluates its services. GAO reviewed NCS program documents, such as annual reports and access control procedures, as well as data on program subscribers. GAO also interviewed officials from NCS and select state and local government entities. GAO compared NCS performance measures to federal best practices. NCS has two programs to provide NS/EP personnel with priority calling service when telephone networks are congested or damaged--the Government Emergency Telecommunications Service (GETS) and the Wireless Priority Service (WPS). NCS has undertaken several efforts, such as outreach at industry conferences, to increase participation in and control access to these programs. According to NCS, though outreach efforts have helped to increase overall enrollment, it is working to further address possible cost barriers to participation in WPS, such as by discussing options with wireless carriers to help defray costs. In addition, NCS has implemented policies and procedures to ensure that access to its priority programs is limited to authorized users. GAO's review of select GETS and WPS subscriber data revealed that, for the 85 records GAO examined, NCS generally followed its policies and procedures to limit GETS and WPS access to authorized subscribers. NCS is taking steps to address inherent challenges in the communications environment--such as network congestion. For example, NCS initiated a satellite pilot program to allow NS/EP officials to circumvent severely damaged or congested traditional telephone networks. However, methods for implementation and evaluation of the pilot were unclear, and NCS subsequently terminated the pilot. NCS is also working to provide priority voice and data NS/EP communications as part of the evolving telecommunications networks, but it has not finalized an acquisition approach based on available technologies, costs, or plans to mitigate technological and other challenges to deliver such capabilities. The lack of this information has led to congressional restrictions on NCS's funding. As NCS attempts to ensure that GETS and WPS services can operate in these evolving networks, an acquisition approach that includes this information will provide NCS officials and Congress with essential information to most effectively allocate resources and guide decision making. Although DHS agreed with GAO's June 2008 recommendation to complete the NCS strategic plan, NCS has not finalized its strategic plan, which has been under development since 2007. Furthermore, existing performance measures do not cover all of NCS's core responsibilities, as suggested by best practices, and certain performance measures could be strengthened.
For example, NCS does not have a measure to gauge its performance in two of its key federal roles--protecting critical communications infrastructure under DHS's National Infrastructure Protection Plan and coordinating communications issues under the National Response Framework. Furthermore, NCS does not use prior years' enrollment levels to help determine increases, if any, to be made to future years' goals for user enrollment. Fully and accurately measuring performance is critical to ensuring that the agency and key stakeholders--such as Congress--base program and resource decisions on actual performance.
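As one illustration of the goal-setting practice suggested above, the following is a minimal sketch of how prior years' enrollment actuals could inform a future year's goal. All enrollment figures and the growth-based method are hypothetical assumptions, not NCS's methodology.

```python
# A minimal sketch: project next year's enrollment goal from the average
# year-over-year growth in prior years' actuals. Figures are hypothetical.

def enrollment_goal(actuals: list) -> int:
    """Next-year goal based on average year-over-year growth of actuals."""
    growth_rates = [b / a for a, b in zip(actuals, actuals[1:])]
    avg_growth = sum(growth_rates) / len(growth_rates)
    return round(actuals[-1] * avg_growth)

prior_year_enrollment = [8_000, 9_200, 10_400]  # hypothetical actuals
print(enrollment_goal(prior_year_enrollment))   # goal grounded in actual growth
```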
Workers have a variety of options to save for retirement. While personal savings accounts and home equity can be used in retirement, many workers who save for retirement do so in tax-advantaged accounts available through their workplace. Their employers may sponsor an employee benefit plan, such as a 401(k) plan, or make arrangements for employees to contribute to Individual Retirement Accounts (IRAs), such as payroll deduction IRAs, to help employees save for retirement. (See table 1.) Workers may also choose to save on their own in an IRA to increase their retirement savings. However, our recent work shows that approximately 95 percent of money contributed to traditional IRAs in 2008 came from rollovers, primarily from employee benefit plans. Despite the various options available for employers to offer a workplace retirement savings program, our prior work and other research show a persistent gap in coverage among private sector workers. While estimates of participation rates can vary depending on the nature of the study sample (e.g., whether it includes full- and part-time workers or is based on household or firm-level data), research consistently indicates that many workers do not participate in a workplace retirement savings program. For example, one study using household data from the Current Population Survey (CPS) shows that the participation rate of private sector workers declined slightly from about half of full- and part-time workers in the late 1990s to 43 percent in 2012. Similarly, another study using CPS data found that the participation rate among private sector workers ages 21 to 64 fluctuated between 2000 and 2012, from a high of about 47 percent in 2000 to a low of about 39 percent in 2012. In addition, a study using firm-level data from the 2014 National Compensation Survey reports that 48 percent of private sector workers participate in a retirement plan. The President and some members of Congress have proposed various efforts over the years to expand workplace retirement savings program coverage among private sector workers. These efforts generally strive to overcome obstacles that keep employers from offering workplace retirement savings programs or workers from participating. In particular, our prior research found that small employers may be reluctant to offer these programs because of administrative burden and potential fiduciary risk. Some workers, on the other hand, may lack the financial literacy or resources to participate. To foster retirement saving among the portion of the workforce who have been offered an employee benefit plan but do not participate, some employers have adopted automatic enrollment policies for their defined contribution plans. Under automatic enrollment, eligible workers are enrolled in the plan unless they explicitly choose to opt out, as opposed to the more traditional method in which workers must take action to join a plan. Employers who have adopted automatic enrollment must also establish default contribution rates and default investment vehicles for workers who do not specify these choices. Employers may also adopt automatic escalation policies, which increase contribution rates on a predetermined schedule—even without active decisions by employees—typically up to a predefined maximum contribution rate.
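The sketch below illustrates the automatic escalation policy just described: a worker's contribution rate starts at an employer-chosen default and rises on a fixed schedule until it reaches a predefined cap, absent an active choice by the worker. The 3 percent default, 1-percentage-point annual step, and 10 percent cap are illustrative assumptions, not statutory or typical figures.

```python
# A minimal sketch of automatic escalation, assuming illustrative rates.

def contribution_rate(years_enrolled: int,
                      default: float = 0.03,  # assumed default rate
                      step: float = 0.01,     # assumed annual increase
                      cap: float = 0.10) -> float:  # assumed maximum
    """Contribution rate after a given number of years, capped at the maximum."""
    return min(default + step * years_enrolled, cap)

for year in range(9):
    print(f"year {year}: {contribution_rate(year):.0%}")
# Rises from 3 percent at enrollment to the 10 percent cap in year 7 and after.
```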
The Internal Revenue Code (IRC) and the Employee Retirement Income Security Act of 1974 (ERISA) were amended by the Pension Protection Act of 2006 (PPA) to facilitate the use of automatic enrollment, and the Department of Labor (DOL) and the Department of the Treasury (Treasury) promulgated implementing regulations. To encourage low- and middle-income individuals and families to save for retirement, the IRC was amended by the Economic Growth and Tax Relief Reconciliation Act of 2001 to allow a credit against federal income taxes of up to $2,000 for qualified retirement savings. Eligibility for the Saver's Credit is based on workers' adjusted gross income (AGI) and contributions to employee benefit plans and IRAs, with the credit phasing out at certain income limits, depending on the size of the household. Since the adoption of the Saver's Credit, bills have been introduced to further encourage low-income workers to save for retirement, including by making the tax credit refundable and increasing the rate of the tax credit for retirement contributions. In January 2014, the President directed Treasury to create the My Retirement Account (myRA) program, a new retirement savings account for Americans looking for a simple, safe, and affordable way to start saving. Individuals who voluntarily open myRA accounts will be able to set up recurring payroll deduction contributions that will be invested in nonmarketable retirement savings bonds available only to participants in the program. The savings bond is backed by the Treasury, will not go down in value, and will earn interest equal to the rate of return provided by a fund offered in the federal employee retirement plan. The retirement savings bonds will mature at the earlier of 30 years from the date the bond is first issued or when the total value of the bond reaches $15,000. At that time, balances will be transferred to a private-sector Roth IRA. Participants are not charged fees for myRA accounts—administrative costs are paid by the Treasury. myRA accounts follow Roth IRA rules, so contributions are made with after-tax income but may be withdrawn tax-free at any time. Moreover, not all Americans will be eligible to participate because of IRA contribution limits based on modified adjusted gross income. Unlike some commercial IRA accounts, myRA does not impose minimum balance or minimum contribution requirements—individuals will be able to open accounts with no start-up cost and can choose to automatically contribute any amount each payday. Members of Congress have also introduced bills over the years to foster retirement savings among those who work for employers that do not sponsor employee benefit plans. One group of proposals would establish “automatic IRAs” for workers not covered by an employee benefit plan. Under an automatic IRA, employers would be required to make available an arrangement in which employees would be automatically enrolled and contributions would be made through automatic payroll deduction, with an opt-out provision for participants. In addition, some bills would allow for more widespread adoption of multiple employer plans by enabling employers without a common interest to sponsor such plans. Appendix III provides a description of the bills we identified. In the United States, employers are generally not required to provide employee benefit plans, including pension plans, to any employees.
When they do, however, employee benefit plans are generally regulated at the federal level, providing employers with largely uniform nationwide standards. Most significantly, plans are subject to the requirements of ERISA, which are generally enforced by DOL's Employee Benefits Security Administration (EBSA), and various provisions of the Internal Revenue Code (IRC), which are enforced by the Internal Revenue Service (IRS). ERISA was enacted to, among other things, protect the interests of plan participants and their beneficiaries and set minimum standards for most private sector pension plans, including rules for fiduciary conduct and prohibited transactions. The IRC and ERISA define prohibited transactions and list exemptions to them. In addition, DOL may grant exemptions. To carry out its responsibilities under ERISA, EBSA promulgates regulations and issues various forms of guidance. The IRS is primarily responsible for ensuring that plans meet certain requirements for tax-favored treatment. ERISA and relevant provisions of the IRC establish minimum requirements and standards for private-sector employee benefit plans. ERISA establishes minimum participation, vesting, and funding standards for plans. For example, ERISA limits the age and length-of-service requirements that employers can require employees to meet to be eligible for a plan. To qualify for tax benefits under the IRC, plans must also meet minimum participation, vesting, and funding standards. An employer may also establish a plan that excludes certain groups of employees as long as the ERISA and IRC requirements are met. For example, an employer may establish and maintain a plan that excludes workers in certain job categories or geographic locations. Lastly, IRS is responsible for enforcing IRA tax requirements, but IRS and DOL share responsibility for overseeing prohibited transactions relating to IRAs that are not ERISA-covered plans. ERISA includes a provision stating that ERISA supersedes any and all state laws as they “relate to” any employee benefit plan covered under ERISA. This ERISA preemption provision has a relatively small number of exceptions and reflects a policy judgment that nationwide uniformity respecting employee benefit plans outweighs the value of state differentiation. In addition to statutory provisions, the term “state laws” encompasses decisions, rules, regulations, and any other state action having the effect of law. Based on statutory interpretation and its review of the legislative history of ERISA, the Supreme Court held in 1983 that a state law “relates to” an employee benefit plan and is preempted “if it has a connection with or reference to such a plan.” The Court has emphasized that a state law may be preempted even if it is not specifically designed to affect such plans. Furthermore, even if a state law is not in conflict with ERISA but is, in fact, consistent with it because, for example, it promotes retirement security, it is not spared from ERISA preemption if it “relates to” an employee benefit plan. The broad scope of ERISA's preemption provision has permitted large employers to provide pension plans to their employees without having to establish multiple plans or plan policies depending on differing requirements from state to state. In addition, ERISA's preemption provision helps to ensure that participants are protected by several safeguards. For example, ERISA establishes minimum participation and vesting standards, imposes fiduciary duties on plan sponsors, and authorizes DOL to enforce its requirements.
A 1995 Supreme Court case, however, raised some questions regarding the Court's prior attempts to construe “relate to” and the expansive reach of ERISA preemption. Furthermore, in 2010 a federal appeals court appeared to limit the scope of ERISA preemption when it upheld a local law requiring, among other things, that covered employers make a certain level of health care expenditures on behalf of their employees. About half of private sector workers did not participate in a workplace retirement savings program in 2012. While some workers chose not to participate, we found that most workers who did not have coverage lacked access to such programs. Among those not participating, the majority worked for an employer that did not offer a program or were not eligible for the programs that were offered. In particular, lower income workers and those employed by smaller firms were much less likely to have access to programs, after controlling for other factors. In addition to lacking access, certain workers, such as lower income, service sector, and younger workers, were also less likely to participate in programs even when provided access. However, the majority of these workers participated when they had workplace access. Roughly half of private sector workers participate in a workplace retirement savings program, according to 2012 data. Specifically, self-reported data from the Survey of Income and Program Participation (SIPP) indicate that 45 percent of all private sector workers were participating in a program. However, prior research using SIPP data linked with W-2 tax records has shown that some individuals underreport their participation. To address this issue, we examined similarly linked data to correct for underreporting, and the resulting participation rate increased to 54 percent (see fig. 1). While the W-2 adjusted data show a moderate increase in participation, both measures indicate that many workers lack coverage in a workplace retirement program. Our findings are similar to estimates from our prior work and other studies. For example, the prior research that linked 2006 SIPP data with W-2 tax records shows, using this approach, that the measure of participation among private sector workers increased from 45 percent to 58 percent. A more recent update to this study found that participation further increased to 62 percent in 2012, although the age range of this study differed from our work—this study examined private sector workers ages 21 to 64, while we focused on private sector workers ages 18 and over. Other more recent data from the 2014 National Compensation Survey, a firm-level survey conducted by the Bureau of Labor Statistics, show that 48 percent of private sector workers participated in a retirement plan. Among workers who are not participating, we found that the gap in coverage is mainly due to a lack of access rather than a failure to participate. The vast majority of workers who do not participate—84 percent—reported they did not have access to a workplace retirement program. Access depends on two essential factors: (1) the employer must offer a program, and (2) the worker must be eligible to participate (see fig. 2). Of these two factors, we found that the lack of access was primarily due to employers not offering a retirement program. Specifically, among those who do not participate, 68 percent reported they worked for an employer that did not offer a program, and another 16 percent reported they were not eligible for the program their employer offered (see fig. 3).
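The coverage logic above decomposes into three compounding factors: the employer must offer a program, the worker must be eligible, and the eligible worker must choose to participate. The following is a minimal sketch of that decomposition; the three rates are hypothetical, chosen only to show how the factors multiply, and are not the report's estimates.

```python
# A minimal sketch of the coverage decomposition described above.
# All three rates are hypothetical assumptions for illustration.

offer_rate  = 0.60  # share of workers whose employer offers a program
eligibility = 0.85  # share of those workers who are eligible
take_up     = 0.80  # share of eligible workers who participate

participation = offer_rate * eligibility * take_up
print(f"overall participation: {participation:.0%}")  # about 41 percent
```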
Only 16 percent of those who did not participate reported being eligible and not participating. Certain types of workers, such as those with lower incomes, are much less likely to have coverage compared to other workers. Lower-income workers, in particular, are much less likely to have access to workplace retirement programs and to choose to participate when programs are available. Our analysis found that, after controlling for other factors, workers in the highest income quartile were nearly 4 times as likely as workers in the lowest income quartile to work for an employer that offers a program. The gap in access exists across the income distribution and is even larger when it comes to eligibility—workers in the third and fourth quartiles were, respectively, 4.4 and 7.5 times as likely as workers in the lowest income quartile to be eligible for a program offered by their employer. In addition, lower-income workers had a lower likelihood of participating even when they were eligible (see fig. 4). The combined effects of lower access and lower participation result in large gaps in coverage across income groups (see fig. 5). Overall, approximately 14 percent of workers in the lowest income quartile participated in a program compared to 57 percent and 76 percent of those in the third and fourth income quartiles, respectively. Similarly, according to the W-2 adjusted data, 22 percent of workers in the lowest income quartile participated in a program compared to 67 percent and 84 percent of those in the third and fourth income quartiles. In addition to income, working for a small or mid-size firm is one of the most important factors associated with a lack of coverage. In particular, workers at smaller firms were much less likely to have coverage than workers at larger firms because their employer did not offer a program. After controlling for other factors, workers at the largest firms were more than 9 times as likely as those who worked for firms with 50 or fewer workers to have an employer that offered a program. Even outside the smallest firms, the difference in the likelihood of an employer offering a program was considerable when comparing mid-size and larger firms (see fig. 6). As we have previously reported, smaller firms face challenges in offering programs, such as the perceived complexity and risk of establishing and administering a program. Moreover, smaller and newly formed firms have higher rates of “churn”—business formation and dissolution—and are less likely to offer a program initially. Certain characteristics associated with small employers may also contribute to the challenges of starting and maintaining a program. For example, small employers are more likely to encounter higher rates of worker turnover. In addition, small employers' operating revenue can be uncertain from year to year. Outside of whether an employer offers a program, firm size had little to no effect on eligibility or participation. Workers at small or mid-size firms that offer a program were just as likely to be eligible as workers at larger firms with up to 1,000 workers, after controlling for other factors. Workers at the largest firms—those with more than 1,000 workers—were only slightly more likely than workers at the smallest firms—those with 50 or fewer workers—to be eligible.
Similarly, among those who were eligible, workers at the largest firms were only slightly more likely to participate compared to workers at the smallest firms, although this effect was statistically significant only using the W-2 adjusted data. Overall, about 23 percent of workers at firms with 50 or fewer workers participated in a program compared to 60 percent of workers at firms with more than 1,000 workers. Corresponding participation rates from the W-2 adjusted data were 31 percent and 68 percent, respectively. In addition to income and firm size, other characteristics were also significantly associated with whether a worker has coverage, after controlling for other factors. For example:

Part-time: Part-time workers were less likely to have coverage, primarily due to a lack of access. Specifically, compared to part-time workers, full-time workers were about 2.6 times more likely to be eligible for a program offered by their employer. Full-time workers were also more likely to work for an employer that offers a program, but the difference in likelihood was considerably smaller—by a factor of 1.2. Among those who were eligible, full-time workers were only slightly more likely to participate than part-time workers; however, this result was not significant using the W-2 adjusted data.

Occupation: Workers in management, business, science, and arts occupations were nearly twice as likely as workers in service occupations to work for an employer that offers a program. Those in service sector occupations were also less likely to participate in programs when they had access. However, occupation was not associated with whether workers were eligible for programs offered by their employers.

Age: Compared to older workers, younger workers were generally less likely to be eligible for a program and to participate when eligible. For example, among those eligible, workers ages 18 to 24 were roughly one-half as likely to participate as workers ages 25 to 34. This pattern holds when comparing progressively older age categories of workers with younger workers, with the exception of workers ages 65 and older. Workers ages 65 and older were less likely to have access to a program compared to workers ages 25 to 34, but were no less likely to participate if they were eligible.

Among workers who are least likely to participate—such as lower income, service sector, and younger workers—the majority did so when they had workplace access. The share of workers in the lowest and second income quartiles who participated when eligible was 63 percent and 74 percent, respectively (see fig. 7). Corresponding figures from the W-2 adjusted data were 68 percent and 79 percent. Similarly, the participation rate for eligible service sector workers was 70 percent (74 percent according to W-2 adjusted data). Among the categories of workers we examined, the lowest participation rate among eligible workers was for those ages 18 to 24, but still more than half, 54 percent, participated (59 percent according to W-2 adjusted data). In the six states we studied, proposals were made and, in some, laws enacted in an effort to expand coverage by combining workplace access to a retirement savings program with automatic enrollment and financial incentives—an approach that has helped increase worker participation in countries we studied. For example, the United Kingdom (U.K.)
implemented reforms that require private sector employers to automatically enroll eligible workers in a workplace retirement savings program, allowing workers to contribute to individual accounts and receive financial incentives in the form of employer contributions and tax preferences. A government study published in March 2015 found that, since implementation of these reforms began in October 2012, various stakeholders have generally perceived them as successful, bringing millions of new people into retirement savings programs, with significantly fewer individuals opting out than predicted. In fact, the government reported that by the end of 2014 more than 5 million workers had been automatically enrolled and that only 12 percent of workers had opted out in 2014. Similar to the U.K. and other countries, the state efforts we reviewed in the United States would use a range of approaches to combine workplace access, automatic enrollment, and financial incentives to expand coverage. The six state efforts we reviewed would expand workplace access for uncovered workers in two ways. Some of the states are encouraging small employers to offer workplace access by creating state-run programs or state-facilitated marketplaces through which employers can voluntarily offer workers access to a retirement savings program and payroll deduction. For example, Massachusetts is developing a state-run 401(k) plan that not-for-profit employers with fewer than 20 employees in the state can adopt. Similarly, Washington plans to create a state-facilitated marketplace that would list a variety of qualified providers from which employers with fewer than 100 employees could choose to offer their workers. Laws or bills in other states, such as California, Illinois, and Maryland, would require employers with more than a certain number of employees that do not already offer an employee benefit plan to make their payroll systems available for workers to contribute via payroll deduction. To ensure these employers are able to find a reasonable option to meet this requirement, these states would create state-run programs. (See table 2.) State stakeholders told us that state efforts to expand coverage in workplace retirement savings programs were designed to provide workplace access because research and the experience of employee benefit plans in the United States have shown that workers are more likely to save for retirement if their employer offers a retirement savings program. DOL and the Small Business Administration (SBA) note that payroll deduction—an amount of salary taken from a worker's paycheck—allows workers to save smaller amounts each pay period instead of waiting until the end of the year to set aside money in an IRA. A study prepared for SBA also concluded that the biggest step small employers could take to increase worker retirement savings was to offer them access to a plan. Workplace access enables workers to take advantage of payroll deduction for retirement savings. Federal and state officials and state stakeholders noted that using payroll deductions makes contributing easy for most workers, helping them develop a habit of saving. The countries we reviewed have also taken steps to encourage or require employers to provide workplace access for uncovered workers using approaches similar to the state efforts (see fig. 8). As part of the U.K.'s effort, the government created the National Employment Savings Trust (NEST) to provide employers with a reasonable option to meet the requirement to provide workers access.
This approach is similar to efforts in California, Illinois, and Maryland, and to some state efforts without an employer requirement that still create a state-run program, such as those in Massachusetts and West Virginia. In addition, New Zealand and Canada encourage or require certain employers to offer workplace access. Instead of creating a state-run program for employers, though, New Zealand and Canada license service providers to offer programs that meet established criteria—an approach similar to the marketplace in Washington State. In combination with workplace access, each of the six state efforts we reviewed would require or allow employers to automatically enroll workers in a workplace retirement savings program to increase participation. Specifically, efforts in California, Illinois, and Maryland would require eligible employers that do not offer an employee benefit plan to automatically enroll their workers in the state-run program. In addition, state officials told us that the program for not-for-profit employers in Massachusetts would require employers who adopt the state plan to use automatic enrollment. In each of these programs, workers would have the ability to opt out. For example, California's Secure Choice program would require that employers enroll workers, but an official said workers would have a 90-day opt-out period. In West Virginia, on the other hand, employers who sign up to participate in the state-run program would have the option of automatically enrolling their workers, at the employer's discretion. Similarly, Washington would provide an online marketplace with multiple vehicles—a SIMPLE IRA, payroll deduction IRA, and myRA—and employers that choose to use the marketplace would be encouraged, but not required, to automatically enroll workers. State officials and stakeholders emphasized the importance of automatic enrollment in increasing participation and contributions, which can, in turn, help reduce costs. For example, in California, members of the Secure Choice program's board said that automatic enrollment helps increase participation and promote better outcomes by nudging workers to save. A representative from the Maryland task force also said that without automatic enrollment fewer workers would participate, and the burden to provide financial education would increase. Similarly, our prior work and other research show that automatic enrollment is effective in overcoming workers' inertia and considerably increases participation. For example, we previously found that automatic enrollment has considerably increased participation in programs adopting this feature, with some participation rates reaching as high as 95 percent. In addition to expanding participation, state officials and stakeholders we interviewed said that automatic enrollment can reduce the costs of managing programs as the overall amount of savings increases. Government officials and stakeholders in the countries we reviewed also emphasized the importance of automatic enrollment in increasing participation. According to the former Retirement Commissioner in New Zealand, automatic enrollment has been essential to the success of the country's KiwiSaver program and, without it, a massive education campaign would have been needed. Similarly, other government officials in New Zealand said that automatic enrollment is critical because too many people—even those who want to save—will not actively seek out participation. Government officials and stakeholders in the U.K.
and Canada also highlighted the importance of automatic enrollment in increasing participation. According to a report to Parliament submitted by the largest organization representing unionized workers in the U.K., initial opt-out rates have been lower than expected, and the organization's experience has shown the value of harnessing inertia to improve outcomes for workers. Moreover, the program was designed to further reduce opt-outs because it requires automatic re-enrollment, after 3 years, of those who opted out. Government officials in Canada said that the Pooled Registered Pension Plan (PRPP) program utilizes automatic enrollment to increase participation, but it is unclear how successful it will be because the program is voluntary for employers. While all six state efforts we reviewed anticipate using tax-advantaged vehicles to encourage participation, state officials and others said Roth IRAs, in particular, could help partially address concerns over limited tax benefits for lower-income workers. Financial incentives for participation in retirement savings programs include preferential tax treatment and employer contributions, and multiple stakeholders noted that such incentives encourage worker participation. The state efforts we reviewed all seek to incentivize participation by using vehicles—employer-sponsored 401(k) plans or workplace-based IRAs—that typically qualify for preferential tax treatment. But they do not allow for other financial incentives, such as employer contributions, or do so only to a limited extent. In the absence of other financial incentives, some states, such as Illinois and California, are using or considering a Roth IRA vehicle to address concerns that lower-income workers may realize little or no current tax benefit from savings. Treasury officials and stakeholders said a Roth IRA may be beneficial for workers who have little or no current tax liability but may pay higher taxes in the future. Moreover, Treasury officials said that Roth IRAs can benefit workers with limited resources by allowing them to withdraw contributions tax free under certain circumstances. In light of these issues, Illinois enacted the Secure Choice program, which uses a Roth IRA vehicle, and California's Secure Choice board commissioned a feasibility study to examine this issue. Government officials and stakeholders in the countries we reviewed pointed to the importance of additional financial incentives, such as employer and government matching contributions, in increasing participation. For example, New Zealand's KiwiSaver used a one-time “kick-start” contribution of 1,000 New Zealand dollars (NZD), about 650 U.S. dollars (USD), as well as matching employer contributions and tax benefits, as financial incentives to encourage worker participation in the program. The former Retirement Commissioner in New Zealand and other stakeholders noted that the kick-start has been very popular and effective because it is so easily understood by participants, while the tax incentive is less well understood. Government officials attributed the higher than expected rate of participation, particularly for those opting in to KiwiSaver, to the success of the financial incentives. Similarly, in the U.K., the automatic enrollment requirement includes matching contributions from the employer and government—employers are currently required to contribute 1 percent of a specified range of earnings, which will gradually increase to 3 percent in 2018, while the government contribution will increase from 0.2 percent to 1 percent.
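To make the U.K. contribution arithmetic above concrete, the following is a minimal sketch using the employer rates (1 percent rising to 3 percent) and government rates (0.2 percent rising to 1 percent) cited above. The qualifying earnings figure and the 4 percent employee rate are hypothetical assumptions for illustration only.

```python
# Illustrative arithmetic only: annual contributions on qualifying earnings.
# Employer and government rates are from the text; earnings and the employee
# rate are assumed.

def annual_contributions(qualifying_earnings: float,
                         employee_rate: float,
                         employer_rate: float,
                         government_rate: float) -> float:
    """Total annual contribution across worker, employer, and government."""
    return qualifying_earnings * (employee_rate + employer_rate + government_rate)

earnings = 20_000.0  # hypothetical qualifying earnings (GBP)

print(annual_contributions(earnings, 0.04, 0.01, 0.002))  # initial rates: 1,040
print(annual_contributions(earnings, 0.04, 0.03, 0.010))  # 2018 rates:    1,600
```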
One academic noted that these matching contributions are very important for low-income workers who can afford only a modest contribution rate. In addition to the key strategies discussed above, state efforts would take steps to simplify the overall design and implementation of their programs to reduce employer burden and complexity for workers. Specifically, in order to mitigate some of the challenges of setting up workplace retirement savings programs, state officials seek to (1) limit the responsibility and cost for employers and (2) reduce complexity, cost, and investment risk for workers. These efforts would simplify program administration and investment management with the goals of lowering fees and encouraging broad employer adoption and worker participation. However, state officials also noted that this may elevate the role of the state in administering these programs and create implementation challenges, including how to fund them. Since no state effort has been implemented to date, states may be able to draw lessons from experiences in the countries we reviewed, as well as from other state programs. (See appendix II for a full description of the actions states and other countries have taken to reduce administrative burden and cost for employers.) To reduce administrative burden and cost for employers, states plan to take a number of actions that could offload or assist with typical employer duties in workplace retirement savings programs (see fig. 9). Other countries have taken similar steps to reduce administrative burdens and costs for employers. For example, the U.K. government created NEST to ensure that all employers, particularly small employers, would have access to a low-cost program, with the added benefit of diminishing the burden on employers of choosing an appropriate provider. NEST stakeholders said the existence of a low-cost program with a universal public service obligation reduces the burden on small employers, who might otherwise expend considerable time and effort in identifying a provider willing to serve them at an acceptable cost. Among other things, NEST's governing board selects and monitors providers, takes on fiduciary liability for management of the program's investments, and does not charge employers to set up and use NEST. Lastly, NEST is responsible for sending out welcome packages to new participants with information on how to access the website and create accounts. In addition to addressing challenges for employers, the state and country efforts we reviewed address issues of complexity, cost, and investment risk for workers through a variety of approaches (see table 3). While successful implementation of these efforts by states will likely increase coverage for many private sector workers, some workers in those states may remain uncovered. In particular:

Workers at employers that continue not to offer access to a workplace retirement savings program: While, to avoid creating a burden for these employers, all the states we reviewed would allow small employers of varying sizes and self-employed workers to choose whether to offer access to the state program, many stakeholders felt this would leave some key worker populations uncovered. For state efforts that would create a voluntary program that all eligible employers may offer to their workers, such as Massachusetts' not-for-profit program, stakeholders said many employers would still likely not choose to offer workplace access.
Specifically, one national industry representative said that, by definition, the employers targeted by state programs have already chosen not to offer retirement programs to their workers. Similarly, state and national stakeholders felt that the absence of a requirement for employers to offer workers access to workplace retirement savings programs would significantly lessen any expansion of coverage. Yet even those state efforts that would require eligible employers to offer workplace access, such as Illinois' and California's Secure Choice programs, would apply only to employers above a certain employee size threshold, 25 and 5 employees, respectively. According to one academic, there could be many employers under that size threshold. In addition, there are many self-employed workers who may continue not to have access to a workplace retirement program. For example, while California's target population is 6.3 million workers who lack access to an employer-sponsored plan, California Secure Choice board members estimated that there are an additional 2 million uncovered Californians who are contractors or self-employed. For this reason, California officials said that the Secure Choice program may allow self-employed workers to opt in.

Ineligible workers: National industry stakeholders said that the state efforts would not cover some of the traditionally ineligible populations—including part-time and temporary workers—at employers that already offer qualifying employee benefit plans. Some state efforts that would require employers of a certain size to offer workplace access, such as California's and Illinois', exempt employers that already offer employee benefit plans, and existing law allows those employers to determine which populations are eligible for their plans. By contrast, Maryland's proposed program would cover workers who are ineligible for the employee benefit plan offered by their employer, but only if the employer has 5 or more workers eligible for the state program.

Workers who choose not to participate: Since none of the state efforts we studied would require workers to contribute—all of these efforts allow workers to either opt in or opt out—some workers will choose not to participate. For example, one industry study found that 41 percent of those surveyed postponed saving for retirement in order to pay down student loan debt. In addition, a member of the California Secure Choice board said that some workers may not understand the value of earning investment returns on savings. To help address this, a California official said that the marketing materials for Secure Choice would need to clearly explain the benefits of participation. As noted above, state efforts that would require the use of automatic enrollment will likely achieve broader increases in participation than efforts that allow workers to choose whether to opt in.

Workers with very low earnings who cannot afford to participate: Several stakeholders noted that it may be difficult for workers with very low incomes to afford contributions. For example, a representative from the California Secure Choice board said that lower-income workers may need the money for more immediate expenses. Our prior work indicates that while Social Security retirement benefits replace a higher percentage of earnings for lower-income workers, this alone may not ensure an adequate retirement income.
In Canada, a representative of an association representing program providers said that the PRPP program is targeted to middle-income workers, while lower-income workers are eligible for other federal government income supplements in retirement. Moreover, a representative of an employer group noted that the government is motivated to provide programs to ensure a minimum level of income in retirement because Canadian provinces assume responsibility for the welfare of their citizens, and increased purchasing power can have a positive impact on the overall economy. In particular, outside of PRPP, Canada provides a targeted benefit, the Guaranteed Income Supplement, which supplements the Canadian universal Old Age Security program to ensure low-income seniors have a minimum level of income in retirement. States face legal uncertainty that could result in legal challenges in connection with efforts to expand coverage in workplace retirement savings programs for private sector workers, and this uncertainty will continue if no action is taken by Congress or relevant agencies, including DOL and Treasury. For over a decade, legal uncertainty has influenced the design of state efforts. More recently, four of the six states we studied have enacted legislation to increase coverage in workplace retirement savings programs, and legislators in other states have introduced similar bills or have studied potential solutions to expand coverage. However, state and national stakeholders said these efforts face potential challenges because of legal uncertainties created by existing federal law—ERISA—and various agency regulations, depending on the type of program state efforts intend to create (see fig. 10). While stakeholders noted multiple issues causing legal uncertainties for state efforts, the most prevalent and pervasive was ERISA preemption. Specifically, ERISA preempts, or invalidates, any and all state laws that "relate to" any private-sector employee benefit plan. Generally, the "relate to" provision in ERISA could apply to state laws that either directly regulate employee benefit plans or, in some cases, only indirectly affect such plans. For example, a state law that mandates the way in which employee benefit plans are administered may be determined to "relate to" such plans and may, therefore, be preempted. In this way, ERISA's preemption provision enables employers to establish uniform plans and administrative schemes, so that they do not have to comply with different requirements for employees located in different states. Whether ERISA preempts a state law has historically been determined by federal courts, so states may face litigation. One national stakeholder indicated that it might be beneficial for a state to implement a program and go through the resulting litigation to resolve some of the areas of legal uncertainty and clear the way for other states to implement similar programs. However, other state and national stakeholders were concerned about the potential consequences for workers and employers should an implemented program later be preempted. For example, they noted that a preempted state program might lose its preferential tax treatment or create risk for employers that chose to offer it. Given these implications of uncertainty regarding ERISA preemption, state efforts to expand access to millions of workers and address the retirement savings shortfall may be delayed or deterred.
Based on our interviews with state and national stakeholders and government officials, none of the state efforts we reviewed are immune from legal uncertainty caused by ERISA preemption, but the type of uncertainty differs depending on the details of the state efforts.

Employee benefit plan programs. For states that are attempting to use an employee benefit plan, such as a 401(k) plan or SIMPLE IRA, DOL officials told us that it is unclear whether a state could create a program without being preempted by ERISA because it is unclear what level of state effort would "relate to" employee benefit plans. For example, Massachusetts is the furthest along in implementing an effort that would create a 401(k) plan that not-for-profit employers with 20 or fewer employees could adopt, but national stakeholders had mixed opinions about whether its program will be preempted if legally challenged. Among other things, a Massachusetts official said that the state plans to take on administrative responsibilities and oversight of the program's service providers and will charge employers who choose to offer the program, but employers will still be plan fiduciaries.

Payroll deduction IRA programs. Partly to avoid uncertainty caused by ERISA preemption, four of the six states we examined would create programs using payroll deduction IRAs because, by complying with relevant DOL regulations, such IRAs are not employee benefit plans and are not subject to ERISA. However, programs relying on payroll deduction IRAs could run into preemption uncertainty similar to that facing state efforts with employee benefit plans because the DOL regulation does not address some key questions. First, the regulation was promulgated primarily to provide guidance to employers and, as DOL officials noted, it does not specify whether a state can offer payroll deduction IRAs to private sector workers. In addition, DOL officials said the regulation does not address whether certain program features states intend to use would cause the programs to be considered employee benefit plans. For example, some states would like to capitalize on the potential advantages of using automatic enrollment for workers and requiring certain employers to offer workplace access to retirement savings programs. If these features cause the programs to be considered employee benefit plans, stakeholders said there would be uncertainty regarding preemption. To address this uncertainty, state and national stakeholders thought DOL and Treasury should provide guidance, and one thought DOL should clarify its regulation on payroll deduction IRAs. On July 13, 2015, the President announced that DOL would propose a set of rules by the end of the year to provide a clear path forward for states to create retirement savings programs. DOL officials said the agency's role is limited under ERISA without further Congressional action—it can revise and promulgate regulations, but there is nothing in ERISA that would allow it to waive preemption for state efforts. In one case, Illinois law explicitly requires the state to request an opinion or ruling from DOL on the status of the program with respect to ERISA before the program can be implemented. An Illinois stakeholder said that the state does not have to wait for a DOL opinion to implement the program, but implementation would stop if DOL sent a letter saying the Illinois program had to comply with ERISA. While Illinois and other states may reach out to DOL for an opinion, as of June 2015, DOL officials told us they had not received any such requests.
Even if they did, DOL officials said the department does not have a formal process for issuing such opinions, and the opinion would not necessarily be binding in court. As a result, DOL's opinion may not give states the level of certainty regarding preemption they need to proceed. Similarly, state and national stakeholders said state experimentation with various approaches could help determine which ones work best for expanding coverage in workplace retirement savings programs, so some have called for increased flexibility with respect to ERISA preemption. Given the need of many workers to increase their retirement savings, and limitations on DOL's ability to provide flexibility regarding ERISA preemption, stakeholders have suggested a number of ways to address uncertainty and facilitate state efforts to expand coverage in workplace retirement savings programs:

Amend ERISA's preemption provision. Some stakeholders suggested Congress could amend ERISA's preemption provision by adding an exception for state efforts that expand coverage in workplace retirement savings programs.

Pilot program. DOL officials told us a pilot program proposed in the Fiscal Year 2016 President's Budget Submission, issued February 2, 2015, could help identify actions states could take to effectively expand coverage in workplace retirement savings programs and determine whether new or revised DOL regulations or guidance are needed. Under this proposal, DOL would select a small number of states to implement different approaches to increasing coverage in workplace retirement savings programs. As part of such a pilot program, DOL officials said the department would need statutory authority from Congress to temporarily waive ERISA preemption in the selected states for the pilot program timeframe. They said some of the appropriations DOL may receive pursuant to the program, should it be authorized, could be used to fund start-up costs for state efforts, given the potential implementation challenges noted by states. A key part of the pilot program would also involve collecting data on state efforts to give the government and experts an opportunity to see which strategies will actually increase coverage before making more permanent changes to permit state efforts.

Safe harbor. DOL officials and a national stakeholder said Congress could authorize DOL to establish a regulatory safe harbor for certain state efforts. DOL officials said the pilot program could even be considered a less permanent version of a safe harbor—albeit one limited to a small number of states. To address other areas of legal uncertainty under ERISA, Congress has sometimes authorized DOL to prescribe safe harbors setting out conditions under which entities can operate without running afoul of the law. For example, PPA provided for DOL to prescribe by regulation a safe harbor for plans adopting automatic enrollment that, among other things, invest plan contributions in a qualified default investment alternative (QDIA). DOL promulgated a regulation describing the types of investments that qualify as QDIAs.

In addition to the uncertainty caused by ERISA preemption, state and national stakeholders and government officials shared other issues causing legal uncertainty. For example, DOL officials were concerned about states offering payroll deduction IRA programs because such programs would presumably fall outside of ERISA and DOL regulation.
They said the protections provided by ERISA are important for employee benefit plan participants and that DOL has already developed a proven regulatory framework. Stakeholders said other legal uncertainty is caused by conflicting DOL and Treasury policies related to multiple employer plans, and by questions about whether certain Treasury regulations allow states to implement a guaranteed return or pool assets to achieve scale. For these other issues, states will continue to face uncertainty under existing DOL and Treasury regulations (see table 4). Millions of workers in the United States have little or no retirement savings, an issue exacerbated by the lack of access to workplace retirement savings programs for many of them. Without this coverage, a significant number of Americans face the prospect of financial insecurity in retirement, and federal and state safety net programs face the potential for bearing increased financial burdens. Despite several major changes to federal law during the last few decades, federal action has not spurred a significant increase in coverage. Recognizing this need to increase coverage, and thereby increase retirement savings, some states have undertaken efforts that would require or encourage employers to expand access to workplace retirement savings programs. However, the existing framework of federal law and regulation was not designed to foster a state role in providing coverage to private sector workers, and the resulting uncertainties about the application of that framework raise questions about the future and success of such efforts. Changes at the federal level—Congressional action combined with revised regulations and guidance within the authority of relevant agencies, particularly DOL—could help address these uncertainties. These actions would require difficult policy choices and involve weighing the benefits of uniformity and consistency provided by ERISA preemption against the potential value of state efforts to adopt innovative approaches to address the lack of sufficient retirement savings by their citizens. Moreover, along with the known regulatory challenges already identified by state officials and experts we interviewed, other areas of uncertainty could emerge through the experience of states implementing these programs. Congress has several options for legislative action, each of which highlights some of the difficult policy choices and trade-offs that would need to be considered. For example, amending the ERISA preemption provision to add exceptions for any of the state efforts discussed here would provide states with certainty about which types of efforts they could undertake. It might also set a precedent for additional exceptions that could diminish the nationwide uniformity and stability the preemption provision is intended to create. Alternatively, a pilot program could permit states to test, with DOL involvement, innovative approaches to increasing coverage. However, by their very nature, pilot programs involve a limited number of states and therefore would not create certainty for states outside the pilot that wish to expand coverage. Pilot programs are also generally temporary in nature, so even the included states may not have the benefit of long-term certainty about the feasibility of their efforts.
Finally, through the creation of a statutorily authorized safe harbor, DOL could identify a small number of options available to states that would not run afoul of ERISA's preemption provision, thereby retaining some degree of ERISA uniformity for employers. However, developing a safe harbor option that would appeal to states and employers while retaining key protections for workers could be challenging, and little is known about the relative effectiveness of any particular model in actually increasing coverage and retirement savings. To address the legal uncertainty stemming from ERISA preemption of state laws while maintaining the advantages of ERISA for both employers and workers, Congress should consider providing states limited flexibility to pursue efforts to increase coverage under workplace retirement savings programs. To do this, Congress could, for example, direct or authorize the Secretary of Labor, in consultation with the Secretary of the Treasury, to (1) promulgate regulations prescribing a limited safe harbor under which state workplace retirement savings programs with sufficient safeguards would not be preempted and would receive tax treatment comparable to that provided to private sector workplace retirement savings programs, or (2) create a pilot program under which DOL could select a limited number of states to establish workplace retirement savings programs subject to DOL and Treasury oversight. In either case, any such initiative should ensure that state programs include adequate participant protections and are subject to agency oversight, appropriate reporting requirements, and meaningful evaluation. To facilitate state efforts to expand coverage in workplace retirement savings programs, we recommend that the Secretary of Labor and the Secretary of the Treasury consider their authority and review and revise, if necessary, existing regulations and guidance causing uncertainty for state efforts. For example, the Secretary of Labor could direct the Employee Benefits Security Administration's (EBSA) Assistant Secretary to revise Interpretive Bulletin 99-1 to clarify whether states can offer payroll deduction Individual Retirement Accounts (IRAs) and, if so, whether features in relevant enacted state legislation—such as automatic enrollment and/or a requirement that employers offer a payroll deduction—would cause these programs to be treated as employee benefit plans. We provided a draft of this report to DOL, Treasury, the Pension Benefit Guaranty Corporation (PBGC), and the Social Security Administration (SSA) for their review and comment. PBGC and SSA did not provide comments. DOL provided written comments, which are reproduced in appendix VII. DOL also provided technical comments, which we have incorporated where appropriate. Treasury provided oral and written technical comments, which we have incorporated where appropriate. Treasury generally agreed with the findings, conclusions, and recommendation of this report. In its written comments, DOL generally agreed with the findings and conclusions of the report. The department noted that inadequate retirement savings has a detrimental impact on the well-being of older Americans and increases the burden on state and federal retirement income support programs. In addition, DOL noted that many of the states engaged in efforts to address this issue by expanding coverage in workplace retirement savings programs have questions about preemption by ERISA. DOL generally agreed with the recommendation of the report.
To address uncertainty facing state efforts, EBSA is initiating a rulemaking entitled "Savings Arrangements Established by States for Non-Governmental Employees," which will appear in the Fall 2015 Semi-Annual Regulatory Agenda. EBSA expects to publish a Notice of Proposed Rulemaking by the end of 2015. We agree that DOL should review and revise existing regulations and guidance to accomplish all that can be done administratively to facilitate state efforts to expand coverage.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, the Acting Director of the PBGC, the Acting Commissioner of the Social Security Administration, the Secretary of the Treasury, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.

Our objectives for this study were to examine: (1) recent estimates of workplace retirement savings program coverage, including eligibility and participation, and characteristics of workers who lack coverage; (2) strategies used by states and other countries to expand coverage among private sector workers; and (3) potential challenges states could face given existing federal law and regulations. To answer our research objectives, we used several different approaches. We examined data on private sector workers. We reviewed relevant research, selected state and federal legislation, and federal laws, regulations, and guidance. In addition, we interviewed state, national, and international industry stakeholders and government officials, including those at the Department of Labor's (DOL) Employee Benefits Security Administration, the Department of the Treasury (Treasury), the Pension Benefit Guaranty Corporation, and the Social Security Administration (SSA). Section 1 describes the information sources and empirical methods we used to examine workplace access and participation across various characteristics of workers. Section 2 describes the methodology by which we identified state efforts and selected case studies, and reviewed the strategies they use and the potential challenges they could face. Section 3 describes similar methodology for our international review. To address our first objective, we obtained information from the Survey of Income and Program Participation (SIPP) along with taxpayer data from W-2 filings, reviewed relevant literature, and conducted interviews with academics, industry stakeholders, and agency officials. SIPP is a nationally representative survey conducted by the U.S. Census Bureau. This panel survey is generally conducted every 4 years and resurveys participants every 4 months in a series of "waves" for the duration of their panel. Within each wave, Census administers a core survey consisting of questions that are asked at every interview, and several modules relating to a particular topic. We used data from the core survey and the topical module on retirement and pension coverage fielded from January to April 2012, the most recent data available.
The survey collected data on about 52,000 individuals, including detailed information on work history, demographics, assets, and income. In comparison to other nationally representative surveys, SIPP has several main advantages. First, SIPP collects separate information on defined benefit (DB) and defined contribution (DC) plans. Other surveys, such as the Current Population Survey, do not distinguish between income from and participation in DB and DC plans. Second, the SIPP sample is larger than comparable surveys, such as the Survey of Consumer Finances (SCF). Consequently, it is possible to produce point estimates for demographic subcategories with a higher degree of reliability. Further, in comparison to SCF, which oversamples wealthy households, SIPP oversamples lower-income households—arguably an important component of an analysis of income security. Despite its advantages, SIPP has two limitations for our analysis. First, as with most survey data, SIPP data are self-reported. This can be problematic for the reporting of data on income sources and retirement program participation. For example, respondents might incorrectly report that they participate in a workplace retirement savings program when they do not. Second, although SIPP differentiates between participation in a DB or DC plan, it does not contain full information on whether an individual's employer offers a DB plan. Previous research has also found evidence of under-reporting of retirement program participation by comparing self-reported survey responses to W-2 tax records. Specifically, W-2 records include information on contributions to tax-advantaged retirement programs. By comparing the SIPP data to W-2 tax records, researchers can identify under-reporting of program participation. Similar to the approach used in prior research, we worked with Census to create a W-2 adjusted indicator of participation. If a respondent reported not participating but actually had positive contributions to a workplace retirement program reported on their W-2, the respondent was reclassified as participating. The data did not allow us to correct for the opposite error—the possibility that some respondents report participating when in fact they do not. Specifically, if a respondent reported participating but no contribution was evident on the W-2, we did not recode the respondent as not participating, because we cannot rule out the possibility that the employer offers a defined benefit plan or a defined contribution plan to which only the employer contributes. We conducted a data reliability assessment of selected SIPP variables by conducting electronic data tests for completeness and accuracy, reviewing documentation on the dataset, and interviewing knowledgeable officials about how the data are collected and their appropriate uses. For the purposes of our analysis, we found the variables we ultimately reported on to be sufficiently reliable. We compared our estimates to estimates provided by Census using the SIPP data linked to tax records on retirement program contributions. Census replicated our analysis using the public use SIPP data with consistent results. Further, the results of our regression analysis of participation using the self-reported measure in the public use data and the W-2 adjusted measure were very similar in the size and significance of the variables included in our analysis. Our sample included respondents age 18 and older working in private sector jobs.
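The one-directional nature of this adjustment can be expressed compactly. The sketch below is a simplified illustration of the recoding rule described above, not the actual Census programming, and the column names are hypothetical.

```python
import pandas as pd

# Simplified illustration of the one-directional W-2 adjustment described
# above; column names are hypothetical, not actual SIPP or W-2 variables.
workers = pd.DataFrame({
    "self_reported_participation": [False, False, True, True],
    "w2_retirement_contributions": [0.0, 1200.0, 0.0, 3000.0],
})

# A self-reported non-participant with positive W-2 contributions is
# reclassified as participating.
workers["w2_adjusted_participation"] = (
    workers["self_reported_participation"]
    | (workers["w2_retirement_contributions"] > 0)
)

# Note the asymmetry: a self-reported participant with no W-2 contributions
# is NOT recoded as a non-participant, because the employer may offer a DB
# plan or an employer-funded DC plan that leaves no trace on the worker's W-2.
print(workers)
```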
For all SIPP analyses, we used SIPP individual-level weights to compute point estimates. Table 5 provides an overview of the number of private sector workers participating in workplace retirement savings programs using the W-2 adjusted data. To determine the extent to which private sector workers are covered by workplace retirement savings programs and the characteristics of those who lack coverage, we reviewed relevant literature and interviewed researchers, stakeholders, and agency officials to discuss relevant research methodologies and findings. This review informed our analysis of SIPP and Census data. Specifically, we examined the likelihoods, or odds, of the following outcomes: (1) participating in a retirement program (among all private sector workers), (2) having an employer that offers a retirement program (among all private sector workers), (3) being eligible (among those offered programs), and (4) participating in a program (among those who are eligible). The regression models we used to estimate these likelihoods included variables for the following characteristics of workers: income, occupation, education, age, gender, marital status, race/ethnicity, size of the firm they worked for, whether they worked full-time or part-time, whether they worked for the full year or only part of the year, and whether or not they were union members. We examined regression results from the SIPP public-use data and the linked W-2 data. As described in the body of this report and appendix VI, the results were very similar in the size and significance of the variables included in our analysis for both measures of participation. To understand factors that may be associated with access to workplace retirement programs and inform the methodology for our study, we conducted a literature review. A formal literature search was conducted by a GAO reference librarian using the ProQuest database. In addition, we coordinated with the Congressional Research Service and the Congressional Budget Office to identify relevant studies, and checked with DOL and SSA officials as to whether they would recommend any additional materials. Finally, during interviews with outside researchers, we asked for recommendations for other noteworthy studies. We performed these searches and identified articles from June 2014 to September 2014. Our review primarily focused on studies from the last 5 years (2009-2014). We reviewed article abstracts, identified those that were most relevant to our research objectives, and developed detailed spreadsheet summaries of study goals, methodology, and findings. To review the strategies used in state efforts to expand private sector retirement coverage and the potential challenges they could face given existing federal law and regulations, we compiled a list of recent state efforts and conducted case studies in six states. To provide context on the number and type of recent state efforts, we (1) developed a list of recent efforts in 29 states by reviewing industry websites and publications, interviewing federal officials and knowledgeable industry representatives, and conducting targeted searches of legislative databases; (2) confirmed the completeness of the list of states we identified at multiple points during the process with knowledgeable stakeholders; and (3) described the strategies in these state efforts. See appendix IV for a full description of our methodology and results.
We selected a limited number of state efforts for case studies in October 2014 to provide non-generalizable examples of the types of efforts underway to expand coverage. To do so, we asked officials from DOL and Treasury and representatives from the Pension Rights Center, the American Society of Pension Professionals and Actuaries, and the retirement issues group AARP to recommend "leading" state efforts. They recommended eight states: California, Connecticut, Illinois, Maryland, Massachusetts, Oregon, Washington, and West Virginia. From those eight states, we selected six for case studies based on the following criteria:

States that enacted, or were expected to enact, legislation that leads to implementation of a substantive effort to expand coverage.

Some parity in the numbers of states from each of the two broad categories of state efforts we initially identified—automatic IRA and other voluntary account-type programs.

State efforts with some differences in how the broad categories were approached.

Based on these criteria, we selected California, Illinois, Maryland, Massachusetts, Washington, and West Virginia. We did not select Oregon because the state's Retirement Savings Task Force had just completed a study recommending that the program have certain characteristics, but any legislative proposal that might utilize those recommendations would not be available until 2015 at the earliest. We did not select Connecticut because its recent legislation and historical context were similar to those of other states we selected. To conduct the case studies in the six states, we reviewed applicable GAO and academic research and legislative documentation on each state's effort. Where available, we reviewed status updates and final reports by the state government or appointed task force or board. In addition, we interviewed national industry stakeholders and academics with knowledge of state efforts, and key stakeholders in the states including, where applicable, elected state officials, state government officials, board or task force members, and employer, worker, and industry representatives. We asked about key features of the state efforts and the advantages, disadvantages, and challenges of the strategies they use. We conducted some of these interviews in person in California, Illinois, and Washington. To examine strategies used by other countries to expand coverage and identify lessons learned for the United States, we studied efforts in three countries that have voluntary workplace retirement systems—Canada, New Zealand, and the United Kingdom (U.K.). Our review provides non-generalizable examples of the types of efforts underway to expand coverage outside the U.S. We conducted an initial review of workplace retirement savings programs in Organisation for Economic Co-operation and Development (OECD) countries and consulted with knowledgeable industry stakeholders at the OECD and World Bank, among others. Given this information, we selected countries that met the following criteria:

Private sector workplace retirement savings programs are an important pillar of the country's retirement system.

The country has well-developed financial markets.

Reforms designed to increase coverage have been implemented or are in the process of being implemented.

Reforms use strategies similar to those in the state efforts we identified. Specifically, the reforms use a voluntary approach or require employers to offer a program but allow workers to opt out.
The country was identified through our research and the consensus of knowledgeable external stakeholders as having strong potential for yielding useful lessons for the United States.

The country's program is not duplicative. Where similar programs existed in multiple countries, we selected the one that best addressed the other selection criteria.

As part of our review, we examined available documentation and analyzed the selected countries' systems based on the strategies used to increase coverage and their potential effectiveness in the United States. In particular, we examined eligibility and enrollment features, as well as measures targeted to workers who tend to lack coverage (e.g., those who work for small employers or are self-employed). We interviewed knowledgeable industry stakeholders and government officials from each country, as well as academics and national stakeholders based in the United States, about each strategy's strengths, weaknesses, tradeoffs, and lessons learned for the United States. We did not conduct an independent legal analysis to verify the information provided about the laws, regulations, or policies of the countries selected for this study. Instead, we relied on appropriate secondary sources, interviews, and other sources to support our work. We submitted key report excerpts to agency officials in each country for their review and verification, and we incorporated their technical corrections as necessary. We also note that a legal feature's success in one or more of the countries we visited, which may have significantly different cultures, histories, and legal systems from the United States, does not necessarily indicate that it would be successful in the United States. We conducted this performance audit from June 2014 through September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The states we reviewed would attempt to use employers' existing payroll processes to reduce administrative burden for employers, but some stakeholders said this may create challenges, which could be mitigated based on state experience with other programs. For example, Maryland's Task Force found that the best way to minimize administrative burden on small employers is to use their existing payroll process. While some stakeholders were concerned about employers' payroll deduction capabilities, research shows that the majority of employers already use electronic payroll services. Because of this, state and national stakeholders told us that deducting and remitting contributions from a worker's paycheck would not substantially increase the burden or cost for most employers. However, California stakeholders noted that using employers' existing payroll processes would create a large state role in educating employers about their responsibilities and collecting payroll deductions. For example, the California Secure Choice program would need to be able to accept contributions from millions of small employers, and one stakeholder noted that the program would likely have to deal with privacy issues if employers use tax identification information to remit payments.
However, some states hoped to utilize the experience of state agencies that already handle other employer payroll deductions and payments. For example, board members said that the California Secure Choice program may utilize the services of the state's Employment Development Department, which already collects employment taxes from employers in the state. While implementation details still need to be decided, board members hoped to capitalize on the department's connection with California employers, its infrastructure for processing payroll deductions, and its experience with enforcing employer requirements. Similarly, Illinois' Department of Revenue currently oversees employers' payroll deposits for taxes, so Illinois stakeholders thought that the Illinois Secure Choice program might be able to leverage its administrative and enforcement experience. Government officials and stakeholders in New Zealand noted that this approach has worked well for KiwiSaver. Inland Revenue, New Zealand's tax collection agency, is the interface between employers and providers, which officials said simplifies the administrative role for employers. Employers do not have to engage with the various KiwiSaver providers a worker may select or administer worker requests for opting out. Employers simply send a worker's contributions to Inland Revenue, and the agency forwards the contributions to the worker's KiwiSaver provider. In recognition of resource limitations for small employers, all of the state efforts would allow small employers of varying sizes to choose whether to participate in the state program. While stakeholders and research indicate that payroll deduction is readily accessible for the majority of employers, some of the smallest employers may not yet have migrated to an electronic payment system. As a result, each of these state efforts allows the smallest employers to choose whether to participate in the program. In particular, states that are considering requiring employers to offer workplace access have exempted the smallest employers from the requirement but would allow them to choose to offer access to the program. For example, Illinois law exempts employers with fewer than 25 employees and California law exempts employers with fewer than 5 employees (the size cutoffs are illustrated in the sketch following this paragraph). Stakeholders did note a drawback to allowing small employers—as well as self-employed workers—to choose whether to offer the state program. Specifically, if too few employers choose to offer a purely voluntary program, it may not be able to achieve the scale necessary to ensure that costs are reasonable. In contrast, some of the countries we studied required even small employers to offer workplace access, but they took other actions to recognize the resource limitations of small employers, including phasing in the employer requirement and creating specific programs that support small employers. For example, in the U.K. the employer requirement to offer access and automatically enroll workers is being phased in gradually between October 2012 and February 2018 based on employer size, which provides smaller employers additional time to prepare for meeting the new requirements.
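The sketch below encodes the small-employer cutoffs just described (fewer than 25 employees in Illinois, fewer than 5 in California). It is a deliberately simplified illustration of how the size exemption interacts with the exemption for employers that already offer a plan, not a restatement of either statute.

```python
# Simplified illustration of the small-employer exemptions described above
# (Illinois: fewer than 25 employees; California: fewer than 5). Both laws
# contain additional conditions not modeled here.
EXEMPTION_THRESHOLDS = {"Illinois": 25, "California": 5}

def must_offer_state_program(state: str, employee_count: int,
                             offers_own_plan: bool) -> bool:
    """Return True if an employer would be required to offer the program."""
    if offers_own_plan:
        # Employers that already offer a qualifying employee benefit plan
        # are exempt from the requirement.
        return False
    return employee_count >= EXEMPTION_THRESHOLDS[state]

# A 10-employee firm with no plan of its own would be exempt in Illinois
# but covered in California; smaller firms could still opt in voluntarily.
print(must_offer_state_program("Illinois", 10, offers_own_plan=False))    # False
print(must_offer_state_program("California", 10, offers_own_plan=False))  # True
```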
However, U.K. officials concluded that, even with the phase-in and without further government action, small employers might find it difficult to meet the automatic enrollment requirements because of a supply gap—the Pensions Commission concluded that existing providers would find it unprofitable to serve, at a reasonable cost to workers, employers that did not previously offer workers access, the overwhelming majority of which had fewer than 15 employees. To address the supply gap, the U.K. government created the National Employment Savings Trust (NEST) to ensure that all employers, particularly small employers, would have access to a low-cost program, with the added benefit of diminishing the burden on employers of choosing an appropriate provider (see text box 1). Similarly, the Australian government recognized the specific needs of small employers by creating the Small Business Superannuation Clearing House in 2010 to simplify administrative requirements and lessen the administrative burden on small employers. The Clearing House is a free service that allows eligible employers to make one payment at least quarterly on behalf of all of the employer's workers. The Clearing House then disburses the payments to each worker's chosen fund (see text box 2).

Text Box 1: The United Kingdom's National Employment Savings Trust (NEST) has a public service obligation to accept any employer (and any qualifying worker) that wishes to use it, and in this way serves all those employers and workers who are unable to find another provider of a retirement savings program. The existence of a low-cost program with a universal public service obligation reduces the burden on small employers who might otherwise expend considerable time and effort in identifying a provider willing to serve them at an acceptable cost. In addition, there are no charges for employers to set up and use NEST. Finally, NEST has worked with the major payroll software providers in the United Kingdom to integrate their platforms with NEST in order to reduce data compliance burdens on employers. NEST representatives thought that this effort could make NEST even easier to use for small employers.

Text Box 2: Australian officials explained that the Small Business Superannuation Clearing House was set up to help employers make contributions on behalf of their workers as required by the Australian Superannuation system. They thought that these contributions could represent a heavy administrative burden for small employers because workers are allowed to choose the provider to which they want the employer to send their contributions, employers must make payments at least quarterly, and superannuation providers all have slightly different forms and payment delivery methods. The Small Business Superannuation Clearing House, created in 2010, was intended to simplify administrative requirements, lessening the administrative burden for small employers—those with 19 or fewer workers or, from July 1, 2015, those with total income of less than $2 million Australian dollars (AUD) (about $1.4 million USD). Employers log onto the Clearing House's website and are required to complete a one-time registration, after which their involvement is low. As of July 31, 2015, about 120,000 employers were registered to use the service, approximately 15 percent of those eligible. The majority of employers who use the service generally have between 3 and 19 workers. The government has found that these employers typically do not have payroll software.
Officials said that employers who use the Clearing House say it saves them time and gives them peace of mind because use of the system guarantees they have met their obligation. The Clearing House is funded through government appropriations and costs the government about $6 million AUD annually. States would also take other actions to reduce administrative burden for employers and may be able to learn from other countries' implementation experiences, especially those of NEST in the U.K., and from their own experiences with other state savings programs, including college savings plans and public pension plans (see table 6). These actions to reduce administrative burden for employers elevate the state's role in the programs, which could create implementation challenges, including how to fund state actions. Stakeholders generally noted the importance of the state's acceptance of some of the employer's administrative duties and liability in encouraging employers to offer workplace access. This state role is particularly important for efforts that would require eligible employers to participate, such as those in California, Illinois, and Maryland. While enabling legislation provides the intent and direction of these state programs, each state effort's governing body will have to make many decisions to develop and implement the programs, including how to educate employers about their responsibilities. Three of the four states we studied that had enacted such legislation—Massachusetts, California, and Illinois—have already taken, or planned to take, at least 2 years to study options and design their programs. In addition, stakeholders noted that the six states we reviewed would likely incur potentially significant startup and ongoing costs and saw potential challenges in determining how to fund those costs. For example, a California official expects the feasibility study and related legal analysis to cost $1 million, but total startup costs could be much larger. Since the feasibility study has to be funded without a state appropriation, the official said that an immediate challenge was to raise the necessary funds so the state could better estimate the costs of developing the program and of ongoing administration. The U.K. recently dealt with this issue when it created NEST, which received a loan from the government. A NEST representative said NEST had spent about 300 million British pounds (GBP) (about $459 million USD) over the last 2.5 years to (1) develop systems infrastructure and architecture; (2) enroll employers; and (3) fund the program in the first few years while contributions are very low. For the latter, the NEST representative said that current members are contributing 2 percent of their generally low wages—at these rates, contributions could be 20 GBP (about $31 USD) a month—so NEST does not receive sufficient revenue from these accounts to pay back the loan and, in fact, runs a cash flow shortfall. In these early years, NEST is running a slightly higher shortfall than expected because the income levels of members are lower than expected. To provide federal context, we interviewed federal agency officials and knowledgeable industry representatives, and we conducted targeted searches of legislative databases.
Then, from all bills in the 113th and 114th Congresses categorized in the Legislative Information System as having retirement as a topic, we selected those that would expand, or would have expanded, coverage in workplace retirement savings programs and discuss them in the table below (see table 7). Because this is a select list, however, and we did not include bills that may have had an indirect effect on workplace retirement savings program coverage, it should not be viewed as exhaustive. We developed the following list of 29 states with recent efforts to expand retirement savings program coverage for private sector workers—including the introduction of or action on legislation, the enactment of legislation, an executive action, or a study—by (1) reviewing information posted online by industry stakeholders, including the National Council of State Legislatures, the Pension Rights Center, and other industry publications; (2) conducting targeted searches of enacted and proposed state legislation using LexisNexis and WestLaw, in consultation with a law librarian; and (3) interviewing federal officials and knowledgeable industry representatives, including stakeholders at the Pension Rights Center, the American Society of Pension Professionals and Actuaries, and AARP. At multiple points during this process, we confirmed the completeness of our list of states with knowledgeable industry stakeholders, including through a more in-depth review by a representative of the Center for Retirement Initiatives at Georgetown University, a public policy center created to promote retirement savings solutions at the state level in the United States. (See fig. 11.) We described the strategies proposed in the state efforts based on documentation we obtained from state legislative or executive office websites. We use "state efforts" to refer to a range of activities that may have occurred in a state, including the introduction of a bill, executive action, studies, or the enactment of legislation. While states are exploring various approaches, a number of state efforts seem to incorporate strategies, such as payroll deduction individual retirement accounts (IRAs) and automatic enrollment, that are similar to those incorporated into "automatic IRA" proposals submitted by the President and some members of Congress. We compiled this list, shown in table 8, by interviewing state officials and knowledgeable industry representatives and conducting targeted searches of legislative databases. We did not conduct an independent legal analysis to determine whether the features and strategies used by states will hold up under scrutiny, and this list may unintentionally exclude relevant state efforts that we did not identify.

Workplace Access and Automatic Enrollment
Federally regulated and most provincially regulated employers can voluntarily offer PRPPs to their workers, while Quebec will require employers with five or more eligible workers to offer the VRSP. If a federally or provincially regulated employer chooses to offer a PRPP, or is required to offer the VRSP, the employer will contract with a licensed provider who will manage the set-up process. Eligible workers will be automatically enrolled. However, workers have 60 days to opt out or may, 12 months after their contributions begin, set their contribution rate to zero for a fixed period of up to 5 years. Full-time workers are immediately eligible to join a PRPP, and part-time workers are eligible after 24 months of continuous service.
Eligible workers for Quebec’s VRSP are those who are ages 18 and older, have 1 year of continuous service, and who otherwise do not have the opportunity to contribute to a workplace retirement plan. Financial Incentives and Worker Contributions PRPPs and VRSPs offer workers tax benefits to encourage participation, but contribution requirements vary. Worker contributions to PRPP receive a corresponding tax deduction up to a set contribution limit. Default contribution rates will be set by the plan administrator for their PRPP, but providers may permit workers to adjust the contribution rate. For VRSP, the default contribution rate is set by regulation at 2 percent until 2017, increasing to 3 percent in 2018 and 4 percent in 2019. However, workers can alternatively set their contribution rate to zero. Employers are not required to make financial contributions to their workers PRPP or VRSP accounts. Plan Providers Service providers, like banks or insurance companies, must apply for a license to become an administrator of PRPPs or VRSPs, and take on the fiduciary responsibility for administering the plans. Investment Options Licensed administrators will offer a limited number of investment options including a default option that will be either a balanced fund or a target date fund, and up to five additional investment options with varying degrees of risk. According to research by a Canadian financial institution, VRSP is estimated to enroll 1 million workers in the province by 2018. Because the PRPP is voluntary for employers and implementation is in the early stages, estimates on participation rates remain unclear. Fees To ensure fees are reasonable, government regulation requires that PRPP providers charge fees equivalent to or below those charged to members of defined contribution plans with 500 or more members. As of July 2015, VRSP fees for the default option range between 1.09 and 1.25 percent. At a glance To increase national savings and address the declining trend in private sector retirement plan coverage of the under 65 population—from about 20 percent in 2001 to about 15 percent in 2007—the New Zealand government passed the KiwiSaver Act. The act established KiwiSaver in 2007 as a government- sponsored defined contribution workplace program. KiwiSaver requires employers to automatically enroll all new workers into a qualified plan, with a worker opt- out. Workers are encouraged to participate in KiwiSaver through a series of incentives that include employer contributions, tax benefits, and a one-time government contribution. Workplace Access and Automatic Enrollment Employers are required to automatically enroll new workers ages 18 to 65 into a KiwiSaver plan. In addition, existing workers, as well as the self- employed and unemployed, can choose to opt into a KiwiSaver plan by notifying their employer or by contacting a provider directly. Employers can select a preferred provider for workers who do not choose their own, but workers have the right to change providers anytime. Following automatic enrollment, workers have between 2 and 8 weeks to opt out. Financial Incentives and Worker Contributions Participation in KiwiSaver offers eligible workers employer contributions, tax benefits, and a one-time government contribution, to complement workers’ own contributions. For workers who do not opt out, employers are generally required to contribute 3 percent of earnings. 
In addition, workers who contribute at least $1,043 New Zealand dollars (NZD) (about $663 USD) annually receive a $521 NZD (about $331 USD) tax credit. Until May 21, 2015, the government also made a one-time kick-start contribution of $1,000 NZD for new accounts. After this date, new members are no longer eligible for this payment. Workers generally contribute 3, 4, or 8 percent of earnings. After an initial 12-month period, workers have the option of a contributions holiday of 3 months to 5 years. As of June 2014, KiwiSaver had grown to include 2.35 million members. Of this total, approximately 61 percent chose to opt into KiwiSaver either through their employer or through a plan provider. The remaining 39 percent were automatically enrolled by their employer. Approximately 20 percent of enrolled workers have chosen to opt out of KiwiSaver.

Plan Providers
KiwiSaver service providers, such as banks, insurance companies, and investment firms, are registered to offer qualifying plans and deal directly with workers after the initial set-up process. If an employer does not have a preferred KiwiSaver provider and a worker does not select one, Inland Revenue, the government tax collection and social programs agency, allocates the worker to one of the nine government-appointed default service providers. All service providers are registered with and monitored by the Financial Markets Authority, the government regulator.

Investment Options
KiwiSaver service providers offer a range of products with different investment options and risk levels, with the default being a conservatively invested fund. Approximately 25 percent of KiwiSaver participants are in a default investment fund.

Fees
Default service providers are licensed by Inland Revenue and establish reasonable fees for participants by submitting a fee schedule as part of the application process. Inland Revenue must also approve any changes to the fee structure. In addition, the Commission for Financial Capability established Fund Finder, an online tool that allows participants to compare plan features and fee structures. Over half of KiwiSaver plans charge between 1 and 1.5 percent in fees—New Zealand uses a Total Expense Ratio tool to measure fees, which is a ratio of total fees to funds under management in percentage terms.

At a glance
To address the declining proportion of private sector workers participating in a workplace retirement plan—down from 47 percent in 2002 to about 32 percent in 2012—the United Kingdom government passed legislation in 2008 requiring employers to automatically enroll all eligible workers into a retirement plan. Employers can meet their obligation by enrolling workers in a plan that meets minimum standards or in the new National Employment Savings Trust (NEST) set up by the government.

Workplace Access and Automatic Enrollment
Employers are required to automatically enroll workers into a qualified workplace retirement plan if the workers are between age 22 and State Pension Age—which varies based on age, gender, years of employment, and national insurance status—and have earnings over 10,000 British pounds (GBP) (about $15,353 USD) for the 2015/2016 tax year. The U.K. government reviews the threshold annually and may adjust it. Once they have been automatically enrolled, workers have 1 month to opt out. Some workers earning less than 10,000 GBP, as well as those ages 16 to 21 or over the State Pension Age, have the option to opt in. Every 3 years, employers are required to automatically re-enroll workers who have previously opted out.
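A simplified sketch of these automatic enrollment rules appears below. It is illustrative only: State Pension Age varies by individual and so is treated as an input, the earnings threshold is the 2015/2016 figure cited above, and edge cases in the actual rules are not modeled.

```python
# Simplified illustration of the U.K. automatic enrollment rules described
# above; the earnings threshold is the 2015/2016 value, and State Pension
# Age is passed in because it varies by individual.
EARNINGS_THRESHOLD_GBP = 10_000

def enrollment_status(age: int, annual_earnings_gbp: float,
                      state_pension_age: int) -> str:
    """Classify a worker under the automatic enrollment rules."""
    if (22 <= age < state_pension_age
            and annual_earnings_gbp > EARNINGS_THRESHOLD_GBP):
        return "automatically enrolled (1 month to opt out)"
    if age >= 16:
        # Lower earners, workers ages 16 to 21, and those over State
        # Pension Age generally have the option to opt in.
        return "may opt in"
    return "not eligible"

print(enrollment_status(30, 18_000, state_pension_age=65))  # automatically enrolled
print(enrollment_status(19, 18_000, state_pension_age=65))  # may opt in
print(enrollment_status(30, 8_000, state_pension_age=65))   # may opt in
```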
Automatic enrollment is being phased in over the course of several years, beginning with the largest employers. Since implementation began, the rollout has reached over 50,000 employers and 5.3 million workers as of the end of June 2015. An additional 1.2 million employers are scheduled to implement automatic enrollment for approximately 4 million employees by April 2017. The rollout is scheduled to be completed by 2018, when it will cover employers of all sizes. Worker opt-out rates were approximately 9 to 10 percent in 2013 and 12 percent in 2014, below the initial government estimates of 33 percent.

Financial Incentives and Worker Contributions
For workers who do not opt out, the total minimum contribution from the employer, government, and worker will combine to reach 8 percent by 2018. Employers are required to contribute 1 percent of qualified earnings—a band of earnings between 5,824 GBP and 42,385 GBP (about $8,941 USD and $65,073 USD) as of 2015—gradually increasing to 3 percent by 2018. In addition, the U.K. government will contribute the equivalent of 1 percent of qualified earnings in the form of tax relief by that date. Workers currently pay 0.8 percent of qualified earnings, gradually increasing to 4 percent by 2018, through payroll deduction.

Plan Providers
Employers may offer a plan through a commercial provider, such as an insurance company, if it meets or exceeds certain legal criteria. If an employer does not offer a qualifying plan, it must enroll workers in NEST, which was established to act as a low-cost default provider. NEST has a public service obligation to accept any employer that wants to use it, including smaller employers that may not be able to find suitable commercial providers.

Investment Options
NEST offers a limited number of investment options to reduce complexity and fees. The default fund is a diversified target date fund, which is conservatively invested during the initial years of saving to avoid capital losses that could prompt workers to opt out. In addition to the default, NEST offers five other options with varying levels of risk.

Fees
NEST does not charge employers to use the program. Workers enrolled in NEST pay an annual management fee of 0.3 percent. A temporary additional fee of 1.8 percent on contributions is also deducted until NEST start-up costs are recovered. According to NEST officials, this is equivalent to a total fee of about 0.5 percent annually, which is comparable to fees charged by larger workplace retirement programs. From April 2015 onward, workers who enroll with non-NEST commercial providers pay an annual charge capped at 0.75 percent.

In this appendix we summarize the results of our analyses of factors affecting retirement program participation. We first looked at the likelihood of participating in retirement programs overall, or among all workers. We then examined, among all workers, the likelihood of their employers offering them retirement programs. Next, among workers whose employers offered programs, we looked at the likelihood of workers being eligible for them. Finally, we looked at the likelihood of participating in retirement programs among workers who were offered programs and eligible for them.
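These four likelihoods can be illustrated with a small worked example. The sketch below computes each one from a tiny synthetic, weighted worker file; the column names, values, and weights are invented for illustration and bear no relation to the actual SIPP variables.

```python
# Minimal sketch of the four likelihoods examined in this appendix,
# computed from a hypothetical worker-level file with survey weights.
import pandas as pd

df = pd.DataFrame({
    "offered":      [1, 1, 1, 0, 1, 0, 1, 1],  # employer offers a program
    "eligible":     [1, 1, 0, 0, 1, 0, 1, 0],  # worker is eligible for it
    "participates": [1, 0, 0, 0, 1, 0, 1, 0],
    "weight":       [1.0, 1.5, 0.8, 1.2, 1.0, 2.0, 0.9, 1.1],
})

def wmean(values, weights):
    """Weighted proportion, mirroring the use of SIPP population weights."""
    return (values * weights).sum() / weights.sum()

p_all = wmean(df["participates"], df["weight"])        # (1) among all workers
p_offer = wmean(df["offered"], df["weight"])           # (2) employer offers
off = df[df["offered"] == 1]
p_elig = wmean(off["eligible"], off["weight"])         # (3) eligible | offered
oe = off[off["eligible"] == 1]
p_part = wmean(oe["participates"], oe["weight"])       # (4) participates | both
print(p_all, p_offer, p_elig, p_part)
```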
In all of these analyses, we estimated how these different likelihoods were associated with different characteristics of the workers, including their income, occupation, education, age, gender, marital status, race/ethnicity, the size of the firm they worked for, whether they worked full-time or part-time, whether they worked for the full year or only part of the year, and whether they were or were not union members. For our analyses, we used publicly available data from the Survey of Income and Program Participation (SIPP) from 2012, as shown in tables 9-12 of this appendix. To correct for under-reporting of participation, we also used W-2 data, as described in appendix I. Results from the public use data and the W-2 data were very similar in the size and significance of the variables included in our analysis, as shown in table 13 at the end of this appendix. In all of the tables, the numbers and the estimates derived from them use weighted data to reflect population estimates, using weights provided by SIPP. Table 9 shows how various categories of workers differ in their likelihood (expressed both as percentages and as odds) of participating in retirement programs. In the first column of numbers in the table we show the number of workers in each group (or category) defined by the different factors. The next two columns of numbers reflect the percentages in each group that were and were not participating in retirement programs. The traditional way of comparing groups involves considering the difference in those percentages, and those differences are in many cases quite sizable. For example, only 14 percent of the workers in the lowest income quartile were participating in retirement programs, while 76 percent of the workers in the highest income quartile were participating. Only 18 percent of workers with less than a high school diploma, but 62 percent of workers with at least a bachelor's degree, were participating in retirement programs. And while only 23 percent of the workers in firms with fewer than 50 workers were participating in retirement programs, 60 percent of workers in firms with more than 1,000 workers were retirement program participants. Sizable differences also exist between most of the categories of workers defined by the other characteristics shown in the table. An alternative method of estimating the likelihood of participating in retirement programs, and the differences in those likelihoods between groups, involves calculating odds and odds ratios. The odds on participating (vs. not participating) in retirement programs are calculated by taking, overall or for any one group, the number (or percent) of workers participating in retirement programs and dividing it by the number (or percent) of workers not participating. Overall, using the percentages in the final row of table 9, the odds on participating in retirement programs in our (weighted) sample of workers are 45.4 ÷ 54.6 = 0.832, apart from rounding. While somewhat different and less traditional than percentages, the odds have a fairly direct and simple interpretation: in this case, they imply that overall there are 0.83 retirement program participants for every 1 non-participant, or 83 participants for every 100 non-participants. In the fourth column of numbers in table 9, we show the odds on participating for each subgroup defined by the different worker characteristics shown in the table.
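As a quick arithmetic check, the short sketch below converts participation percentages into odds exactly as described above; the small differences from the figures in table 9 reflect the rounding of the percentages. (Python is used here purely for illustration and is not part of the report's methodology.)

```python
# Worked example of the odds calculation described above:
# odds on participating = percent participating / percent not participating.

def odds(pct_participating: float) -> float:
    """Odds on participating, given the percent participating (0-100)."""
    return pct_participating / (100.0 - pct_participating)

print(round(odds(45.4), 3))  # 0.832 -- overall, all workers
print(round(odds(14.3), 3))  # 0.167 -- lowest income quartile
print(round(odds(76.1), 3))  # 3.184 -- highest income quartile (3.176 in
                             # table 9, which uses unrounded percentages)
```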
There we see, for example, that the odds on participating in retirement programs increase from 0.167 for workers in the lowest income quartile to 0.514 for workers in the second income quartile to 1.338 for workers in the third income quartile and, finally, to 3.176 for workers in the highest income quartile. The unadjusted odds ratios in the penultimate column of table 9 show how the odds on participating in retirement programs vary across the different subgroups. Where the factors or worker characteristics distinguish only two groups, such as union membership, we simply take the ratio of the odds for one group to the other (e.g., 2.103 ÷ 0.774 = 2.72), which implies that the odds on union members participating in retirement programs are 2.7 times higher than the odds for workers who are not union members. Where the factors involve more than two subgroups, we chose one subgroup as the referent category and calculated the ratios of the odds for the other subgroups relative to that one. To make comparisons across income categories, for example, the lowest income quartile was chosen as the referent category, and the ratios shown for the other subgroups (i.e., 0.514 ÷ 0.167 = 3.08; 1.338 ÷ 0.167 = 8.01; and 3.176 ÷ 0.167 = 19.01) indicate that workers in the second, third, and highest income quartiles have higher odds on participating than workers in the lowest quartile, by factors of 3, 8, and 19, respectively. The full set of unadjusted odds ratios shown in the table indicates that virtually all of the subgroups differ from one another, in most cases significantly, and in many cases by a substantively large amount. Workers in the broad category of occupations involving Management, Business, Science, and Arts, for example, were 6.5 times as likely to be participating in retirement plans (or have odds that are 6.5 times higher) as workers in Service occupations. Workers with bachelor's degrees were 2.7 times as likely as workers with less than a high school education to be participating in retirement programs. And, to offer a final example, Hispanics were less likely than white non-Hispanics to be participating in a retirement program by a factor of 0.37. These unadjusted odds ratios may seem to inflate the differences between groups, especially to those who are accustomed to comparing percentages. That is, while the odds ratio comparing workers in the highest and lowest income quartiles suggests a 19-fold difference between the two groups, there is only slightly more than a 5-fold difference between the two groups in the percentage participating (i.e., 76.1 ÷ 14.3 = 5.3). Focusing on the difference between groups in the percent participating, however, ignores the implicit difference between groups in the percent not participating, which differed by a factor of 23.9 ÷ 85.7 = 0.28. One advantage of the odds ratio (also referred to as the cross-product ratio) is that in estimating the ratios of participants to non-participants it makes fuller use of the data involved in the comparisons and considers both differences at once. In fact, the odds ratio we obtain in this case, equal to 19.01, could just as easily have been obtained by taking the ratio of these two factors (i.e., 5.3 ÷ 0.28 = 19.01, apart from rounding). Another advantage of odds ratios is that, unlike percentage differences, they can be adjusted using multivariate models (logistic regression models) so that they reflect the net effect of each variable, rather than the gross (or unadjusted) effect.
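The sketch below reproduces the unadjusted odds-ratio calculations just described, including the referent-category comparisons and the cross-product identity. It is purely illustrative and uses the rounded figures quoted above, so the results differ slightly from the table values computed on unrounded data.

```python
# Unadjusted odds ratios for the income quartiles, with the lowest
# quartile as the referent category (odds taken from table 9).
odds_by_quartile = {"lowest": 0.167, "second": 0.514,
                    "third": 1.338, "highest": 3.176}
referent = odds_by_quartile["lowest"]
for quartile, o in odds_by_quartile.items():
    print(quartile, round(o / referent, 2))  # 1.0, 3.08, 8.01, 19.02

# The same odds ratio emerges as a cross-product ratio of the
# underlying percentages, apart from rounding.
p_high, p_low = 76.1, 14.3  # percent participating
ratio_participating = p_high / p_low                      # ~5.3
ratio_not_participating = (100 - p_high) / (100 - p_low)  # ~0.28
print(round(ratio_participating / ratio_not_participating, 1))  # ~19.1
```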
The odds ratios in the final column of table 9 show how different the odds on participating are when we consider all of the variables in the table simultaneously. Because of the correlations among some of the variables in the table (like income and education), the effects of the different variables, or the differences between the categories of workers they define, are in many cases substantially attenuated (or smaller) when we look at them simultaneously than when we look at each individually. The odds ratio estimating the income difference just discussed, for example, is reduced when we estimate its effect net of the other variables, from 19.0 to 6.8. Even after adjusting the ratios to take account of the interrelatedness of many of the factors in the table, most of the subgroups compared remain significantly different from one another, and in many cases the differences are sizable. In addition to the income difference mentioned above we find, even after adjustment, that the likelihood of participating varies significantly for many of the variables we examined (see table 9). For example:
1. Workers in all other occupational categories have significantly higher odds on participating in retirement programs than workers in Service occupations, and the odds are nearly twice as high (OR = 1.95) for workers in Management, Business, Science, and Arts occupations as for those in Service occupations.
2. Workers with less than a high school diploma were less likely (by a factor of 0.6) than those with a high school diploma to be participating, though after adjustment those with some college or with a bachelor's degree are not statistically distinguishable from those with a high school diploma.
3. Workers in larger firms had higher odds on participating in retirement programs than workers in smaller firms. Workers in firms with 51 to 100 workers were about twice as likely (OR = 2.1) as workers in firms with 50 or fewer workers to participate, and workers in firms with more than 1,000 workers were about five times (OR = 4.9) as likely.
4. The youngest category of workers (ages 18-24) were only roughly one-third as likely (OR = 0.37) as workers 25-34 to be participating in retirement programs; workers 35-44 were not significantly different from workers 25-34; and workers 45-54 and 55-64 were both more likely than those 25-34 to be participating in retirement programs, in both cases by a factor of 1.4. Workers 65 and over had lower odds on participating than all groups except the very youngest, and odds that were lower than workers 25-34 by a factor of 0.68.
5. Additionally, full-time workers were 1.6 times more likely than part-time workers to be participating in retirement programs; full-year workers were 2.5 times more likely than part-year workers to be participating in retirement programs; and union workers were twice as likely (OR = 2.0) as non-union workers to be participating in retirement plans.
6. Finally, male workers were less likely than female workers to be participating in retirement programs, by a factor of 0.91; currently married workers were 1.2 times more likely than never married workers to be participating in retirement programs, while widowed, divorced, and separated workers were not significantly different from those who were never married; and Black, Hispanic, and Asian workers had lower odds on participating than White, non-Hispanic workers, by factors of 0.8, 0.6, and 0.7, respectively.
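The adjusted odds ratios discussed above come from multivariate logistic regression models. The sketch below shows one conventional way such adjusted odds ratios can be estimated with a weighted logistic regression; the tiny synthetic data frame, its column names, and the choice of statsmodels are all our assumptions for illustration, not GAO's actual variables, software, or code.

```python
# Minimal sketch of a weighted logistic regression yielding adjusted
# odds ratios; the data below are synthetic stand-ins for the SIPP file.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "participates":    rng.integers(0, 2, n),
    "income_quartile": rng.integers(1, 5, n),
    "firm_size":       rng.choice(["<=50", "51-100", ">1000"], n),
    "union_member":    rng.integers(0, 2, n),
    "wt":              rng.integers(1, 4, n),  # survey weights (integerized)
})

# The remaining covariates (occupation, education, age group, and so on)
# would enter the formula in the same way as those shown here.
model = smf.glm(
    "participates ~ C(income_quartile) + C(firm_size) + union_member",
    data=df, family=sm.families.Binomial(), freq_weights=df["wt"],
)
result = model.fit()

# Exponentiating a coefficient turns a log-odds difference into an odds
# ratio, adjusted for (net of) the other variables in the model.
print(np.exp(result.params))
```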
Some of these differences in the likelihood of participating in retirement programs are likely due to the fact that some categories of workers are more likely than others to work for employers that offer retirement programs. Table 10 shows that a great many categories of workers differ in terms of whether their employer offers a program. As in the table above, the unadjusted odds ratios indicating the differences between groups in the odds on their employers offering a program (in the penultimate column of the table) tend to be somewhat larger than the adjusted odds ratios obtained from multivariate models in which all of the different factors are considered simultaneously (in the final column). But even the adjusted odds ratios reveal some sizable and significant differences, including the following:
1. Workers with higher incomes are more likely to work for employers that offer retirement programs. Workers in the 2nd, 3rd, and highest income quartiles have higher odds on working for employers that offer retirement programs than workers in the lowest income quartile, by factors of 1.7, 2.7, and 3.9, respectively.
2. Workers in all occupational categories except for Natural Resources, Construction, and Maintenance Occupations have significantly higher odds on working for employers that offer retirement programs than workers in Service occupations, and the odds are nearly twice as high (OR = 2.0) for workers in Management, Business, Science, and Arts occupations as for those in Service occupations.
3. Workers with less than a high school education were less likely to work for employers that offer retirement programs than those with a high school diploma, by a factor of 0.73, while workers with some college or a bachelor's degree were more likely (in both cases) by a factor of roughly 1.2.
4. Workers in larger firms had substantially higher odds on working for employers that offer retirement programs than workers in smaller firms. Workers in firms with 51 to 100 workers were about three times as likely (OR = 2.95) as workers in firms with 50 or fewer workers to work for employers that offer retirement programs; workers in firms with 101 to 500 workers and with 501 to 1,000 workers were 5 and 6 times as likely, respectively; and workers in firms with more than 1,000 workers were about 9 times (OR = 9.1) as likely to work for employers that offer retirement programs.
5. The youngest category of workers (ages 18-24) were only roughly three-fourths as likely (OR = 0.76) as workers 25-34 to be working for employers that offer retirement programs; workers 35-44 were also somewhat less likely than workers 25-34 to work for employers that offer retirement programs (OR = 0.88); and workers 45-54 and 55-64 did not significantly differ from those 25-34 in working for employers that offer retirement programs. Workers 65 and over had lower odds on working for employers that offer retirement programs than all groups, and odds that were lower than workers 18-24 by a factor of 0.62.
6. Additionally, full-time workers were 1.2 times more likely than part-time workers to be working for employers that offer retirement programs; full-year workers were 1.5 times more likely than part-year workers to be working for employers that offer retirement programs; and union workers were twice as likely (OR = 2.0) as non-union workers to be working for employers that offer retirement programs.
7. Finally, male workers and female workers did not significantly differ in their chance of working for employers that offer retirement programs (OR = 0.96); currently married workers were slightly but significantly (OR = 1.1) more likely than never married workers to be working for employers that offer retirement programs, while widowed, divorced, and separated workers were not significantly different from those who were never married; and Black, Hispanic, and Asian workers had lower odds on working for employers that offer retirement programs than White, non-Hispanic workers, by factors of 0.8, 0.6, and 0.6, respectively.

Some of these differences in the likelihood of participating in retirement programs are also likely due to the fact that some categories of workers are more likely than others to be eligible for the retirement programs that their employers offer. Table 11 shows that a number of categories of workers differ in terms of whether they are eligible for the retirement programs their employers offer, though fewer factors are significantly associated with eligibility than with participation. The differences that do exist, like those described above, are in virtually all cases somewhat smaller when we estimate them simultaneously than when we estimate them separately; nonetheless, the odds ratios from the multivariate models indicate that:
1. Workers with higher incomes are more likely to be eligible for the retirement programs their employers offer than workers with lower incomes. Workers in the 2nd, 3rd, and highest income quartiles have higher odds on being eligible than workers in the lowest quartile, by factors of 2.0, 4.4, and 7.5, respectively.
2. Occupation was not significantly associated with whether workers were eligible for the retirement programs their employers offer. Education was significantly related to eligibility, however; workers with some college and with a college degree were less likely than those with a high school degree to be eligible, by factors of 0.8 and 0.7, respectively.
3. Workers in mid-size firms were not significantly different from workers in the smallest firms in their chance of being eligible for the retirement programs their employers offer, though workers in firms with more than 1,000 workers were 1.3 times as likely to be eligible as workers in firms with 50 or fewer workers.
4. The youngest category of workers (ages 18-24) were less than one-half as likely (OR = 0.44) as workers 25-34 to be eligible for retirement programs their employers offer, while workers 35-44, 45-54, and 55-64 were more likely to be eligible than those 25-34, by factors of 1.4, 1.6, and 1.6, respectively. Workers 65 and over were less likely to be eligible than workers 18-24, by a factor of 0.73.
5. Full-time workers were 2.6 times more likely than part-time workers to be eligible for the retirement programs their employers offer; full-year workers were 3.1 times more likely than part-year workers to be eligible; and union workers were 1.4 times as likely as non-union workers to be eligible for the retirement programs their employers offer.
6. Finally, male workers and female workers did not significantly differ in their chance of being eligible for the retirement programs their employers offer (OR = 1.1); currently married workers were significantly more likely (OR = 1.4) than never married workers to be eligible for the retirement programs their employers offer, while widowed, divorced, and separated workers were not significantly different from those who were never married; and race and ethnicity were not significantly associated with eligibility.

Given these results indicating that participation in retirement programs is partly the result of whether programs are offered, and whether workers are eligible for them, in table 12 we show how various categories of workers differ in their likelihood of participating in retirement programs when we restrict our attention to workers whose employers offer programs for which they are eligible. While the differences are in most cases smaller than they appeared when we looked at all workers, regardless of whether they worked for companies that offered retirement programs and whether they were eligible for them, many still remain sizable and statistically significant. Focusing again on the multivariate odds ratios in the final column of the table, the results are as follows among those who were eligible for the programs their employers offered:
1. Workers with higher incomes are more likely to participate than workers with lower incomes. Workers in the 2nd, 3rd, and highest income quartiles have higher odds on participating in retirement programs than workers in the lowest income quartile, by factors of 1.2, 2.2, and 4.4, respectively.
2. Workers in all occupational categories except for Production, Transportation, and Material Moving Occupations have significantly higher odds on participating in retirement programs than workers in Service occupations, by factors ranging from 1.3 to 1.5.
3. Workers with less than a high school diploma were less likely (by a factor of 0.7) than those with a high school education to be participating, though after adjustment those with some college or with a bachelor's degree are not significantly different from those with a high school diploma.
4. Workers in firms with 51 to 100 workers were somewhat less likely (OR = 0.75) than workers in firms with 50 or fewer workers to participate, while workers in each of the larger firm categories (with more than 100 workers) were not significantly different from workers in the smallest firms.
5. The youngest category of workers (ages 18-24) were only roughly one-half as likely (OR = 0.46) as workers 25-34 to be participating in retirement programs; workers 35-44, 45-54, and 55-64 were more likely than those 25-34 to be participating in retirement programs, by factors of 1.3, 1.7, and 1.9, respectively. Workers 65 and over had odds of participating that were not statistically distinguishable from the odds for workers 25-34 (OR = 0.9).
6. Full-time workers were slightly more likely (OR = 1.1) than part-time workers to be participating in retirement programs; full-year workers were 1.3 times more likely than part-year workers to be participating in retirement programs, though the result was not statistically significant; and union workers were 1.7 times as likely as non-union workers to be participating in retirement programs.
7. Finally, male workers were less likely than female workers to be participating in retirement programs, by a factor of 0.8; marital status was not statistically associated with participation; and Black and Hispanic workers had lower odds on participating than White, non-Hispanic workers, by factors of roughly 0.7 in both cases. Asians and other non-Hispanics were not significantly different from Whites.

Ignoring for the moment the numbers in parentheses, table 13 summarizes how the differences in participating across groups change when we look at the different group characteristics (1) one at a time among all workers, (2) all at once among all workers, and (3) all at once among workers who are eligible for the retirement programs offered by their employers. Some of the differences that appear sizable and significant when we look at them in isolation (column 1) diminish in size and become insignificant when all of the different factors are considered simultaneously (column 2). This is true of the differences between 1) workers with more than a high school education vs. workers with only a high school diploma, 2) workers ages 35-44 vs. workers ages 25-34, 3) workers who are widowed, divorced, or separated vs. workers who were never married, and 4) other non-Hispanic workers vs. white non-Hispanic workers. Further, some of the differences that remain sizable and significant even when they are considered simultaneously (column 2) diminish and become insignificant when we restrict the sample to workers who were offered programs and were eligible for them (column 3). Such is the case with the differences between 1) workers in Production, Transportation, and Material Moving Occupations vs. workers in Service Occupations, 2) workers in each of the larger firm categories with more than 100 workers vs. those in firms with 50 or fewer workers, 3) workers 65 and older vs. workers 25-34, 4) full-year vs. part-year workers, 5) married vs. never married workers, and 6) Asian non-Hispanic vs. white non-Hispanic workers. Most of the differences that remain significant after taking account of eligibility, which are noted in the bullets above associated with table 12, are smaller than they appeared before taking account of eligibility, though some of the age differences are exceptions. The factors that have the most pronounced effects when they are considered jointly and restricted to eligible workers are income, occupation, age, and union membership. The numbers in parentheses in the table show the coefficients from the same bivariate and multivariate models for the same subgroups that we obtain when we use the “corrected” data, which combine W-2 information from the Census Bureau with the self-reported data from SIPP. In virtually all cases the coefficients are very similar. Only in a few instances are the estimated odds ratios significant in one set of results but not in the other: these involve the adjusted education effect for all workers and, among eligible workers, the adjusted differences between part-time and full-time workers and between workers in firms with 1,000 or more workers and those in firms with fewer than 50 workers. In virtually all other instances, the effects are similar in both size and significance.
[Table 13: Odds ratios on participating in retirement programs, by worker characteristic. For each characteristic, the table shows (1) unadjusted odds ratios for all workers, (2) adjusted odds ratios for all workers, and (3) adjusted odds ratios for workers eligible for the programs their employers offer; estimates based on the corrected data combining W-2 information with SIPP self-reports appear in parentheses.]

In addition to the contact named above, Kimberly Granger (Assistant Director), Sharon Hermes and Jessica Gray (Analysts-in-Charge), Melinda Bowman, Gustavo Fernandez, Grant Mallie, Douglas Sloane, Walter Vance, and Seyda Wentworth made key contributions to this report. Also contributing to this report were David Chrisinger, Peter Del Toro, Cynae Derose, Helen Desaulniers, Jennifer Gregory, Stephen Komadina, Kathy Leslie, Andrea Levine, Ashley McCall, Sheila McCoy, Ty Mitchell, Matthew Nattinger, Drew Nelson, Mimi Nguyen, Susan Offutt, Mark Ramage, Margie Shields, Joseph Silvestri, Jeff Tessin, Kimberly Walton, Margaret Weber, Craig Winslow, and Paul Wright.

Retirement Security: Most Households Approaching Retirement Have Low Savings. GAO-15-419. Washington, D.C.: May 12, 2015.
Automatic IRAs: Lower-Earning Households Could Realize Increases in Retirement Income. GAO-13-699. Washington, D.C.: Aug. 23, 2013.
Retirement Security: Women Still Face Challenges. GAO-12-699. Washington, D.C.: July 19, 2012.
Private Sector Pensions: Federal Agencies Should Collect Data and Coordinate Oversight of Multiple Employer Plans. GAO-12-665. Washington, D.C.: Sept. 13, 2012.
Private Pensions: Better Agency Coordination Could Help Small Employers Address Challenges to Plan Sponsorship. GAO-12-326. Washington, D.C.: March 5, 2012.
Private Pensions: Some Key Features Lead to An Uneven Distribution of Benefits. GAO-11-333. Washington, D.C.: March 30, 2011.
Retirement Savings: Automatic Enrollment Shows Promise for Some Workers, but Proposals to Broaden Retirement Savings for Other Workers Could Face Challenges. GAO-10-31.
Washington, D.C.: Oct. 23, 2009.
Private Pensions: Alternative Approaches Could Address Retirement Risks Faced by Workers but Pose Trade-offs. GAO-09-642. Washington, D.C.: July 24, 2009.
Individual Retirement Accounts: Government Actions Could Encourage More Employers to Offer IRAs to Employees. GAO-08-590. Washington, D.C.: June 4, 2008.
Private Pensions: Low Defined Contribution Plan Savings May Pose Challenges to Retirement Security, Especially for Many Low-Income Workers. GAO-08-8. Washington, D.C.: Nov. 29, 2007.
Millions of U.S. workers have little or no savings for retirement, potentially adding to future strains on state and national safety net programs. In addition to federal efforts, a growing number of states have proposed efforts to expand coverage in private sector workplace retirement savings programs. Other countries have implemented similar efforts. GAO was asked to study these state and international efforts. GAO examined: (1) recent estimates of coverage, including access and participation, as well as characteristics of workers who lack coverage; (2) strategies used by states and other countries to expand coverage; and (3) challenges states could face given existing federal law and regulations. GAO primarily used SIPP data from 2012 (the most recent available). GAO also interviewed federal officials, national industry stakeholders, and officials and stakeholders in six states (California, Illinois, Maryland, Massachusetts, Washington, and West Virginia) and three countries (Canada, New Zealand, and the United Kingdom) selected based on the range of strategies used in efforts to increase coverage and recommendations from knowledgeable stakeholders. About half of private sector workers in the United States—especially those who are low-income or employed by small firms—lack coverage from a workplace retirement savings program, primarily because they do not have access. According to GAO's analysis of 2012 Survey of Income and Program Participation (SIPP) data, about 45 percent of private sector U.S. workers participated in a workplace retirement savings program—an estimate that is consistent with prior GAO work and other research. Using tax data to correct for under-reporting raised the share of workers participating to 54 percent, but the corrected estimate still indicates that many workers lack coverage. Among those not participating, the vast majority—84 percent—lacked access because they either worked for employers that did not offer programs or were not eligible for the programs that were offered, for example, because they were new employees or in specific jobs that were excluded from the program. In particular, lower-income workers and those employed by smaller firms were much less likely to have access to programs. However, among those who had access, the majority of these workers participated. Key strategies to expand private sector coverage identified in the states and countries GAO reviewed include encouraging or requiring workplace access, automatic enrollment, financial incentives, and program simplification. For example, pending implementation, programs in two of the states GAO studied—California and Illinois—would require certain employers to automatically enroll workers in a state-run program, though workers could choose to opt out. In the countries GAO studied, combining workplace access with automatic enrollment and financial incentives—tax preferences or employer contributions—has helped increase participation. Moreover, states and countries have tried to simplify program designs to (1) limit the responsibility and cost for employers and (2) reduce complexity, cost, and risk for workers. For example, some states intend to not only reduce burdens for employers by selecting and monitoring providers, but also reduce complexity for workers by limiting the number of investment options.
State and national stakeholders reported potential challenges with uncertainty created by the Employee Retirement Income Security Act of 1974 (ERISA) and agency regulations that could delay or deter state efforts to expand coverage. Generally, ERISA preempts, or invalidates, any state law relating to “employee benefit plans” for private sector workers, but different areas of uncertainty arise based on the details of each state effort. For example, four of the six states GAO reviewed intend to create payroll deduction individual retirement account (IRA) programs that would not be considered employee benefit plans. However, due to uncertainty created by ERISA, it is unclear whether a state can offer such programs or whether some of the program features would lead a court to find that they are, or relate to, employee benefit plans. Stakeholders also noted uncertainty caused by regulations from the Departments of Labor (DOL) and the Treasury that were meant to assist workers and employers. For example, DOL's regulation on payroll deduction IRAs was written before these state efforts were proposed and omits detail that, if included, could help reduce uncertainty. Given these uncertainties, states may face litigation, and stakeholders noted that state programs could lose tax preferences if they were ruled preempted by ERISA. GAO suggests that Congress consider providing states limited flexibility regarding ERISA preemption to expand private sector coverage. Agency actions should also be taken to address uncertainty created by existing regulations. Agencies generally agreed with GAO's recommendation. DOL plans to issue a proposed rule on state programs by the end of 2015.
The concept of an architecture to describe an enterprise first emerged in the mid-1980s, and over the years, various frameworks for defining the content of enterprise architectures have been published. Our work in the early 1990s identified architectures as critical success factors in allowing organizations to effectively apply IT to meet mission goals. Since then, we have worked with the Congress, OMB, and the federal Chief Information Officers (CIO) Council to promote the importance of architectures and assist agencies in developing, maintaining, and using them. In our reviews of selected agency IT management practices and major systems modernization programs, we have continued to identify the lack of an architecture as a major management weakness, and we have made recommendations to address this important area. In simple terms, an enterprise can be viewed as any purposeful activity, and an architecture can be characterized as the structure (or structural description) of any activity. Building on this, we can view enterprise architectures as systematically derived and captured structural descriptions—in useful models, diagrams, and narrative—of the mode of operation for a given enterprise, which can be either a single organization or a functional or mission area that transcends more than one organizational boundary (e.g., financial management, homeland security). The architecture describes the enterprise’s operations in both logical terms (such as interrelated business processes and business rules, information needs and flows, and work locations and users) and technical terms (such as hardware, software, data, communications, and security attributes and performance standards). It provides these perspectives both for the enterprise’s current (or “as-is”) environment and for its target (or “to-be”) environment, as well as a transition plan for moving from the “as-is” to the “to-be” environment. The importance of enterprise architectures is a basic tenet of IT management, and their effective use is a recognized hallmark of successful public and private organizations. For over a decade, we have promoted the use of architectures, recognizing them as a crucial means to a challenging goal: that is, agency operational structures that are optimally defined, in terms of both business and technology. The alternative, as our work has shown, is perpetuation of the kinds of operational environments that saddle most agencies today, in which the lack of integration among business operations and the IT resources that support them leads to systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. Managed properly, an enterprise architecture can clarify and help optimize the interdependencies and relationships among an organization’s business operations and the underlying IT infrastructure and applications that support these operations. Employed in concert with other important IT management controls (such as portfolio-based capital planning and investment control practices), architectures can greatly increase the chances that organizations’ operational and IT environments will be configured so as to optimize mission performance. Enterprise architectures are integral to managing large-scale programs as well as initiatives that span several agencies, such as those currently being undertaken to support the electronic government (e-government) efforts led by OMB. 
During the mid-1980s, John Zachman, widely recognized as a leader in the field of enterprise architecture, identified the need to use a logical construction blueprint (i.e., an architecture) for defining and controlling the integration of systems and their components. Accordingly, Zachman developed a structure or framework for defining and capturing an architecture, which provides for six “windows” from which to view the enterprise. Zachman also proposed six abstractions or models associated with each of these perspectives. Zachman's framework provides a way to identify and describe an entity's existing and planned component parts, and the relationships between those parts, before the entity begins the costly and time-consuming efforts associated with developing or transforming itself. Since Zachman introduced his framework, a number of frameworks have emerged within the federal government, beginning with the publication of the National Institute of Standards and Technology (NIST) framework in 1989. Since that time, other federal entities have issued enterprise architecture frameworks, including the Department of Defense (DOD) and the Department of the Treasury. In September 1999, the federal CIO Council published the Federal Enterprise Architecture Framework (FEAF), which was intended to provide federal agencies with a common construct for their architectures, thereby facilitating the coordination of common business processes, technology insertion, information flows, and system investments among federal agencies. The FEAF describes an approach, including models and definitions, for developing and documenting architecture descriptions for multiorganizational functional segments of the federal government. More recently, OMB established the Federal Enterprise Architecture Program Management Office to develop a Federal Enterprise Architecture (FEA) according to a collection of five “reference models,” which are intended to facilitate governmentwide improvement through cross-agency analysis and the identification of duplicative investments, gaps, and opportunities for collaboration, interoperability, and integration within and across government agencies. The FEA reference models are summarized in table 1. Although these post-Zachman frameworks differ in their nomenclatures and modeling approaches, each consistently provides for defining an enterprise's operations in both logical terms and technical terms, provides for defining these perspectives for the enterprise's current and target environments, and calls for a transition plan between the two. Several laws and regulations have established requirements and guidance, respectively, for agencies' management of architectures, beginning with the Clinger-Cohen Act in 1996, which directs the CIOs of major departments and agencies to develop, maintain, and facilitate the implementation of IT architectures as a means of integrating agency goals and business processes with IT. OMB Circular A-130, which implements the Clinger-Cohen Act, requires that agencies document and submit their initial enterprise architectures to OMB and that agencies submit updates to OMB when significant changes to their enterprise architectures occur. The circular also directs the OMB Director to use various kinds of reviews to evaluate the adequacy and efficiency of each agency's compliance with the circular.
OMB was given explicit responsibility for overseeing government enterprise architectures by the E-Government Act of 2002, which established the Office of Electronic Government within OMB. This act gives OMB the responsibility for facilitating the development of enterprise architectures within and across agencies and supporting improvements in government operations through the use of IT. We began reviewing federal agencies’ use of architectures in 1994, initially focusing on those agencies that were pursuing major systems modernization programs that were high risk. These included the National Weather Service systems modernization, the Federal Aviation Administration air traffic control modernization, and the Internal Revenue Service (IRS) tax systems modernization. Generally, we reported that these agencies’ enterprise architectures were incomplete, and we made recommendations that they develop and implement complete enterprise architectures to guide their modernization efforts. Since then, we have reviewed architecture management at other federal agencies, including the Department of Education, the former Customs Service, the former Immigration and Naturalization Service, and the Centers for Medicare and Medicaid Services. We have also reviewed the use of enterprise architectures for critical agency functional areas, such as the integration and sharing of terrorist watch lists across key federal departments, and the logistics management area within DOD. These reviews have continued to identify the absence of complete and enforced enterprise architectures, which in turn has led to agency business operations, systems, and data that are not integrated (“stovepiped”), duplicative, and incompatible. These conditions have either prevented agencies from sharing data or forced them to depend on expensive, custom-developed interface systems to do so. In 2002, we published Version 1.0 of our Enterprise Architecture Management Maturity Framework (EAMMF) to provide federal agencies with a common benchmarking tool for planning and measuring their enterprise architecture efforts, as well as to provide OMB with a means for doing the same governmentwide. This framework is an extension of A Practical Guide to Federal Enterprise Architecture, Version 1.0, published by the CIO Council. The framework arranges core elements from the practical guide into a matrix of five hierarchical stages and four critical success attributes; that is, each core element appears at a particular stage of maturity, and it is also associated with one of the critical success attributes. In April 2003, we published Version 1.1 of this framework, which reflects changes and additions that are based on comments we received on the initial version. The EAMMF is made up of five stages of maturity, each of which includes an associated set of elements, along with all of the elements of the previous stages. Table 2 shows these stages, followed by the description of each as contained in Version 1.0 of our framework. Stage 1: Creating EA awareness. Agencies at this stage are characterized either by no plans to develop and use an enterprise architecture, or by plans and actions that do not yet demonstrate an awareness of the value of having and using one. Although Stage 1 agencies may have initiated some enterprise architecture core elements, these agencies’ efforts are ad hoc and unstructured, and they do not provide the management foundation that is necessary for successful enterprise architecture development. 
Stage 2: Building the EA management foundation. The focus at Stage 2 is on assignment of roles and responsibilities and establishment of plans for developing enterprise architecture products. Specifically, a Stage 2 agency has designated a chief architect and established and staffed a program office that is responsible for enterprise architecture development. Further, a steering committee or group that has responsibility for directing and overseeing the development has been established, and the membership of the steering committee includes business and IT representatives. At Stage 2, the agency either has plans for developing or has begun development of at least some of the necessary enterprise architecture products. This stage also requires the agency to have selected both a framework that will be the basis for the nature and content of the specific products it plans to develop and an automated tool to help in the development. Stage 3: Developing architecture products. At Stage 3, an agency focuses on actual development of enterprise architecture products. The agency has defined the scope of its enterprise architecture as encompassing the entire enterprise, whether an organization or a functional area, and it has a written and approved policy demonstrating institutional commitment. Although the products may not yet be complete, they are intended to describe the agency in terms of business, data, applications, and technology. Further, the products are to describe the current and future states and the sequencing plan for transitioning from current to future state. As the architecture products are being developed, they are to be subject to configuration control. Stage 4: Completing EA products. An agency at Stage 4 has complete and approved enterprise architecture products that it can use to help select and control its portfolio of IT investments. The complete products describe the organization in terms of business, data, applications, and technology. Also, the products are complete in that they describe the agency’s current and future states and the transition plan for sequencing from the current state to the future state. Further, the agency’s CIO has approved the enterprise architecture, and the agency has a written policy requiring that IT investments comply with the enterprise architecture. Stage 5: Leveraging the EA to manage change. At Stage 5, an agency is able to evolve the enterprise architecture products according to a written and approved policy for maintaining the architecture. Also at this stage, the steering committee, investment review board, or agency head approves the enterprise architecture. Finally, the agency has incorporated the enterprise architecture into its corporate decision making, and it has established and is using metrics to measure the effectiveness of its enterprise architecture. In addition to the maturity stages, each core element is also associated with attributes that are critical to the successful performance of any management function (see table 3). The critical success attributes are identical in Versions 1.0 and 1.1 of our framework. Attribute 1: Demonstrates commitment. Because the enterprise architecture is a corporate asset for systematically managing institutional change, the support and sponsorship of the head of the enterprise are essential to the success of the architecture effort. 
An approved enterprise policy statement provides such support and sponsorship, promoting institutional “buy-in” and encouraging resource commitment from participating components. Equally important in demonstrating commitment is vesting ownership of the architecture with an executive body that collectively owns the enterprise. Attribute 2: Provides capability to meet commitment. The success of the enterprise architecture effort depends largely on the organization's capacity to develop, maintain, and implement the enterprise architecture. Consistent with any large IT project, these capabilities include providing adequate resources (i.e., people, processes, and technology); defining clear roles and responsibilities; and defining and implementing organizational structures and process management controls that promote accountability and effective project execution. Attribute 3: Demonstrates satisfaction of commitment. Satisfaction of the organization's commitment to develop, maintain, and implement an enterprise architecture is demonstrated by the production of artifacts (e.g., the plans and products). Such artifacts demonstrate “follow through”—actual enterprise architecture production. Satisfaction of commitment is further demonstrated by senior leadership approval of enterprise architecture documents and artifacts; such approval communicates institutional endorsement and ownership of the architecture and the change that it is intended to drive. Attribute 4: Verifies satisfaction of commitment. This attribute focuses on measuring and disclosing the extent to which efforts to develop, maintain, and implement the enterprise architecture have fulfilled stated goals or commitments. Measuring such performance allows for tracking progress that has been made toward stated goals, allows the appropriate actions to be taken when performance deviates significantly from goals, and creates incentives to influence both institutional and individual behaviors. Collectively, these attributes form the basis by which an organization can institutionalize management of any given function or program, such as enterprise architecture management. Within each stage, each critical success attribute includes between one and four core elements, which are descriptions of a practice or condition that is needed for effective enterprise architecture management. On the basis of the implicit dependencies among the core elements, the EAMMF associates each element with one of five hierarchical management stages, referred to as maturity stages. Each stage reflects the collection of enterprise architecture management practices and conditions (i.e., core elements) that are being undertaken by an enterprise at a given maturity level. Figure 1 is a summary of Version 1.0 of the framework, showing the key elements associated with the stages and attributes previously described. Version 1.1 of this framework was released in April 2003. Like the initial version, Version 1.1 is based on the CIO Council guidance and augmented by our research experience in reviewing architecture programs. Changes and additions to the framework were also based on comments received on the initial version. As a comparison between the two frameworks shows, a number of new elements have been added to Version 1.1. Figure 2 shows a summary of the new framework, Version 1.1.
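Because each EAMMF stage subsumes the core elements of all earlier stages, an agency's maturity stage can be read as the highest stage for which every core element, including those of all preceding stages, is satisfied. The sketch below illustrates that cumulative logic; the element names are illustrative placeholders, not the framework's actual core elements.

```python
# Illustrative sketch of the EAMMF's cumulative staging logic: an agency
# satisfies a stage only if it meets that stage's core elements and all
# elements of every earlier stage. Element names are placeholders only.
CORE_ELEMENTS = {
    2: {"chief architect designated", "program office established"},
    3: {"scope defined", "written EA policy"},
    4: {"products complete and approved"},
    5: {"EA used in decision making", "metrics in use"},
}

def maturity_stage(satisfied: set) -> int:
    """Highest stage whose elements, and all earlier elements, are met."""
    stage = 1  # Stage 1 (awareness) requires no core elements
    for s in sorted(CORE_ELEMENTS):
        if CORE_ELEMENTS[s] <= satisfied:  # all of this stage's elements met
            stage = s
        else:
            break  # a gap at this stage caps the agency here
    return stage

agency = {"chief architect designated", "program office established",
          "scope defined"}
print(maturity_stage(agency))  # 2 -- Stage 3 is only partially satisfied
```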
The stages and attributes remain the same as with Version 1.0, although the descriptions of the stages are updated in Version 1.1 to reflect the new elements in the framework, as follows: Stage 1: Creating EA awareness. As with Version 1.0, at Stage 1, either an organization does not have plans to develop and use an architecture, or it has plans that do not demonstrate an awareness of the value of having and using an architecture. Although Stage 1 agencies may have initiated some enterprise architecture activity, these agencies' efforts are ad hoc and unstructured, lack institutional leadership and direction, and do not provide the management foundation that is necessary for successful enterprise architecture development as defined in Stage 2. Stage 2: Building the EA management foundation. An organization at Stage 2 recognizes that the enterprise architecture is a corporate asset by vesting accountability for it in an executive body that represents the entire enterprise. At this stage, an organization assigns enterprise architecture management roles and responsibilities and establishes plans for developing enterprise architecture products and for measuring program progress and product quality. An organization at this stage also commits the necessary resources for developing an architecture—people, processes, and tools. Specifically, a Stage 2 organization has designated a chief architect and established and staffed a program office that is responsible for enterprise architecture development and maintenance. Further, it has established a committee or group that has responsibility for enterprise architecture governance (i.e., directing, overseeing, and approving architecture development and maintenance). The membership of this committee or group has enterprisewide representation. At Stage 2, the organization either has plans for developing or has started developing at least some enterprise architecture products, and it has fostered an enterprisewide awareness of the value of enterprise architecture and its intended use in managing its IT investments. The organization has also selected a framework and a methodology that will be the basis for developing the enterprise architecture products and has selected a tool for automating these activities. Stage 3: Developing the EA. An organization at Stage 3 focuses on developing architecture products according to the selected framework, methodology, tool, and established management plans. Roles and responsibilities assigned in the previous stage are in place, and resources are being applied to develop actual enterprise architecture products. At this stage, the scope of the architecture has been defined to encompass the entire enterprise, whether an organization or a functional area. Although the products may not be complete, they are intended to describe the organization in terms of business, performance, information/data, service/application, and technology (including security explicitly in each), as provided for in the framework, methodology, tool, and management plans. Further, the products are to describe the current (“as-is”) and future (“to-be”) states and the plan for transitioning from the current to the future state (the sequencing plan). As the products are developed and evolve, they are subject to configuration management.
Further, through the established enterprise architecture management foundation, the organization is tracking and measuring its progress against plans; identifying and addressing variances, as appropriate; and then reporting on its progress. Stage 4: Completing the EA. An organization at Stage 4 has completed its enterprise architecture products, meaning that the products have been approved by the enterprise architecture steering committee (established in Stage 2) or an investment review board, and by the CIO. The completed products collectively describe the enterprise in terms of business, performance, information/data, service/application, and technology for both its current and future operating states, and the products include a sequencing plan for transitioning from the current to the future state. Further, an independent agent has assessed the quality (i.e., completeness and accuracy) of the enterprise architecture products. Additionally, evolution of the approved products is governed by a written enterprise architecture maintenance policy that is approved by the head of the organization. Stage 5: Leveraging the EA to manage change. An organization at Stage 5 has secured senior leadership approval of the enterprise architecture products and a written institutional policy stating that IT investments must comply with the architecture, unless granted an explicit compliance waiver. Further, decision makers are using the architecture to identify and address ongoing and proposed IT investments that are conflicting, overlapping, not strategically linked, or redundant. As a result, Stage 5 entities avoid unwarranted overlap across investments and ensure maximum systems interoperability, which in turn ensures the selection and funding of IT investments with manageable risks and returns. Also, at Stage 5, the organization tracks and measures enterprise architecture benefits or return on investment, and adjustments are continuously made to both the enterprise architecture management process and the enterprise architecture products. Overall, Version 1.1 is more demanding (i.e., sets a higher standard) than Version 1.0 because Version 1.1 adds important content, clarifies existing content, and links the EAMMF framework to related IT management guidance, such as our IT investment management framework. Key differences in Version 1.1 of the framework appear first in Stage 2 and affect later stages either explicitly or implicitly. That is, some planning elements associated with Stage 2 now propagate explicitly through later stages as plans are executed and architecture products are developed, completed, and implemented. For example: Version 1.1 includes “performance” among the models that are needed to describe the “as-is” and “to-be” environments; these models are introduced into the planning elements in Stage 2 and built upon as plans are executed: that is, as architecture products are developed and completed in Stages 3 and 4, respectively. Version 1.1 explicitly recognizes the need to address security in the descriptions of the “as-is” and “to-be” environments; this element is introduced in Stage 2 and reiterated in Stages 3 and 4. Version 1.1 introduces the need to plan for metrics in Stage 2 and to measure different aspects of enterprise architecture development, quality, and use in Stages 3, 4, and 5. Other differences introduced in Version 1.1 affect later stages implicitly, since each stage includes all elements of previous stages. 
For example, in Stage 2, an element has been added that recognizes the need for adequate resources (people, processes, and technology). This element appears in Stage 2 explicitly, but it is included in later stages implicitly. Stage 4 now includes an element requiring that enterprise architecture products and management processes undergo independent verification and validation; this element continues in Stage 5. In addition, two core elements, both in Stage 2, have been altered from Version 1.0, as follows: Enterprise architecture maintenance, in addition to development, is now included among the responsibilities of the program office. The use of an enterprise architecture methodology is added to the use of a framework and automated tool in developing the architecture. Last, the sequence of two elements (the policies on maintenance and on IT investment compliance with the architecture) is reversed in Version 1.1. That is, maintenance policy is now associated with Stage 4 and investment compliance with Stage 5. This reordering reflects greater alignment of these elements with the definitions of their respective framework stages. Finally, several new elements were added to Stage 5 that provide for maximizing the value and use of the enterprise architecture by keeping it current and using it to manage change (including the existence of a process to formally manage enterprise architecture change, the enterprise architecture being an integral component of the IT investment management process, the periodic updating of enterprise architecture products, and the compliance of IT investments with the enterprise architecture). These and the other changes are summarized in table 4. We first surveyed enterprise architecture management maturity across the federal government in 2001, and we reported in February 2002 that about 52 percent of federal agencies reported having at least the management foundation that is needed to begin successfully developing, implementing, and maintaining an enterprise architecture, and that about 48 percent of agencies had not yet advanced to that basic stage of maturity. At the other extreme, about 4 percent of federal agencies’ enterprise architecture efforts had matured to the point that they could be considered effective, with one agency attaining the highest stage of maturity. This overall state of affairs was consistent for the three agency types that we surveyed: cabinet-level departments (e.g., Treasury); department component agencies (e.g., IRS, which is a component of Treasury); and independent agencies (e.g., Social Security Administration). We also reported that the state of architecture management across the federal government was attributable to four management challenges that agencies reported facing as they attempt to develop and use architectures. These challenges were (1) limited executive understanding of enterprise architectures, (2) inadequate funding, (3) insufficient skilled staff, and (4) organizational parochialism. Additionally, we recognized OMB’s efforts to promote and oversee agencies’ enterprise architecture efforts. Nevertheless, we determined that OMB’s leadership and oversight could be improved by, for example, using a more structured means of measuring agencies’ progress and by addressing the above management challenges. To this end, our February 2002 report provided OMB with the necessary baseline data, improvement framework, and several recommendations.
OMB generally agreed with our findings and conclusions in that report and stated that it would consider our recommendations. Our 2003 survey results indicate that while some individual agencies have made progress in improving their enterprise architecture management maturity, progress for the federal government as a whole has not occurred. Specifically, while about one-fourth of all agencies improved their enterprise architecture management maturity stage relative to Version 1.0 of our framework, about one-fourth of all agencies decreased in maturity and about one-half of all agencies remained at the same stage. Furthermore, the more demanding standard established by our framework Version 1.1 caused a decline in agency maturity levels, demonstrating that improvements are needed before agencies’ enterprise architecture management practices can be considered effective. The average maturity stage for the 96 responses included in our survey was 1.76 when measured against Version 1.0 of our framework and 1.33 when compared with Version 1.1 of our framework. Appendix IV provides a list of these individual agencies and their maturity stages. Little substantial change was revealed in agencies’ overall enterprise architecture maturity when their efforts were compared with Version 1.0 of our framework. Of the 93 agencies included in both our 2001 and 2003 surveys, 22 agencies (24 percent) increased their respective EAMMF maturity stages, 24 agencies (26 percent) decreased their stages, and 47 agencies (51 percent) remained the same. (See fig. 3.) At the department level, 4 departments increased their maturity stage, 4 decreased, and 6 stayed at the same stage. The Department of Homeland Security—which began operations as a department in March 2003—debuted at Stage 3. Although progress for agencies in the aggregate continued to be limited, departments as a group made the most progress: the average maturity for the 14 departments that responded to both the 2001 and 2003 surveys increased from 1.93 to 2.00 against Version 1.0 of the framework. In contrast, component agencies showed a slight decline in maturity against Version 1.0. Specifically, of the component agencies that responded to both surveys, 9 increased their maturity stage, 15 decreased in maturity, and 31 stayed the same, with the average maturity stage decreasing from 1.69 to 1.62. For independent agencies that responded to both surveys, 9 increased their maturity stage, 5 decreased in maturity, and 10 stayed at the same stage. On average, independent agencies showed an increase in maturity, from 1.75 to 1.96 against Version 1.0. Figure 4 summarizes the maturity status of departments, components, independent agencies, and all agencies, according to Version 1.0 of our framework, and compares our 2001 and 2003 survey results. Most agencies that made progress from 2001 to 2003 moved from a lower maturity stage to Stage 2 or 3 (as shown in fig. 4, most agencies were clustered in Stages 1 and 2, so this is not unexpected). In particular, of the 22 agencies that increased their maturity stage, 6 increased from Stage 1 to Stage 2, and 12 increased from Stage 1 or 2 to Stage 3. Most agencies that regressed fell to Stage 1 from Stages 2 and 3. Specifically, of the 24 agencies that decreased their maturity stage, 16 decreased to Stage 1 from Stage 2 or 3. Figure 5 shows the number of agencies whose maturity levels improved and declined between 2001 and 2003 as measured against Version 1.0 of our maturity framework.
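The following minimal sketch shows one way such movement tallies and group averages can be computed from per-agency stage assignments. The agency names and stages below are hypothetical placeholders, not the actual survey data (the actual assignments appear in app. IV).

```python
# Minimal sketch: tally maturity-stage movement between the 2001 and 2003
# surveys and compute an average stage. The agencies and stages shown here
# are hypothetical placeholders, not the actual survey data.

stages_2001 = {"Agency A": 1, "Agency B": 2, "Agency C": 3, "Agency D": 2}
stages_2003 = {"Agency A": 2, "Agency B": 1, "Agency C": 3, "Agency D": 2}

increased = decreased = unchanged = 0
for agency, stage_2001 in stages_2001.items():
    stage_2003 = stages_2003[agency]
    if stage_2003 > stage_2001:
        increased += 1
    elif stage_2003 < stage_2001:
        decreased += 1
    else:
        unchanged += 1

avg_2001 = sum(stages_2001.values()) / len(stages_2001)
avg_2003 = sum(stages_2003.values()) / len(stages_2003)

print(f"increased: {increased}, decreased: {decreased}, unchanged: {unchanged}")
print(f"average stage: {avg_2001:.2f} (2001) vs. {avg_2003:.2f} (2003)")
```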
Agencies’ progress since our first survey is similarly limited when we consider the total number of core elements satisfied. The 93 agencies that responded to both the 2001 and 2003 surveys satisfied an average of about 11 of the 19 elements in Version 1.0 in both 2001 and 2003. As a whole, the 93 agencies satisfied about 57 percent of all possible framework elements in 2001 and about 60 percent of all possible framework elements in 2003. From 2001 to 2003, agencies showed improvements in satisfying certain core elements, but these improvements were offset by declines in agency satisfaction of other core elements. Examples of core elements where agency satisfaction significantly improved are as follows: “Metrics exist for measuring EA benefits” (about a 38 percent increase), “Chief architect exists” (about a 23 percent improvement), and “EA products are under configuration management” (about an 18 percent increase). Examples of core elements where agency satisfaction significantly declined are as follows: “EA products describe ‘as-is’ environment, ‘to-be’ environment, and sequencing plan” (about a 39 percent decrease); “EA products describe enterprise’s business—and the data, applications, and technology that support it” (about a 36 percent decrease); “Either EA steering committee, investment review board, or agency head has approved EA” (about a 25 percent decrease); and “Program office responsible for EA development exists” (about a 23 percent decrease). Figures 6 to 10 show the number of agencies that satisfied the framework elements in each stage of Version 1.0 in 2001 and in 2003. Appendixes V, VI, and VII provide detailed tables showing each of the 93 agencies’ status regarding the elements of the framework. For the 22 agencies that advanced one or more maturity stages from 2001 to 2003, no single core element accounted for the advancements. Rather, these agencies’ increases in maturity stage are attributable to the fulfillment of 7 core elements spanning three stages of maturity. Table 5 shows those newly satisfied core elements that accounted for increases in maturity stage. As with increases in agency maturity levels, no single core element accounted for the decreases in agency maturity between our 2001 and 2003 surveys. However, as shown in table 6, the Stage 2 framework element requiring a program office was the most significant newly unsatisfied element for the 24 agencies that decreased maturity levels. One factor accounting for decreases in maturity is improved accuracy in agencies’ responses to our survey. Improved accuracy is a function of (1) improved agency familiarity with and understanding of enterprise architecture management and our framework since our last survey and (2) the requirement in our 2003 survey for documentation to support certain survey responses. When compared with Version 1.1 of our framework, the state of enterprise architecture management across the federal government is not mature. In particular, about 21 percent of federal agencies (20 of 96) have the Stage 2 management foundation that is needed to begin successfully developing, implementing, and maintaining an enterprise architecture, and about 79 percent of agencies (76 of 96) have not yet advanced to this basic stage of maturity. One agency, the Executive Office of the President, provided responses placing it at a stage of enterprise architecture management maturity that can be considered mature and effective.
This overall state of federal government maturity is consistent for each of the three groups that make up the 96 agencies surveyed: departments, component agencies, and independent agencies. Figure 11 summarizes the maturity status of departments, component agencies, independent agencies, and all agencies according to Version 1.1 of our framework. No single core element that was added to our framework contributed significantly to this situation, although the “methodology” subelement of the Stage 2 element “EA is being developed with a framework, methodology, and automated tool” was the most significant factor keeping agencies from achieving Stage 2. Specifically, the absence of a “methodology” kept 7 agencies from attaining Stage 2 status. Nevertheless, certain core elements of Version 1.1 of our framework were frequently not satisfied by agencies. Of the 31 core elements in Version 1.1, 17 were not satisfied by over 50 percent of agencies. Furthermore, 8 elements associated with maturity Stages 4 and 5 were not satisfied by over 80 percent of agencies. Figures 12 to 16 show how departments, component agencies, and independent agencies were rated against each of the Version 1.1 core elements. Although significant gaps existed across federal agencies in meeting the core elements of Version 1.1 of the framework, at least 80 percent of agencies reported performing 8 core elements that were related to Stages 2 and 3 of our framework. The most often satisfied elements included the following Stage 2 elements: “EA plans call for describing both the ‘as-is’ and the ‘to-be’ environments of the enterprise, as well as a sequencing plan for transitioning from the ‘as-is’ to the ‘to-be’” (about 94 percent); “EA plans call for describing both the ‘as-is’ and the ‘to-be’ environments in terms of business, performance, information/data, application/service, and technology” (about 90 percent); and “EA plans call for business, performance, information/data, application/service, and technology descriptions to address security” (about 86 percent). The most often satisfied elements also included the Stage 3 element: “EA products describe or will describe both the ‘as-is’ and the ‘to-be’ environments of the enterprise, as well as a sequencing plan for transitioning from the ‘as-is’ to the ‘to-be’” (about 88 percent). In addition, although only one agency has achieved Stage 5, most agencies reported satisfying the Stage 5 core elements requiring that IT investments comply with their enterprise architecture (about 80 percent) and that the enterprise architecture be an integral component of the IT investment management process (about 69 percent). Furthermore, 96 percent of agencies in Stages 1 through 4 are performing at least 1 core element above their current maturity stage, which means that agencies as a whole are, to varying degrees, performing above their assigned maturity stages. Specifically, of the 76 agencies at Stage 1, about 95 percent are performing at least 1 core element in a higher maturity stage. About 35 percent of agencies need to satisfy only 1 additional core element to advance to at least the next maturity stage. Two of these agencies, Commerce and the U.S. Mint, could advance two stages by satisfying just 1 additional core element.
Commerce, currently a Stage 1 agency, could advance to Stage 3 by satisfying the framework element “Program office responsible for development and maintenance exists.” The Mint, also currently a Stage 1 agency, could advance to Stage 3 by satisfying the framework element “Adequate resources exist.” Departments, component agencies, and independent agencies had varying degrees of success satisfying certain core elements within individual stages. In general, departments had more success satisfying lower stage elements than did components and independent agencies. In Stage 2, for example, while 69 percent of departments reported using a framework, methodology, and automated tool to develop their enterprise architecture, only 29 percent of components and 50 percent of independent agencies reported the same. Additionally, in Stage 3, while 81 percent of departments reported that progress against plans is measured and reported, only 25 percent of components and 25 percent of independent agencies reported the same. One possible reason for this situation, which is discussed later in this report, is that OMB’s oversight of agency enterprise architecture efforts focuses on departments and major independent agencies—not on component agencies. Although, as a whole, departments satisfied more lower level framework elements than did component agencies and independent agencies, departments generally still need to satisfy several lower level framework elements to achieve a Stage 3 maturity level. On average, each department needs to satisfy 2 core elements to satisfy all Stage 2 and Stage 3 framework elements. The maturity stage of a department generally was not indicative of the maturity of its component agencies. For example, the Departments of Health and Human Services and Transportation reached Stage 2, while their component agencies averaged Stage 1. DOD’s Global Information Grid (GIG) architecture was at Stage 3 and its Business Enterprise Architecture was at Stage 1, while DOD components averaged slightly over Stage 1. Conversely, the Departments of Commerce, Justice, and the Treasury were at Stage 1, with their component agencies averaging higher maturity levels. Component agencies of Commerce showed a slightly higher maturity level than did component agencies of other departments. Although the average maturity level of the 56 department component agencies we surveyed was 1.23, the five Commerce component agencies showed an average maturity level of 1.80, largely owing to the maturity levels of the Bureau of the Census (Stage 3), the U.S. Patent and Trademark Office (Stage 2), and the National Oceanic and Atmospheric Administration (Stage 2). The Department of Agriculture’s maturity level (Stage 1) was the same as the average maturity level of its component agencies. Figure 16 summarizes the average maturity level for departments and their respective component agencies. The results of our survey and analysis of survey responses against Version 1.1 of our maturity framework show that the Executive Office of the President (EOP) is the sole Stage 5 agency. However, 7 other agencies are close to becoming models of enterprise architecture management. For example, the DOD GIG architecture and IRS, both of which attained Stage 3 of Version 1.1, need to satisfy only 3 more elements to become Stage 5 agencies.
To achieve Stage 5, the GIG architecture needs to satisfy the Stage 4 element “EA products describe both the ‘as-is’ and the ‘to-be’ environments of the enterprise, as well as a sequencing plan for transitioning from the ‘as-is’ to the ‘to-be’” and the Stage 5 elements “Return on EA investment is measured and reported” and “Organization head has approved current version of EA.” IRS could become a Stage 5 agency by satisfying the Stage 4 elements “Business, performance, information/data, application/service, and technology descriptions address security” and “EA products and management processes undergo independent verification and validation” and the Stage 5 element “Return on EA investment is measured and reported.” Table 7 shows the agencies that need to satisfy 5 or fewer elements to achieve Stage 5 under Version 1.1. OMB has taken a number of steps to promote, standardize, and improve enterprise architecture use across the government. For example, OMB now requires agencies to submit enterprise architectures for review. It also leads various CIO Council initiatives to develop the FEA, including associated models, and to facilitate cross-agency efforts and major initiatives such as e-government. However, despite OMB’s actions, the same management challenges facing agencies 2 years ago have increased in prevalence, and agencies report mixed results from OMB’s efforts to address these challenges. The persistence of these challenges can be attributed, at least in part, to the office not implementing our prior recommendations aimed at addressing them and improving its enterprise architecture oversight. OMB recognizes the importance of enterprise architectures and has supported their use since the passage of the Clinger-Cohen Act of 1996, with particular emphasis and attention in the last 2 years. For example, in collaboration with us and others, OMB issued guidance on the purpose and use of enterprise architectures shortly after passage of the act. It has also incorporated enterprise architecture considerations into its oversight processes and issued guidance directing that agency IT investments be based on agency IT enterprise architectures. More recently, it has launched efforts to promote the development and use of enterprise architectures through the budget process and various CIO Council initiatives. As a means of promoting agencies’ enterprise architecture use, OMB has also included requirements for having and using enterprise architectures as part of the budget process, which began with the fiscal year 2002 budget cycle and, according to OMB officials, has continued through the current cycle (fiscal year 2005). More specifically: For the fiscal year 2002 budget cycle, OMB required agency budget submissions to provide investment plans in several areas, including enterprise architectures. For fiscal year 2003, OMB required departments and major agencies that are CIO Council members to address how IT investment decision making incorporated architecture alignment and, for agencies that do not have architectures, to provide a plan for developing one. OMB also assessed the status of major department and agency architectures against the CIO Council’s Practical Guide for Federal Enterprise Architecture and reported the assessment results in the President’s fiscal year 2003 budget.
However, this assessment covered only 23 of the 96 agencies included in this survey, and assessment results were not reported in a way that permits a clear understanding of the agencies’ enterprise architecture management status or facilitates year-to-year progress determinations. For example, for the Environmental Protection Agency (EPA), the assessment resulted in the following report: “EPA has the fundamental elements of an EA documented.” As part of the fiscal year 2004 budget cycle, OMB again assessed major department and agency architectures and reported the assessment results in the President’s fiscal year 2004 budget. However, the assessment again was less comprehensive and meaningful than our survey results, covering only 22 of the 96 agencies included in this survey. For example, for the Department of Agriculture, OMB reported, “USDA’s EA is continuing to focus on the business, data, application, and technology layers of the EA. USDA is also working to integrate the EA efforts throughout the department.” Also for the fiscal year 2004 cycle, the office evaluated major IT investment business cases for consistency with agency architectures and with the FEA business reference model. OMB has also worked through the CIO Council, which is co-chaired by OMB’s Deputy Director for Management, to improve enterprise architecture management and use. Specifically, the CIO Council established the Architecture and Infrastructure Committee to, for example, develop simpler and more consistent enterprise architecture terminology and facilitate cross-agency enterprise architecture efforts. This committee has three subcommittees that, since being chartered in October 2002, have organized, appointed leaders, established membership, and begun implementing plans. The name and objective of each subcommittee are provided below. The Enterprise Architecture Governance Subcommittee was established to provide policy guidance and advice and assistance in the definition, design, and implementation of enterprise architecture discipline and practice throughout the federal government. It is expected to support the alignment of the FEA with agency enterprise architectures and to serve as the core federal group providing advocacy for enterprise architecture integration of business and technology architectures across state, local, and international boundaries. The Emerging Technology Subcommittee was created to identify technologies with the potential to improve the value and quality of the FEA. The Component Subcommittee is expected to foster the identification, maturation, use, and reuse of component-based architectures and architectural components in the federal government. OMB’s development of the FEA is intended to facilitate governmentwide improvement through cross-agency analysis and the identification of duplicative investments, gaps, and opportunities for collaboration, interoperability, and integration within and across government agencies. According to OMB, the result will be a more citizen-centered, customer-focused government that maximizes technology investments to better achieve mission outcomes. As previously mentioned, the FEA will be composed of five reference models: Business reference model. The business reference model serves as the foundation for the FEA. It is intended to describe the federal government’s businesses, independent of the agencies that perform them.
The model consists of four business areas: (1) services for citizens, (2) mode of delivery, (3) support delivery of services, and (4) management of government resources. These four business areas are decomposed into 39 lines of business, which are made up of 153 subfunctions. Examples of lines of business under the services for citizens business area are homeland security, law enforcement, and economic development. Each of these lines of business includes a number of subfunctions. For example, for the homeland security line of business, a subfunction is border and transportation security; for law enforcement, a subfunction is citizen protection; and for economic development, a subfunction is financial sector oversight. Version 1.0 of the model was released to agencies in July 2002 and was used in the fiscal year 2004 budget process. According to OMB, Version 1.0 of the model revealed that many federal agencies were involved in each line of business, and that agencies’ proposed fiscal year 2004 IT investments offered multibillion-dollar consolidation opportunities. In June 2003, Version 2.0 was released, which, according to OMB, reflects changes to align the model with other governmentwide management frameworks (e.g., budget function codes) and improvement initiatives (e.g., the President’s Budget Performance Integration Initiative) and addresses comments from agencies. OMB expects agencies to use the model, as part of their capital planning and investment control processes, to help identify opportunities to consolidate IT investments across the federal government. Service component reference model. The service component reference model is intended to identify and classify IT service (i.e., application) components that support federal agencies and promote the reuse of components across agencies. The model is organized as a hierarchy, beginning with seven service domains, as shown in table 8. These service domains are decomposed into 29 service types, which are further broken down into 168 components. For example, the customer services domain is made up of 3 service types: customer relationship management, customer preferences, and customer-initiated assistance. Components of the customer relationship management service type include call center management and customer analytics; components of the customer preferences service type include personalization and subscriptions; and components of the customer-initiated assistance service type include on-line help and on-line tutorials. Version 1.0 of the service component reference model was released in June 2003. The model is intended to help agencies and OMB identify, among other things, agencies that are building or have already built similar service components that can be reused. Technical reference model. The technical reference model is intended to describe the standards, specifications, and technologies that collectively support the secure delivery, exchange, and construction of service components. The model is made up of the following four core service areas: Service access and delivery: the collection of standards and specifications that support external access, exchange, and delivery of service components. Service platform and infrastructure: the delivery platforms and infrastructure that support the construction, maintenance, and availability of a service component or capabilities. Component framework: the underlying foundation, technologies, standards, and specifications by which service components are built, exchanged, and deployed.
Service interface and integration: the collection of technologies, methodologies, standards, and specifications that govern how agencies will interface internally and externally with a service component. Each of these service areas is made up of service categories, which identify lower levels of technologies, standards, and specifications; service standards, which define the standards and technologies that support the service category; and the service specification, which details the standard specification or the provider of the specification. For example, within the first core service area (service access and delivery), a service category is access channels, and its service standards include Web browsers and wireless personal digital assistants. Examples of service specifications for the Web browser service standard are Internet Explorer and Netscape Navigator. Version 1.0 of the technical reference model was released in January 2003, followed in August 2003 by Version 1.1, which reflected minor revisions based, in part, on agencies’ reviews. The model is intended to help agencies in defining their target technical architectures. Performance reference model. The performance reference model is intended to describe a set of performance measures for the federal government (i.e., outcome and output measures for each line of business and subfunction identified in the business reference model). Thus, the model is expected to support the measurement of cross-agency initiatives. Version 1.0 of the model was released in September 2003. Data and information reference model. The data and information reference model is intended to describe the type of data and information that support program and business line operations and the relationships among these types. Thus, the model is to help describe the types of interactions and information exchanges that occur between the government and its customers. OMB plans to release Version 1.0 of the model in October 2003. Because each of these reference models is a fixed classification hierarchy, its structure lends itself to simple automated representation; a sketch illustrating this appears at the end of this discussion. For the fiscal year 2005 budget cycle, OMB officials told us that they will use the FEA performance, service component, and technical reference models to evaluate agencies’ major IT investments. Agency responses to our survey indicated high levels of understanding and support for OMB’s FEA work. For example, about 80 percent of agencies responded that they understand the goals and objectives of the FEA (about 8 percent did not) and that they support those goals and objectives (about 6 percent did not), and about 72 percent of agencies responded that their agency’s architecture is traceable to the FEA (about 6 percent responded that it was not). Additionally, about 67 percent responded that they understand the approach to developing the FEA (about 13 percent did not), and about 63 percent stated that they support this approach (about 10 percent did not). About 61 percent of agencies responded that their enterprise architecture would change as a result of the FEA (about 8 percent responded that it would not). (See table 9.)
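The following minimal sketch shows one way the reference model taxonomies described above could be represented for automated analysis. It is populated only with the illustrative entries named in this report; everything else is omitted, and the flattening helper is a hypothetical convenience for demonstration, not part of any FEA product.

```python
# Partial sketch of the FEA reference model taxonomies, populated only with
# the example entries cited in this report. The actual models are far larger
# (e.g., 39 lines of business and 153 subfunctions in the business reference
# model; 7 domains, 29 service types, and 168 components in the service
# component reference model).

business_reference_model = {
    "services for citizens": {  # 1 of 4 business areas
        "homeland security": ["border and transportation security"],
        "law enforcement": ["citizen protection"],
        "economic development": ["financial sector oversight"],
    },
    # The other business areas (mode of delivery, support delivery of
    # services, and management of government resources) are omitted here.
}

service_component_reference_model = {
    "customer services": {  # 1 of 7 service domains
        "customer relationship management": ["call center management",
                                             "customer analytics"],
        "customer preferences": ["personalization", "subscriptions"],
        "customer-initiated assistance": ["on-line help", "on-line tutorials"],
    },
}

technical_reference_model = {
    "service access and delivery": {   # core service area
        "access channels": {           # service category
            "Web browsers": ["Internet Explorer", "Netscape Navigator"],
        },                             # service standard -> specifications
    },
}

def leaves(taxonomy):
    """Flatten a nested dict taxonomy to its leaf entries."""
    if isinstance(taxonomy, dict):
        return [leaf for child in taxonomy.values() for leaf in leaves(child)]
    return list(taxonomy)

print(leaves(business_reference_model))   # the example subfunctions
print(leaves(technical_reference_model))  # the example specifications
```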
Despite OMB’s architecture-related activities, agencies continue to face the same management challenges that we identified 2 years ago—that is, obtaining top management support and commitment, overcoming parochialism, and having the requisite resources (financial and human capital) to get the job done. Moreover, the percentage of agencies identifying these management challenges has grown. For example, getting top management to understand the purpose, content, and value of architectures was seen as a challenge by about 50 percent of agencies—up from 39 percent in our last survey. As our framework recognizes, obtaining executive understanding and support is essential to having an effective enterprise architecture program. Without it, agencies will have increased difficulty in addressing other challenges, such as overcoming parochialism among organizational components and obtaining requisite resources (funding and human capital). Our survey results bear this out: at the same time that the percentage of agencies identifying top management understanding and support as a challenge rose, the percentages of agencies identifying almost all of the other challenges rose as well. For example, the percentage that identified parochialism as a challenge grew from 39 to 47 percent. Also, while 50 percent of agencies continued to report funding as a significant challenge, the percentage of agencies that reported obtaining skilled staff as a challenge grew from 32 to 49 percent. (See table 10.) Agencies also reported mixed levels of satisfaction with OMB’s efforts to address these management challenges. Specifically, just over half of agencies were satisfied with OMB’s efforts to foster top management understanding and to overcome agency component organization parochialism (58 and 53 percent, respectively). Moreover, fewer than half of agencies (40 percent) were satisfied with OMB’s actions to address their enterprise architecture funding and staffing challenges. (See table 11.) Our February 2002 report concluded that OMB needed to advance the level of enterprise architecture management maturity by exercising improved oversight and identifying governmentwide solutions to common enterprise architecture management challenges facing agencies. Specifically, we recommended that the OMB Director, in collaboration with the federal CIO Council, use the maturity framework and agency baseline information provided in our February 2002 report as the basis for helping agencies to advance the state of their respective enterprise architecture development, implementation, and maintenance efforts, and for measuring agency progress. We further recommended that, in doing so, the director require each of the 116 agencies covered in our 2002 report to (1) submit to OMB an annual update of the agency’s satisfaction of each of the core elements contained in the maturity framework and (2) have this update verified by the agency’s inspector general or comparable audit function before it is submitted to OMB. Additionally, we recommended in our 2002 report that the OMB Director, in collaboration with the CIO Council, develop and implement a plan to address the governmentwide impediments to greater agency use of enterprise architectures. We recommended that, at a minimum, this plan address the two primary challenges identified in the 2002 report—that is, agency executive management understanding of enterprise architectures and the availability of enterprise architecture human capital expertise. Finally, we recommended that the director report annually to the Senate Committee on Governmental Affairs and the House Committee on Government Reform on the results of OMB’s annual update of the state and progress of federal agencies’ enterprise architecture efforts. OMB officials generally agreed with the findings and conclusions of our 2002 report and stated that they would consider using our framework.
However, after 18 months, OMB officials told us that they were still considering using our framework as a basis for evaluating agencies’ progress in developing and implementing their architectures but had not committed to doing so, because they were still reviewing our framework alongside other potential evaluation tools. Additionally, the office did not report any plans to address governmentwide impediments to greater agency use of architectures. Further, OMB reported that it has provided, and plans to continue providing, information to the Congress on the state of agency enterprise architecture efforts and on progress in implementing the FEA. Overall, the federal government’s state of enterprise architecture management remains less than satisfactory, with little progress being made over the last 2 years. As a result, most federal agencies continue to run the serious risk of investing in IT solutions that will not overcome, but rather will perpetuate, long-standing incompatibilities and duplication within agency operational and systems environments. OMB has taken steps to promote the development and use of enterprise architectures; however, these steps have yet to produce desired results. It is thus important for OMB to take additional actions, such as those that we have previously recommended and OMB has yet to implement. To do less would leave agency IT investments unnecessarily exposed to being duplicative, incompatible, and needlessly costly. We reiterate the recommendations we made in our February 2002 report on the governmentwide status of enterprise architecture use, with the modification that OMB use Version 1.1 of our framework and the baseline data from our 2003 survey included in this report, rather than Version 1.0 of our framework and our 2001 survey data. Additionally, we recommend that the OMB Director, in developing and implementing the plan we previously recommended to address governmentwide impediments to greater agency use of enterprise architectures, ensure that the plan provides for identifying agencies that have effectively overcome enterprise architecture management challenges and for sharing those and other lessons learned and best practices. Also, we recommend that the director, in annually reporting to the Senate Committee on Governmental Affairs and the House Committee on Government Reform, as we previously recommended, include in the report what steps have been taken to implement our recommendations, including reasons for not adopting our maturity framework. In oral comments on a draft of this report, officials from OMB’s Office of Information and Regulatory Affairs and the Federal Enterprise Architecture Program Management Office stated that they generally agreed with our findings and recommendations. They also stated that they agreed with the need for agency assessments using Version 1.1 of our framework, and that these assessments should be independently verified. They added that fully implementing our recommendations would require sustained management attention. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to interested congressional committees, the OMB Director, and agencies that participated in our survey. We will also provide copies to others on request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this information, please contact me at (202) 512-3439 or by e-mail at [email protected]. Key contributors to this report are listed in appendix IX. In response to our 2003 survey, agencies reported additional information related to the implementation of their enterprise architectures. This information includes architecture benefits and architecture tool, framework, methodology, and contractor experiences. Office of Management and Budget (OMB) policy, Chief Information Officer (CIO) Council guidance, and our research and reviews of agencies’ management of information technology (IT) have identified multiple benefits of effectively using enterprise architectures, including avoiding duplication between IT systems, promoting integration of systems, reducing system-related costs, and optimizing agency mission performance. Agency responses to our 2001 survey affirmed these benefits and identified additional ones, such as lower system-related costs and benefits related to enhanced productivity and improved efficiency. Agencies responding to our 2003 survey reported similar benefits. For example, benefits related to improved systems interoperability were cited by 53 percent of agencies, while improved organization and change management were cited by 51 percent of agencies. Also, enhanced productivity and lower system-related costs were cited by 41 percent and 39 percent, respectively. Table 12 shows the benefits that were most frequently identified by survey respondents. One new benefit cited by 56 percent of agencies was the use of “enterprise licenses.” Such licenses take advantage of the economies of scale associated with purchasing a large number of commercial product licenses. An automated enterprise architecture tool serves as the repository of architecture artifacts, which are work products that are produced and used to capture and convey architectural information. An agency’s choice of tool should be based on a number of considerations, including agency needs and the size and complexity of the architecture. Agencies reported using various automated tools to develop and maintain their enterprise architectures. The most commonly identified tools were Microsoft Office (72 agencies), System Architect (31 agencies), the Enterprise Architecture Management System (18 agencies), Rational Rose (17 agencies), Metis (11 agencies), and Framework (7 agencies). Forty-one agencies reported using “other” architecture tools. Figure 17 shows the proportion of agencies using each architecture tool. Agencies reported different levels of satisfaction with the enterprise architecture tools they are using. As shown in table 13, about 68 percent of agencies using System Architect were satisfied, about 73 percent of agencies using Metis were satisfied, and about 61 percent of agencies using Microsoft’s Office Suite were satisfied. In contrast, about 17 percent of agencies using the EA Management System were satisfied (about 67 percent of agencies using the EA Management System responded that it was too early to comment on satisfaction levels), and about 41 and 43 percent of agencies using Rational Rose and Framework, respectively, were satisfied. With respect to agencies’ dissatisfaction with their tools, about 3 percent of agencies using System Architect were dissatisfied, and about 13 percent of agencies using Microsoft’s Office Suite were dissatisfied.
Also, about 11 percent of agencies using the EA Management System were dissatisfied, and about 12 and about 29 percent of agencies using Rational Rose and Framework, respectively, were dissatisfied with those tools. No agencies using Metis were dissatisfied. An enterprise architecture framework (or model) provides a formal structure for representing the architecture and serves as the basis for the nature and content of the specific products that the agency plans to develop, use, and maintain. As such, a framework helps to ensure the consistent representation of information from across the organization and supports orderly capture and maintenance of architecture content. Agencies reported using various frameworks. The most frequently cited frameworks in our survey responses were the Federal Enterprise Architecture Framework (FEAF) (61 agencies), the Federal Enterprise Architecture Program Management Office (FEAPMO) Reference Models (56 agencies), and the Zachman Framework (36 agencies). Figure 18 shows the proportion of agencies using each framework. Other frameworks used included the Treasury Enterprise Architecture Framework (TEAF); the National Institute of Standards and Technology Framework (NIST framework); the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) Framework; and the Department of Defense Architecture Framework (DoDAF). Agencies reported different levels of satisfaction with the enterprise architecture frameworks they are using, as shown in table 14. The levels of satisfaction ranged from 81 percent, reported by agencies using the Zachman Framework, to 45 percent, reported by agencies using the NIST framework. As table 14 also shows, of the 209 responses, few reported dissatisfaction. An enterprise architecture methodology provides a common set of procedures for developing architecture products and, if implemented properly, helps to ensure consistency in the procedures used across the organization for developing and maintaining the enterprise architecture. An organization’s methodology or methodologies should govern how the architecture products will be developed, maintained, and validated. Methodologies need to be documented, understood, and consistently applied. They should prescribe the standards, steps, tools, techniques, and measures to be used to provide reasonable assurance that expected product quality is attained. Less than half (41 percent) of the federal agencies that we surveyed had selected a methodology. About 55 percent (23 of 42) of the methodologies that agencies reported using were Spewak’s enterprise architecture planning methodology or a variation of it. Four of the remaining 19 methodologies were developed by META Group, and 2 were developed by Gartner, Inc. Two agencies cited James Martin’s Information Strategy Planning, and 2 agencies cited the Department of Commerce’s Enterprise Architecture Methodology. The remaining 21 percent (9 of 42) were unique methodologies. Agencies reported heavy use of contractor support for developing their respective architectures. Most agencies (72 of the 92 agencies that responded to this question—78 percent) stated that their architectures were developed in-house with contractor support. Ten agencies (11 percent) reported that contractors developed their enterprise architectures. Ten agencies (11 percent) reported that they developed their enterprise architectures in-house without any contractor support.
Table 15 describes the level of contractor use, by agency type. Agency-reported data revealed a wide variance in the cost of developing, completing, and maintaining enterprise architectures. Agencies generally reported that their architecture development costs could be allocated to several categories, with the majority of costs attributable to agency and contractor personnel. As we have previously reported, the scope and nature of the enterprise and the extent of enterprise transformation and modernization envisioned will dictate the depth and detail of the architecture to be developed and maintained. Restated, the architecture should be tailored to the individual enterprise and that enterprise’s intended use of the architecture. Accordingly, the level of resources that an agency invests in its architecture is likely to vary. Agency responses to our survey showed this to be the case. Agencies that provided cost data reported spending $599 million to date on the development of architectures, with individual agency development costs to date ranging from $5,000 to $248 million. Departments’ architecture development costs varied more than component and independent agencies’ costs, while component agencies reported spending the most to date, with independent agencies spending the least. Agencies reported estimated costs to complete architecture development ranging from $3,000 to $319 million, and annual estimated maintenance costs ranging from $1,000 to $36 million. Figures 19 through 27 depict the variability of cost data reported by departments, component agencies, and independent agencies. Of the $599 million reported in architecture development costs, agencies allocated $511 million to the following seven cost categories that we identified in our questionnaire: agency personnel, contractor personnel, tools, methodologies, independent validation and verification, training, and other. For those agencies that reported and allocated costs, the majority of these costs were for agency and contractor personnel—$116.7 million (23 percent) were attributed to agency personnel and $188.9 million (37 percent) were attributed to contractor personnel. About $193.3 million (38 percent) were attributed to “other” costs, $7.1 million (1 percent) to architecture tools, and $3.9 million (eight-tenths of 1 percent) to independent validation and verification contract personnel. Further, $1.0 million (two-tenths of 1 percent) of costs were attributed to methodologies and another $1.0 million (two-tenths of 1 percent) to training. Figure 28 shows the architecture development costs by category; the sketch below reproduces this percentage arithmetic. Table 16 shows enterprise architecture development, completion, and maintenance costs for each agency that provided cost data.
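A short sketch reproducing the category-share arithmetic from the dollar amounts reported above (amounts in millions of dollars; the printed percentages match those in the text to within rounding):

```python
# Reproduce the cost-category shares from the roughly $511 million that
# agencies allocated across the seven categories in our questionnaire
# (amounts in millions of dollars, as reported above).

allocated = {
    "agency personnel": 116.7,
    "contractor personnel": 188.9,
    "tools": 7.1,
    "methodologies": 1.0,
    "independent validation and verification": 3.9,
    "training": 1.0,
    "other": 193.3,
}

total = sum(allocated.values())  # about $511 million
for category, amount in allocated.items():
    print(f"{category}: ${amount:.1f}M ({amount / total:.1%})")
```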
Our objectives were to determine (1) what progress federal agencies have made in effectively developing, implementing, and maintaining their enterprise architectures and (2) the actions of the Office of Management and Budget (OMB) to advance the state of enterprise architecture development and use across the federal government. To address our objectives, we obtained and reviewed relevant guidance on enterprise architectures, such as OMB Circular A-130 and guidance published by the federal Chief Information Officers (CIO) Council, including the Federal Enterprise Architecture Framework Version 1.1 and the Practical Guide. We also researched our past reports and guidance on the management and use of enterprise architectures, including the results of our 2001 governmentwide enterprise architecture survey and our enterprise architecture management maturity framework. Next, we used the CIO Council’s Practical Guide and our enterprise architecture management maturity framework to develop two data collection instruments—one for federal departments and one for agencies that are either components within a department or are independent (see app. VIII). We pretested our survey instruments at one federal department and one component agency. To ensure consistency and comparability with our 2001 governmentwide enterprise architecture survey, we based our survey population on the same 116 agencies, with appropriate additions and deletions. These agencies consisted of all cabinet-level departments, major component agencies within departments, and other independent agencies. We modified our 2001 survey population to reflect the federal government’s reorganization of March 1, 2003, in which the Department of Homeland Security (DHS) and its directorates (i.e., component agencies) became operational, resulting in the addition of 5 agencies. At the same time, the establishment of DHS resulted in 4 agencies that were included in our 2001 survey being eliminated from our survey population because they were absorbed into DHS directorates. We also eliminated the U.S. Marine Corps as a separate agency within our population so that the Department of the Navy, at its request, could provide a single response for the Navy and the Marine Corps. Table 17 lists additions to and deletions from our 2001 survey population and provides explanations for each change. For each of the 116 agencies, we identified the CIO or comparable official, notified that official of our work, and distributed the appropriate survey instrument to designated officials via e-mail. We also discussed the purpose and content of the survey instrument with agency officials when requested. After receiving our survey, officials from DHS and the Departments of the Interior and Veterans Affairs told us that their respective architectures cover their component agencies and, thus, a single response would be provided. (When departments opted to provide a departmental response inclusive of component agencies, our analysis pertains to the department as a whole. Conversely, when departments and their component agencies reported separately, our departmental analysis is exclusive of component agencies.) Additionally, officials from the Department of Agriculture’s Farm Service Agency, Natural Resources Conservation Service, and Rural Utilities Service told us they would provide a single response reflecting the Service Center Modernization Initiative, which encompasses those three component agencies. We agreed with these proposed approaches. Both the Department of Defense’s Business Enterprise Architecture and Agriculture’s previously mentioned Service Center Modernization Initiative were the subjects of responses that we had not solicited as part of our survey population; we included these responses in our analysis and in this report. Tables 18 and 19 show the consolidated, omitted, and additional responses that led to the difference between our survey population of 116 agencies and the 96 respondents included in this report, including an explanation for each adjustment.
The timing of the 96 responses varied, ranging from April 1 to July 9, 2003, and thus the determinations in this report regarding the state of enterprise architecture development and use and progress at specific agencies and groups of agencies are linked to particular points in time. Appendixes V, VI, and VII, which contain the results of our analysis of each agency’s response to our survey, identify the date that each agency responded. To verify the accuracy of agencies’ responses to our survey regarding enterprise architecture management policies, organizations, and responsibilities, we required agencies to submit documentation or additional information for survey questions related to certain framework criteria. Specifically, this requirement applied to questions 6 to 11, 18, 20 to 24, 26, and 35 to 39. Although our survey requested that agencies provide data about the status of various enterprise architecture products, we did not independently verify the data that agencies provided about the comprehensiveness or completeness of their architecture products. Additionally, we contacted agency officials when necessary to clarify their responses. To determine the progress of federal agencies’ enterprise architecture efforts, we analyzed agency survey responses using Version 1.0 of our maturity framework and compared them with the results of our 2001 survey, which were also based on Version 1.0. We also analyzed survey responses using Version 1.1 of our maturity framework to establish a new baseline against which future progress can be measured. When an agency’s response and our subsequent analysis indicated that it did not meet a core element as defined in the framework, we assigned that agency to the next lowest stage of framework maturity (i.e., to achieve a given stage of maturity, an agency must meet all core elements at that stage and at all lower stages). For example, if an agency satisfied all Stage 2 and Stage 4 elements but did not satisfy one Stage 3 element, that agency is considered to be a Stage 2 agency. (The sketch following this discussion illustrates this assignment rule.) When determining agency maturity levels, we did not consider whether agency enterprise architecture plans or products included “performance,” because explicitly including enterprise performance data is a relatively new concept, and there was a minimal amount of federal guidance related to enterprise performance data available to agencies at the time our surveys were distributed. Tables 20 to 23 show the relationship between the survey questions and the framework elements for Version 1.0 of the framework, as well as identify where documentation was required to support answers. Tables 24 to 27 show the relationship between the survey questions and the framework elements for Version 1.1 of the framework. After compiling agency responses and determining agencies’ respective maturity stages, we analyzed responses across different slices of our respondent population to determine patterns and issues. Finally, to determine OMB’s actions to oversee agency enterprise architecture management efforts, we analyzed relevant policy and budget guidance; obtained information about OMB’s roles in the CIO Council and its efforts to develop and use the Federal Enterprise Architecture (FEA), including OMB’s use of the FEA in the budget process; and interviewed OMB officials about ongoing and planned management actions. We also analyzed agency responses to survey questions regarding OMB’s enterprise architecture-related oversight and guidance.
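Stated precisely, an agency’s maturity stage under this rule is the highest stage for which all core elements at that stage and at every lower stage are satisfied. The following minimal sketch implements the rule and reproduces the example above; the number of elements shown per stage is a placeholder, not the framework’s actual per-stage counts.

```python
# Minimal sketch of the maturity-stage assignment rule: an agency attains a
# given stage only if it satisfies every core element at that stage and at
# all lower stages. Stage 1 has no core elements, so every agency is at
# least Stage 1. The per-stage element counts below are placeholders.

def assign_stage(satisfied):
    """satisfied maps stage number -> list of booleans, one per core element.
    Returns the highest stage whose elements (and those of all lower
    stages) are all satisfied."""
    stage = 1
    for s in sorted(satisfied):
        if all(satisfied[s]):
            stage = s
        else:
            break
    return stage

# The report's example: all Stage 2 and Stage 4 elements satisfied, but one
# Stage 3 element missed -- the agency is assigned Stage 2.
example = {
    2: [True, True, True],
    3: [True, False, True],
    4: [True, True],
    5: [False, False],
}
print(assign_stage(example))  # -> 2
```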
We conducted our work in the Washington, D.C., metropolitan area, from September 2002 to November 2003, in accordance with generally accepted government auditing standards. The following table presents three assessments of the maturity stage of each listed organization on the basis of the following: (1) responses to our 2001 survey evaluated against Version 1.0 of our framework, (2) responses to our 2003 survey evaluated against Version 1.0 of our framework, and (3) responses to our 2003 survey evaluated against Version 1.1 of our framework. The Department of Agriculture provided its 2001 survey responses on July 9, 2001, and its 2003 responses on May 12, 2003. The Department of Commerce provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 18, 2003. The Department of Defense provided its 2001 survey responses on July 25, 2001, and its 2003 response for its Global Information Grid on June 5, 2003. The Department of Defense provided its 2003 response for its Business Enterprise Architecture on May 30, 2003. The department did not provide a similar response to our 2001 survey. The Department of Education provided its 2001 survey responses on July 23, 2001, and its 2003 responses on April 28, 2003. The Department of Energy provided its 2001 survey responses on June 28, 2001, and its 2003 responses on April 23, 2003. The Department of Health and Human Services provided its 2001 survey responses on August 14, 2001, and its 2003 responses on May 12, 2003. The Department of Homeland Security was not involved in our 2001 survey because it was established on March 1, 2003. It provided its 2003 responses on June 10, 2003. The Department of Housing and Urban Development provided its 2001 survey responses on June 28, 2001, and its 2003 responses on April 21, 2003. The Department of the Interior provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 21, 2003. The Department of Justice provided its 2001 survey responses on July 10, 2001, and its 2003 responses on May 20, 2003. The Department of Labor provided its 2001 survey responses on July 2, 2001, and its 2003 responses on April 17, 2003. The Department of State provided its 2001 survey responses on July 13, 2001, and its 2003 responses on May 12, 2003. The Department of Transportation provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 24, 2003. The Department of the Treasury provided its 2001 survey responses on June 28, 2001, and its 2003 responses on April 26, 2003. The Department of Veterans Affairs provided its 2001 survey responses on August 17, 2001, and its 2003 responses on April 21, 2003. The Agricultural Marketing Service provided its 2001 survey responses on July 9, 2001, and its 2003 responses on May 13, 2003. The Agricultural Research Service provided its 2001 survey responses on July 13, 2001, and its 2003 responses on April 18, 2003. The Animal and Plant Health Inspection Service provided its 2001 survey responses on June 26, 2001, and its 2003 responses on April 21, 2003. The Cooperative State Research, Education, and Extension Service provided its 2001 survey responses on July 9, 2001, and its 2003 responses on April 16, 2003. The Food and Nutrition Service provided its 2001 survey responses on July 17, 2001, and its 2003 responses on April 24, 2003. The Food Safety and Inspection Service provided its 2001 survey responses on July 9, 2001, and its 2003 responses on June 10, 2003. Another component agency provided its 2001 survey responses on July 12, 2001, and its 2003 responses on May 5, 2003.
The Forest Service provided its 2001 survey responses on August 3, 2001, and its 2003 responses on April 21, 2003. The Risk Management Agency provided its 2001 survey responses on July 27, 2001, and its 2003 responses on May 6, 2003. The Service Center Modernization Initiative provided its responses on May 16, 2003. The Bureau of the Census provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 21, 2003. The Economic Development Administration provided its 2001 survey responses on July 10, 2001, and its 2003 responses on April 28, 2003. The International Trade Administration provided its 2001 survey responses on June 26, 2001, and its 2003 responses on April 29, 2003. The National Oceanic and Atmospheric Administration provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 21, 2003. The U.S. Patent and Trademark Office provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 21, 2003. The Ballistic Missile Defense Organization provided its 2001 survey responses on July 25, 2001, and its 2003 responses on June 10, 2003. The Defense Advanced Research Projects Agency provided its 2001 survey responses on July 25, 2001, and its 2003 responses on April 7, 2003. The Defense Commissary Agency provided its 2001 survey responses on July 25, 2001, and its 2003 responses on June 9, 2003. The Defense Contract Audit Agency provided its 2001 survey responses on July 25, 2001, and its 2003 responses on May 30, 2003. The Defense Contract Management Agency provided its 2001 survey responses on July 3, 2001, and its 2003 responses on May 30, 2003. The Defense Information Systems Agency provided its 2001 survey responses on July 11, 2001, and its 2003 responses on June 10, 2003. July 25, 2001, and its 2003 responses on June 20, 2003. The Defense Logistics Agency provided its 2001 survey responses on July 25, 2001, and its 2003 responses on May 22, 2003. The Defense Security Cooperation Agency provided its 2001 survey responses on July 25, 2001, and its 2003 responses on June 19, 2003. The Defense Security Service provided its 2001 survey responses on July 25, 2001, and its 2003 responses on June 9, 2003. The Defense Threat Reduction Agency provided its 2001 survey responses on July 25, 2001, and its 2003 responses on May 29, 2003. July 27, 2001, and its 2003 responses on June 2, 2003. The Department of the Army provided its 2001 survey responses on July 25, 2001, and its 2003 responses on June 2, 2003. The Department of the Navy provided its 2001 survey responses on July 25, 2001, and its 2003 responses on June 9, 2003. The National Imagery and Mapping Agency provided its 2001 survey responses on July 25, 2001, and its 2003 responses on June 6, 2003. The Administration for Children and Families provided its 2001 survey responses on June 29, 2001, and its 2003 responses on May 12, 2003. The Agency for Healthcare Research and Quality provided its 2001 survey responses on July 12, 2001, and its 2003 responses on May 12, 2003. The Centers for Disease Control and Prevention provided its 2001 survey responses on July 23, 2001, and its 2003 responses on May 12, 2003. The Centers for Medicare and Medicaid Services provided its 2001 survey responses on June 29, 2001, and its 2003 responses on May 12, 2003. The Food and Drug Administration provided its 2001 survey responses on July 13, 2001, and its 2003 responses on May 12, 2003. 
The Health Resources and Services Administration provided its 2001 survey responses on June 29, 2001, and its 2003 responses on May 12, 2003. The Indian Health Service provided its 2001 survey responses on June 29, 2001, and its 2003 responses on May 12, 2003. The Program Support Center provided its 2001 survey responses on June 29, 2001, and its 2003 responses on May 12, 2003. The Bureau of Alcohol, Tobacco, Firearms and Explosives provided its 2001 survey responses on July 16, 2001, and its 2003 responses on April 21, 2003. The Drug Enforcement Administration provided its 2001 survey responses on July 18, 2001, and its 2003 responses on May 20, 2003. The Federal Bureau of Investigation provided its 2001 survey responses on July 18, 2001, and its 2003 responses on May 28, 2003. The Federal Bureau of Prisons provided its 2001 survey responses on July 18, 2001, and its 2003 responses on May 22, 2003. The U.S. Marshals Service provided its 2001 survey responses on June 29, 2001, and its 2003 responses on May 19, 2003. The Federal Aviation Administration provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 29, 2003. The Federal Highway Administration provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 1, 2003. The Federal Motor Carrier Safety Administration provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 24, 2003. The Federal Railroad Administration provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 24, 2003. The Federal Transit Administration provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 24, 2003. The National Highway Traffic Safety Administration provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 24, 2003. The Bureau of Engraving and Printing provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 21, 2003. The Bureau of the Public Debt provided its 2001 survey responses on July 5, 2001, and its 2003 responses on April 21, 2003. June 28, 2001, and its 2003 responses on April 16, 2003. The Financial Management Service provided its 2001 survey responses on June 28, 2001, and its 2003 responses on May 19, 2003. The Internal Revenue Service provided its 2001 survey responses on July 20, 2001, and its 2003 responses on April 21, 2003. The Office of Thrift Supervision provided its 2001 survey responses on June 29, 2001, and its 2003 responses on June 9, 2003. The U.S. Mint provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 21, 2003. The Agency for International Development provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 22, 2003. The Central Intelligence Agency provided its 2001 survey responses on August 6, 2001, and its 2003 responses on May 30, 2003. The Corporation for National and Community Service provided its 2001 survey responses on July 20, 2001, and its 2003 responses on April 22, 2003. The Environmental Protection Agency provided its 2001 survey responses on June 28, 2001, and its 2003 responses on May 15, 2003. The Equal Employment Opportunity Commission provided its 2001 survey responses on August 1, 2001, and its 2003 responses on May 2, 2003. The Executive Office of the President provided its 2001 survey responses on October 1, 2001, and its 2003 responses on June 6, 2003. 
The Export-Import Bank provided its 2001 survey responses on September 20, 2001, and its 2003 responses on June 11, 2003. The Federal Deposit Insurance Corporation provided its 2001 survey responses on July 20, 2001, and its 2003 responses on April 18, 2003. The Federal Energy Regulatory Commission provided its 2001 survey responses on August 27, 2001, and its 2003 responses on May 12, 2003. The Federal Reserve System provided its 2001 survey responses on August 23, 2001, and its 2003 responses on April 23, 2003. The Federal Retirement Thrift Investment Board provided its 2001 survey responses on July 20, 2001, and its 2003 responses on July 9, 2003. The General Services Administration provided its 2001 survey responses on July 2, 2001, and its 2003 responses on April 23, 2003. The National Aeronautics and Space Administration provided its 2001 survey responses on July 25, 2001, and its 2003 responses on April 21, 2003. The National Credit Union Administration provided its 2001 survey responses on July 18, 2001, and its 2003 responses on April 10, 2003. The National Labor Relations Board provided its 2001 survey responses on August 9, 2001, and its 2003 responses on June 9, 2003. The Nuclear Regulatory Commission provided its 2001 survey responses on July 23, 2001, and its 2003 responses on April 21, 2003. The Office of Personnel Management provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 28, 2003. The Peace Corps provided its 2001 survey responses on July 20, 2001, and its 2003 responses on May 15, 2003. The Railroad Retirement Board provided its 2001 survey responses on July 11, 2001, and its 2003 responses on April 18, 2003. The Securities and Exchange Commission provided its 2001 survey responses on July 19, 2001, and its 2003 responses on April 22, 2003. The Small Business Administration provided its 2001 survey responses on June 29, 2001, and its 2003 responses on April 22, 2003. 2001, and its 2003 responses on April 21, 2003. The Social Security Administration provided its 2001 survey responses on July 3, 2001, and its 2003 responses on April 21, 2003. The U.S. Postal Service provided its 2001 survey responses on August 13, 2001, and its 2003 responses on April 21, 2003. To assess agency enterprise architecture management maturity levels, we developed two similar surveys, one addressed to departments and the other to component and independent agencies. These two surveys were largely identical, with the following differences: Throughout, questions referred to “departments” in the department survey and to “agencies” in the agency survey. Two questions on the department survey (questions 39 and 40) and three questions on the agency survey (questions 39 to 41) were addressed specifically to departments and agencies, respectively. The last five questions on the two surveys were numbered differently, since they followed the department- and agency-specific questions described above. Questions 41 to 45 on the department survey were numbered 42 to 46 on the agency survey. (Note, however, that these five questions were not used in the decision criteria described in app. III.) The following reproduced survey combines the two surveys into one display by using the phrase “agency/department” in places where one or the other term had been used in the separate surveys. It also displays both the two department questions and the three agency questions that were addressed specifically as described above. 
GAO is conducting a survey of federal departments' and agencies' enterprise architecture (EA) efforts to gauge progress towards meeting Clinger-Cohen Act and OMB requirements and to identify successes that can be shared with other federal agencies. There are two versions of this survey: one version is being sent to federal agencies, and a different version is being sent to cabinet-level departments. We are also asking that you provide the name and telephone number of a contact for your agency/department who can answer any questions we may have about your survey responses.

Enterprise architectures are well defined and enforced blueprints (i.e., descriptions) for operational and technological change. Such architectures provide a clear and comprehensive picture of an entity, whether it is an organization (e.g., federal department, agency, or bureau) or a functional area. This picture consists of (1) a description of the current ("as is") environment; (2) a description of the target ("to be") environment; and (3) a capital investment roadmap for transitioning from the current to the target environment (i.e., sequencing plan).

We are requesting departments and agencies to provide information from readily available data. We are not asking that extensive analyses be performed in order to respond to these questions. Please complete this survey and return it to GAO no later than April 21, 2003. You may return your completed survey and any supporting materials by E-mail, fax, or Federal Express. If you return your survey by E-mail, the address is [email protected]. If you return your survey by fax, the fax number is (202) 512-6450, Attn: Scott Pettis. If you return your survey by Federal Express, the address is Scott Pettis, Senior IT Analyst, 441 G St. NW, Rm. 4Y12, Washington, DC 20548. If you have any questions, please contact Scott Pettis.

1. Which of the following best describes your agency/department's status with respect to enterprise architecture? (Check one box.)
1. We have developed an enterprise architecture. Skip to question 3.
2. We do not have an enterprise architecture, but are in the process of developing one. Skip to question 3.
3. We do not have an enterprise architecture, but plan to develop one. Skip to question 3.
4. We do not plan to develop an enterprise architecture. Answer question 2.

2. Please explain why your agency/department does not plan to develop an enterprise architecture. (Enter your response in the box below.) If you were directed to answer question 2, you have completed the survey. Please return it as soon as possible. Thank you.

YOU SHOULD ANSWER THE FOLLOWING QUESTIONS IF YOUR AGENCY/DEPARTMENT HAS AN ENTERPRISE ARCHITECTURE, IS IN THE PROCESS OF DEVELOPING ONE, OR PLANS TO DEVELOP ONE.

3. Which of the following best describes the scope of your agency/department's completed, in-process, or planned enterprise architecture(s)? (Check all that apply and provide additional information if necessary.)
1. Agency/department wide, organization based (i.e., all mission and business functions)
2. Agency/department wide, function based (e.g., financial management, logistics management, grant management, etc.)
3. Non-agency/department wide, organization based
4. Non-agency/department wide, function based
If you checked box 3 or 4 above because your architecture is not agency/department wide, please list the organizations or functions covered by your enterprise architecture, and explain the basis for the defined scope. (Enter your response in the box below.)

4. Does (or will) this particular enterprise architecture include the following? (Check one box for each row.)
A description of the agency/department's current or "as is" environment, including the business operations, performance measurement, information/data, services/applications, and technology descriptions of the agency/department.
A description of the agency/department's future or "to be" environment, including the business operations, performance measurement, information/data, services/applications, and technology descriptions of the agency/department.
An explicit discussion of security in the "to be" environment.
A description of the sequencing plan for moving from the "as is" to the "to be" environment.
If you answered "No" to any of the items in question 4, please explain why. (Enter your response in the box below.)

Is your agency/department's enterprise architecture published? (Check one box and provide additional information if necessary.)
1. Yes. Please provide a list naming each enterprise architecture product/artifact with a brief description of each product/artifact.

7. Does your agency/department have a written and approved policy for the development, maintenance, and use of enterprise architecture? (Check one box for each row. If policy is written but not approved, please check "No.")
Development of the enterprise architecture
Maintenance of the enterprise architecture
Use of the enterprise architecture
If you checked "Yes" for development, maintenance, or use, please provide a copy of the written and approved policy.

8. Has your agency/department established committees or groups that represent the agency/department and have responsibility for the following? (Check one box for each row.)
Oversight of the enterprise architecture
Approval authority for the enterprise architecture
Other aspects of the enterprise architecture (Describe)
If yes, please provide a copy of the charter or comparable documentation.

9. Has your agency/department established an official program office with responsibility for the following? (Check one box for each row.)
Development of the enterprise architecture
Maintenance of the enterprise architecture
If yes, please provide a copy of the charter or comparable documentation.

10. Does your agency/department have an individual designated as the chief architect? (Check one box and provide additional information if necessary.)
1. Yes. Please provide this individual's name and phone number. Does this individual report to the chief information officer? If not, what position does the chief architect report to?
2. No. Skip to question 12.

11. Is your agency/department's chief architect responsible for each of the following? (Check one box for each row.)
Directing development of the enterprise architecture
Directing maintenance of the enterprise architecture
Please provide a position description or comparable document describing the chief architect's responsibilities.

12. Please provide the costs of developing and maintaining your enterprise architecture by the following major cost elements. (If you are in the process of developing your enterprise architecture, please enter data in all three columns.) [Cost table: columns for cost to date and estimated cost, if any, to complete; cost elements included "Other (describe)."]

13. Please quantify your agency/department's requested and approved enterprise architecture resources, including personnel (FTEs). If any gap exists between requested and approved resources for Fiscal Year 2001, 2002, or 2003, please answer question 14. Otherwise, proceed to question 15.

14. How much of an impact, if any, has the gap between enterprise architecture resources requested and resources finally approved had on your agency/department's enterprise architecture program? (Check one and provide additional information if necessary.)
1. Very adverse impact
2. Somewhat adverse impact
3. Moderate adverse impact
4. Slight adverse impact
5. No adverse impact
Please provide any additional details about the impact of any gap noted above. (Enter your response in the box below.)

15. Which of the following automated tools are being used for this enterprise architecture? For each tool being used, how satisfied or dissatisfied are you with it? (Check yes or no in each row; if a tool is being used, indicate how satisfied or dissatisfied you are with it.) [Tool response grid; the listed tools included an enterprise architecture management system (EAMS), Framework by Ptech Inc., and JCAPS by Logicon Inc.]

16. Which of the following model(s) or framework(s) (i.e., a formal structure for representing the enterprise architecture) is your agency/department using to develop this enterprise architecture? For each model or framework being used, how satisfied or dissatisfied are you with it? (Check yes or no in each row; if a model or framework is being used, indicate how satisfied or dissatisfied you are with it.) [Framework response grid; the listed frameworks included the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) Framework; the DOD Architecture Framework (DoDAF); the Federal Enterprise Architecture Framework (FEAF); the National Institute of Standards and Technology (NIST) Framework; and the Treasury Enterprise Architecture Framework (TEAF).]

17. Which of the following best describes how your agency/department's enterprise architecture was or is being developed? (Check one box and provide additional information if necessary.)
1. Developed in-house using contractor(s) support
2. Developed in-house without any contractor(s) support
3. Developed by contractor(s). Please provide the contractor's name(s).

18. Is your agency/department using an enterprise architecture development methodology or methodologies (i.e., a common set of procedures, such as Spewak's Enterprise Architecture Planning methodology, for developing enterprise architecture products)? (Check one box and provide additional information if necessary.)
1. Yes. Provide the following information about the enterprise architecture methodology or methodologies your agency/department is using.

19. To what extent was or is your agency/department's "business" side involved in developing the enterprise architecture? (Check one.)
1. Very great extent
2. Great extent
3. Moderate extent
4. Some or little extent
5. No extent

20. Was the current version (i.e., latest major release) of your agency/department's enterprise architecture submitted to and approved by the following entities? (Check one box in each row under "submitted" and "approved." If the enterprise architecture was submitted but not approved, please check "No." If no, indicate whether action is planned.)
Submitted to and approved by the chief information officer?
Approved by a committee or investment review board representing the enterprise?
Approved by the head of your agency/department?
Approved by other official or committee? (Please specify.)
Submitted to OMB?
Please provide documentation for each approval indicated above.

21. Do your agency/department's enterprise architecture products undergo independent verification and validation (IV&V)? (Check one box and provide additional information if necessary.)
1. Yes. If IV&V is contractor-provided, please provide a copy of the contractor's statement of work.

22. Do your agency/department's enterprise architecture management processes undergo independent verification and validation (IV&V)? (Check one box and provide additional information if necessary.)
1. Yes. If IV&V is contractor-provided, please provide a copy of the contractor's statement of work.
2. No

23. Does your agency/department periodically update its enterprise architecture products? (Check one box and provide additional information if necessary.)
1. Yes. If yes, please provide the date of the last update.
24. Is your agency/department's enterprise architecture under configuration management (i.e., a process for establishing and maintaining the integrity of work products)? (Check one box and provide additional information if necessary.)
1. Yes. If yes, please provide the date of the current version.

25. Does a process exist for formally managing changes to your agency/department's enterprise architecture? (Check one.)

26. Does your agency/department have a written and approved policy that requires that IT investments comply with the enterprise architecture? (Check one box and provide additional information if necessary. If policy is written but not approved, please check "No.")
1. Yes. Please provide a copy of the written policy. Continue with question 27.
2. No. Skip to question 28.

27. Does your agency/department permit waivers to its requirement that IT investments comply with the enterprise architecture? (Check one.)
1. Yes, only if the request provides a written justification
2. Yes, a waiver can be granted based on an informal request
3. No, the agency/department does not provide for waivers to this policy

28. Is your agency/department's enterprise architecture an integral component of your agency/department's IT investment management process? (Check one.)

29. To what extent do your agency/department's IT investments comply with the enterprise architecture? (Check one.)
1. Very great extent
2. Great extent
3. Moderate extent
4. Some or little extent
5. No extent

30. Was your agency/department's decision to develop an enterprise architecture based on (1) a business case that provided economic justification (i.e., benefits in excess of costs); (2) the need to comply with the Clinger-Cohen Act and/or OMB requirements; (3) the need to respond to the President's Management Agenda; and/or (4) some other factor(s) that was considered? (Check all that apply.)
1. A business case that anticipated a positive return
2. The need to comply with Clinger-Cohen and/or OMB requirements
3. The need to respond to the President's Management Agenda
4. Other factor(s). Please specify in the box below.

31. What benefits, if any, can be attributed to your agency/department's use of an enterprise architecture? If the benefit can be attributed to the use of an enterprise architecture, to what extent, if at all, has the benefit been attained thus far? (Check yes or no in each row; if yes, indicate the extent to which the benefit has been attained.) [Benefit response grid.]

32. To what extent, if at all, did the following challenges affect the development of your agency/department's enterprise architecture? (Check one box in each row.) [Challenge response grid.]

From this point, the agency and department surveys differ. [The department- and agency-specific questions are only partially recoverable. They asked, among other things, whether the department has policy or guidance governing its components' (e.g., bureaus') enterprise architecture development, maintenance, or use (1. Yes. Continue with question 40; please provide a copy of the policy or guidance with your response. 2. No. Skip to question 42.); to what extent the department provided oversight of the respondent's enterprise architecture efforts (e.g., oversight and approval processes); and whether the architecture was approved by the department's chief information officer.]

From this point, the questions are again the same for each survey, except for their numbering: on the department survey, each question number was one less than the numbering shown in the following (the numbering shown corresponds to that on the agency survey).
Note that none of the questions that follow were used in the decision criteria that determined the maturity stage assigned to any respondent (see appendix III for these criteria).

42. Overall, how satisfied or dissatisfied is your agency/department with OMB's direction and guidance to your agency/department regarding development, maintenance, and implementation of your enterprise architecture? (Question responses will be aggregated and not directly attributable to any agency/department.) (Check one and provide additional information if necessary.)
1. Very satisfied
2. Satisfied
3. Neither satisfied nor dissatisfied
4. Dissatisfied
5. Very dissatisfied
If you indicated that your agency/department is other than "Very satisfied" or "Satisfied," please describe why and what improvements are needed.

43. How satisfied is your agency/department with OMB's efforts to address the following enterprise architecture management challenges GAO reported in its February 2002 report (GAO-02-6)? (Question responses will be aggregated and not directly attributable to any agency/department.) (Check one box in each row and provide additional information if necessary.) [Satisfaction response grid for each challenge.] If you indicated that your agency/department is other than "Very satisfied" or "Satisfied" to any of the above, please describe why and what improvements are needed.

44. Do you agree or disagree with the following statements as they apply to OMB's Federal Enterprise Architecture (FEA)? (Question responses will be aggregated and not directly attributable to any agency/department.) (Check one box in each row.) [Agreement response grid; one statement concerned whether the agency/department's enterprise architecture will change as a result of the FEA.] If you indicated other than "Strongly agree" or "Agree" to any of the above, please describe why and what improvements are needed.

45. In your agency/department's opinion, what impact has the FEA had (or will the FEA have) on your agency/department's enterprise architecture? (Question responses will be aggregated and not directly attributable to any agency/department.) (Check one.)
1. Very positive impact
2. Generally positive impact
3. Neither positive nor negative impact
4. Generally negative impact
5. Very negative impact
6. No basis to judge

46. Please provide any additional comments on your agency/department's enterprise architecture program in the box below.

Thank you for your assistance. Please return your survey and any requested supporting materials to the E-mail address or fax number indicated on page 1.

In addition to the person named above, Barbara S. Collier, William B. Cook, Neal J. Doherty, Michael Holland, Catherine M. Hurley, Stuart M. Kaufman, Scott Pettis, and David B. Shumate made key contributions to this report.
A well-defined enterprise architecture (EA) is a blueprint for institutional modernization and evolution that consists of models describing how an entity operates today and how it intends to operate in the future, along with a plan for how it intends to transition to this future state. Such architectures are essential tools whose effective development and use are recognized hallmarks of successful organizations. Because of the importance of these architectures, GAO was asked to determine (1) what progress federal agencies have made in effectively developing, implementing, and maintaining their EAs and (2) the Office of Management and Budget's (OMB) actions to advance the state of EA development and use across the federal government. Federal agencies' progress toward effective EA management is limited. GAO surveyed federal agencies on their EA programs and compared the results with those of a similar survey that GAO conducted in 2001 (GAO-02-6). To assign a maturity level to agencies, GAO used its EA management maturity framework, which is a five-stage model that defines criteria that govern where an EA program stands in its progression toward being effectively managed (with Stage 1 being ineffective and Stage 5 being highly effective). Comparing the 2001 and 2003 survey results revealed a very similar overall picture, in which slight increases in agencies achieving Stage 3 status were offset by slight increases in agencies being at Stage 1. In addition, when GAO assessed the 2003 survey results against a recent update of the framework (GAO-03-584G), agencies' average maturity was slightly lower. An exception to this is the Executive Office of the President, which is a Stage 5 agency under the latest version of the framework. Part of the reason for this limited progress across the federal government is that agencies continue to face long-standing EA challenges, such as limited executive understanding of EA and a scarcity of skilled architecture staff. Since 2001, more agencies now report these as significant challenges. OMB has undertaken a variety of actions to advance the state of EA use across the federal government, such as collecting and analyzing architectures for major departments and agencies and requiring that major information technology (IT) investments comply with them. Additionally, OMB has developed parts of a governmentwide EA, and by requiring a mapping of agency architectures to this federal EA as part of the budget review process, it has called attention to the need for agencies to further their own architecture efforts. However, despite OMB's actions, the agencies' responses indicate that only about one-half are satisfied with OMB's leadership in addressing long-standing EA challenges. Until these challenges are effectively addressed, agencies' maturity levels as a whole are likely to remain stagnant, limiting their ability to effectively invest in IT.
A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access, use, destruction, or disruption. Organizations accomplish this objective by designing and implementing controls that are intended to, among other things, prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. At LANL, these assets include Category I special nuclear material, such as plutonium and highly enriched uranium; thousands of classified nuclear weapons parts and components; millions of classified documents; thousands of pieces of classified removable electronic media that contain nuclear weapon design information; over 100 vaults and vault-type rooms that store classified assets; and computer networks and the hardware on which these networks run that protect classified information as well as sensitive unclassified information. LANL is subject to a series of DOE security orders that outline requirements for implementing effective physical and cyber security protection strategies. These orders include an assessment of the potential size and capabilities of terrorist forces that could physically attack a laboratory and against which a laboratory must be prepared to defend. The orders further describe different levels of physical protection for sensitive and classified assets, depending on the risk they would pose if they were lost, stolen, or otherwise compromised. Appropriate physical protection safeguards include locks and keys, fences, means to detect unauthorized entry, perimeter alarms, vehicle barriers, and armed guards. In addition, the Congress enacted the Federal Information Security Management Act (FISMA) in December 2002 to strengthen the security of information and information systems across the federal government. FISMA requires each agency to develop, document, and implement an agencywide information security program that supports the operations and assets of the agency, including those provided or managed by another agency or contractor on its behalf. Examples of appropriate information security controls include user identification and authentication that allow computer systems to differentiate between users and verify their identities; cryptography that ensures the confidentiality and integrity of critical and sensitive information; configuration management that identifies and manages security features for all hardware, software, and firmware components of an information system and controls changes to them; and audit and monitoring controls that help establish individual accountability and monitor compliance with security policies. LANL is managed and operated by a corporate entity, Los Alamos National Security LLC (LANS). NNSA’s Los Alamos Site Office serves as the primary federal overseer of laboratory security performance. Annually, the Site Office determines how much money LANS will earn for its management of the laboratory according to a maximum available performance-based fee established in the laboratory’s contract. The Site Office bases its determination on the laboratory’s success in meeting the goals laid out in performance evaluation plans. These plans allocate portions of the maximum available performance award fee to NNSA performance objectives, including measures related to both physical and cyber security. In addition, two DOE organizations are required to periodically review physical and cyber security at LANL. 
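To make the first two FISMA control categories above concrete, the sketch below shows, in Python, one common way an identification-and-authentication control can be combined with cryptography: the system stores only a salted, iterated hash of each password and verifies a user's identity by recomputing that hash at login. This is a generic illustration of the technique, not a description of any control actually deployed at LANL.

```python
# Generic illustration of an identification-and-authentication control:
# the system stores a salted hash of each password, never the password
# itself, and verifies identity by recomputing the hash at login.
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Create a (salt, hash) record for a new user."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def authenticate(password: str, salt: bytes, stored: bytes) -> bool:
    """Verify a login attempt against the stored record."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(attempt, stored)  # constant-time comparison

salt, record = enroll("correct horse battery staple")
print(authenticate("correct horse battery staple", salt, record))  # True
print(authenticate("guess", salt, record))                         # False
```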
NNSA’s Los Alamos Site Office is required to conduct security surveys annually. These surveys are based on observations of performance, including compliance with DOE and NNSA security directives. In fiscal year 2008, the results of this survey are directly tied to NNSA’s performance evaluation plan, and are therefore a factor in LANS’ ability to earn the maximum available performance award fee. DOE’s Office of Independent Oversight also conducts evaluations, typically every 18 months for facilities that store Category I special nuclear material. These evaluations identify weaknesses in the laboratories’ security programs and produce findings that laboratory officials must take action to correct. The reviews overlap substantially, but each is required to provide a comprehensive assessment of the laboratory’s security programs. Physical security at LANL is in a period of significant improvement, and LANL is implementing over two dozen initiatives to reduce, consolidate, and better protect its classified assets, as well as reduce the physical footprint of the laboratory by closing unneeded facilities. LANL officials believe that these initiatives will reduce the risk of incidents that can result in the loss of control over classified assets. For example, to reduce and consolidate classified assets and its physical footprint, as of March 2008, LANL had (1) reduced from nine to one the number of areas containing Category I special nuclear material; (2) reduced the amount of accountable classified removable electronic media from 87,000 pieces to about 4,300 and made information previously accessible on removable media available only through the laboratory’s classified computer network; (3) eliminated about 30,000 classified nuclear weapon parts; and (4) reduced the number of vault-type rooms from 142 to 111. In addition, during fiscal year 2007, LANL reduced the physical footprint of existing facilities by over 500,000 square feet. In concert with these actions, LANL is implementing a series of engineered and administrative controls to better protect and control classified assets, such as removing the functions from classified computers that enable them to create new pieces of removable electronic media and streamlining physical security procedures to make them easier to implement across the laboratory. We found that DOE’s Office of Independent Oversight and the Los Alamos Site Office identified significant physical security problems at LANL that the laboratory had not fully addressed. Specifically, while LANL’s storage of classified parts in unapproved storage containers and its process for ensuring that actions to correct identified security deficiencies have been cited in external security evaluations for years, complete security solutions in these areas had not yet been implemented at the time of our review. In addition, external security evaluations had repeatedly identified concerns about the adequacy of LANL’s assessments of its own security performance. The security self-assessment program provides LANL with the opportunity to self-identify security deficiencies and address them before they can be exploited. External security evaluations found that LANL’s self-assessments were not comprehensive and did not include discussions of all internal findings. These evaluations also noted that findings identified through self-assessments were not always analyzed and addressed through corrective actions. 
At the time of our review, Los Alamos Site Office and DOE Office of Independent Oversight officials noted that LANL's self-assessment program was improving. LANL officials identified three management approaches that they asserted would sustain security improvements over the long term. However, these approaches were either in an early stage of development or contained important weaknesses that may impair their ability to ensure the sustainability of security improvements at the laboratory for the foreseeable future. First, LANL officials identified completing the management actions required by the Secretary of Energy's Compliance Order issued as a result of the October 2006 thumb drive incident as an approach to ensure that security improvements are sustained, yet the Compliance Order itself does not provide a mechanism to sustain security improvements over the long term. Second, LANL officials told us they will track the implementation of longer-term actions, including those required by the Compliance Order, by developing and implementing the Contractor Assurance System required under the LANS contract. However, the extent to which LANL can rely on the Contractor Assurance System to ensure the long-term sustainability of security improvements is unclear. According to a Los Alamos Site Office official, the Contractor Assurance System will not be fully completed for 3 to 4 years and, thus, will not be fully implemented by the time actions under the Compliance Order are completed. Finally, according to LANL officials, the laboratory plans to realize security improvements by meeting the security-related performance incentives in the annual performance evaluation plans NNSA uses to measure performance and determine an award fee for LANS. However, the annual performance evaluation plans focus principally on compliance with DOE requirements and do not sufficiently reward security program improvement. In that regard, according to a senior NNSA security official, compliance with current DOE requirements does not assure that LANL's security program is functioning effectively. Indeed, we found that all but $30,000 of the total $1.43 million fiscal year 2008 performance fee allocated to physical security was associated with LANL's achievement of compliance-oriented milestones, such as issuing plans, publishing policies, and completing equipment maintenance requirements. The management attention dedicated to improving physical security following the October 2006 thumb drive incident mirrors the level of attention that followed LANL's 2004 shutdown, when over 3,400 safety and security deficiencies were identified for correction. This shutdown lasted up to 10 months for some laboratory activities and cost as much as $370 million. Given how quickly LANL's security performance declined between the full resumption of laboratory activities in May 2005 and the discovery of the thumb drive on private property, LANL's ability to sustain the improved security posture it has recently achieved is unproven. Strong federal oversight will help ensure that these improvements are sustained. However, we reported that the Los Alamos Site Office suffers from a shortage of security personnel and lacks funding needed for training. Specifically, as of October 2007, the Los Alamos Site Office employed 13 security staff, enough for 1 person to oversee each of the topical areas the Site Office had to evaluate. This staffing level, officials said, was sufficient to cover only 15 percent of LANL's facilities.
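The fee figures cited above imply that compliance-oriented milestones accounted for nearly all of the physical security fee. The short calculation below, a sketch assuming the $30,000 was the only portion not tied to compliance-oriented milestones, makes the proportion explicit.

```python
# Share of the fiscal year 2008 physical security performance fee tied to
# compliance-oriented milestones, using the figures cited above.
total_fee = 1_430_000  # total fee allocated to physical security
other = 30_000         # assumed: the only non-compliance-oriented portion
print(f"{(total_fee - other) / total_fee:.1%}")  # -> 97.9%
```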
In April 2008, a senior security official at the Site Office said security staffing levels had decreased since October 2007. Furthermore, while NNSA had identified the need to train and certify Site Office security personnel in specific subject matters, according to Site Office officials no specific training funds had been made available. We made three recommendations to the Secretary of Energy and the Administrator of NNSA that, if effectively implemented, will improve physical security at LANL and help ensure that improvements LANL has achieved are sustained over the long term. Specifically, we recommended that LANL be required to develop a comprehensive strategic plan for laboratory security that addresses all previously identified security weaknesses and focuses on improving security program effectiveness. Furthermore, we recommended that NNSA provide meaningful financial incentives in future performance evaluation plans for implementation of this comprehensive strategic plan for laboratory security. In June 2008, the Committee requested that we review the security status at Livermore. This request came as a result of an evaluation by DOE's Office of Independent Oversight in April 2008, in which Livermore received the lowest possible ratings for protective force performance and for physical protection of classified resources. The evaluation also identified issues in other areas, such as security sensors and alarms, and security program management. We are currently verifying the findings of the evaluation and Livermore's actions to correct security deficiencies. Specifically: Self-assessment and performance assurance testing programs at Livermore need improvement. DOE's Office of Independent Oversight evaluations and Livermore Site Office security surveys found shortcomings in Livermore's fiscal year 1999, 2000, 2002, and 2008 self-assessment programs. In addition, Livermore and NNSA security officials acknowledged that a lack of comprehensive performance assurance testing was a significant contributing factor to the poor performance of Livermore protective forces during the April 2008 exercise. Between December 2006 and April 2008, Livermore did not hold an integrated performance assurance test of its protective forces or operationally test equipment key to the laboratory's protective strategy. During our visit to the laboratory 2 weeks ago, Livermore officials told us they are finalizing corrective action plans to address deficiencies in their performance assurance and self-assessment programs and have already conducted a significant number of performance assurance tests with the protective force and on equipment since the completion of the Office of Independent Oversight's 2008 evaluation. NNSA and the Livermore Site Office have not always provided effective security oversight. Six months before the Office of Independent Oversight's 2008 evaluation, the Livermore Site Office's 2007 annual security survey gave the laboratory a 100-percent satisfactory rating on its security performance, the highest possible rating. The results of the Office of Independent Oversight inspection not only differed markedly but also found that the Livermore Site Office survey was not comprehensive and that the ratings provided did not reflect what was actually observed. The Livermore Site Office is currently in the process of fundamentally rebuilding and restructuring its survey program and has embarked on a training program for its security personnel.
Though our observations are preliminary, Livermore appears to be experiencing difficulties similar to LANL’s in sustaining physical security performance. For example, in 1999, DOE’s Office of Independent Oversight identified significant weaknesses in Livermore’s programs to secure the laboratory’s Category I special nuclear material facility against a potential terrorist attack. Livermore then embarked on a major program to improve security and, according to the Office of Independent Oversight, addressed most issues by 2002. This improved level of security performance appears to have been sustained through 2006. Between December 2006—when Livermore’s protective force performed well in an exercise—and April 2008, security performance at Livermore declined. In response to the negative results of the 2008 Office of Independent Oversight evaluation, Livermore appears to be refocusing management attention on security performance. While our work is preliminary, we believe the actions taken by Livermore, the Livermore Site Office, and NNSA, if and when fully implemented, will address identified physical security issues. However, just as at LANL, sustaining attention on physical security performance will continue to be a challenge. LANL has implemented measures to enhance its cyber security, but weaknesses remain in protecting the confidentiality, integrity, and availability of information on its unclassified network. In particular, LANL has implemented a network security system that is capable of detecting potential intrusions on the network. However, LANL has vulnerabilities in several critical areas, including (1) identifying and authenticating users of the network, (2) encrypting sensitive information, (3) monitoring and auditing compliance with security policies, (4) controlling and documenting changes to a computer system’s hardware and software, and (5) restricting physical access to computing resources. For example, although LANL had implemented strong authentication measures for accessing the network, these measures were not always used. Once a user successfully accessed the network, the user could create a separate, simple password that would allow alternative access to certain sensitive information. Furthermore, LANL neither conducted comprehensive vulnerability scans of the unclassified network nor included sensitive applications in these scans, thus leaving the network at increased risk of compromise or disruption. In addition to these weaknesses, LANL’s computing facilities had physical security weaknesses and could be vulnerable to intentional disruption. Specifically, we observed lax restriction of vehicular traffic entering the laboratory and inadequate fencing. A key reason for the information security weaknesses we identified is that LANL has not yet fully implemented an information security program to ensure that controls are effectively established and maintained. Although LANL has implemented a security awareness training program, we identified a number of shortcomings in its overall information security management program. For example, (1) its risk assessment was not comprehensive, (2) specific guidance was missing from policies and procedures, (3) the network security plan was incomplete, (4) system testing had shortcomings, (5) remedial action plans were incomplete and corrective actions were not always timely, and (6) the network contingency plan was incomplete and inadequately tested. 
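The "separate, simple password" weakness described above is the kind of gap a basic password-complexity control is intended to close. The following Python sketch shows a minimal check of the sort a network could enforce on any alternative credential; the length and character-class thresholds are illustrative assumptions, not LANL or DOE policy.

```python
# Minimal sketch of a password-complexity control that would reject the
# kind of simple alternative password described above. The thresholds
# are illustrative, not an actual LANL or DOE requirement.
import string

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Require minimum length plus lower, upper, digit, and symbol classes."""
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return len(password) >= min_length and all(classes)

print(meets_policy("password1"))         # False: too short, no upper/symbol
print(meets_policy("V3ry!Long&Str0ng"))  # True
```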
Until LANL ensures that the information security program associated with its unclassified network is fully implemented, it will have limited assurance that sensitive data are adequately protected against unauthorized disclosure or modification or that network services will not be interrupted. Many of LANL’s cyber security deficiencies have been the subject of prior evaluations conducted by DOE’s Office of Independent Oversight and the Los Alamos Site Office. The most recent reports, covering fiscal years 2006 and 2007, documented significant weaknesses with LANL’s unclassified information security program, including foreign nationals’ access to the laboratory’s unclassified network. As of May 2008, LANL had granted unclassified network access to 688 foreign nationals, including about 300 from countries identified as sensitive by DOE, such as China, India, and Russia. In addition, foreign nationals from sensitive countries have been authorized remote access to LANL’s unclassified network. The number of foreign nationals who have access to the unclassified network has raised security concerns among some laboratory and NNSA officials because of the sensitive information contained on the network. According to LANL, the percentage of foreign nationals with authorized remote access to the unclassified network has steadily declined over the last 5 years. NNSA and LANL have not agreed on the level of funding necessary for protecting the unclassified network. From fiscal years 2001 through 2007, LANL spent $51.4 million to protect and maintain its unclassified network. Although LANL cyber security officials told us that funding has been inadequate to address some of their security concerns, NNSA officials raised questions about the basis for LANL’s funding request for cyber security. NNSA’s Chief Information Officer told us that LANL has not adequately justified requests for additional funds to address the laboratory’s stated shortfalls. In addition, NNSA officials informed us that LANL’s past budget requests were prepared on an ad hoc basis and were not based on well-defined threat and risk assessments. In response to these concerns, in fiscal year 2006, NNSA implemented a more systematic approach to developing cyber security budgets across the nuclear weapons complex, including LANL. This effort, however, does not provide guidance that clearly lays out funding priorities. Furthermore, NNSA does not consistently document resource allocation decisions and identify how funding shortfalls affect critical cyber security issues. To help strengthen information security controls over LANL’s unclassified network, we made a series of recommendations to the Secretary of Energy and the Administrator of NNSA, 11 of which focus on improving LANL’s information security program and determining resource requirements for the unclassified network. For example, we recommended that the Secretary of Energy and the NNSA Administrator require the Director of LANL to, among other things, (1) ensure that the risk assessment for the unclassified network evaluates all known vulnerabilities and is revised periodically and (2) strengthen policies with a view toward further reducing, as appropriate, foreign nationals’ access to the unclassified network, particularly those from countries identified as sensitive by DOE. We made an additional 41 recommendations in a separate report with limited distribution. 
These recommendations consist of actions to be taken to correct the specific information security weaknesses related to identification and authentication, cryptography, audit and monitoring, configuration management, and physical security that we identified. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or Members of the Subcommittee may have at this time. For further information on this testimony, please contact Gene Aloise at (202) 512-3481 or [email protected]; Nabajyoti Barkakati at (202) 512-6412 or [email protected]; and Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Jonathan Gill, Ed Glagola, Jeff Knott, and Glen Levis, Assistant Directors; Allison Bawden; Preston Heard; Tom Twambly; Ray Rodriguez; John Cooney; Carol Herrnstadt Shulman; and Omari Norman made key contributions to this testimony. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Los Alamos National Laboratory (LANL) is one of three National Nuclear Security Administration (NNSA) laboratories that designs and develops nuclear weapons for the U.S. stockpile. LANL employees rely on sensitive and classified information and assets that are protected at different levels, depending on the risks posed if they were lost, stolen, or otherwise compromised. However, LANL has experienced several significant security breaches during the past decade. This testimony provides GAO's (1) views on physical security at LANL, as discussed in Los Alamos National Laboratory: Long-Term Strategies Needed to Improve Security and Management Oversight, GAO-08-694 (June 13, 2008); (2) preliminary observations on physical security at Lawrence Livermore National Laboratory; and (3) views on cyber security at LANL as discussed in Information Security: Actions Needed to Better Protect Los Alamos National Laboratory's Unclassified Computer Network, GAO-08-1001 (Sept. 9, 2008). To conduct this work, GAO analyzed data, reviewed policies and procedures, interviewed laboratory officials, and conducted site visits to the two laboratories. Physical security at LANL is in a period of significant improvement, and LANL is implementing over two dozen initiatives to better protect its classified assets. However, while LANL's current initiatives address many physical security problems previously identified in external security evaluations, other significant security problems have received insufficient attention. In addition, the management approaches that LANL and NNSA intend to use to sustain security improvements over the long term are in the early stages of development or contain weaknesses. Furthermore, LANL's ability to sustain its improved physical security posture is unproven because (1) the laboratory appears not to have done so after a significant security incident in 2004, with another significant security breach in 2006, and (2) NNSA's Los Alamos Site Office--which is responsible for overseeing security at LANL--may not have enough staff or the proper training to execute a fully effective security oversight program. GAO's report made recommendations to help further improve physical security at LANL and ensure that these improvements are sustained over the long term. As a result of poor performance on an April 2008 physical security evaluation conducted by the Department of Energy's (DOE) Office of Independent Oversight, GAO is reviewing physical security at Lawrence Livermore National Laboratory (Livermore). GAO's preliminary observations are that Livermore appears to experience difficulties similar to LANL's in sustaining security performance. Furthermore, it appears that NNSA has not always provided effective oversight of Livermore. Specifically, an NNSA security survey conducted only 6 months prior to the April 2008 DOE evaluation gave Livermore the highest possible rating on its security program's performance. These results differ markedly from those documented by DOE's Office of Independent Oversight. LANL has implemented measures to enhance cyber security, but weaknesses remain in protecting information on its unclassified network. This network possesses sensitive information such as unclassified controlled nuclear information, export control information, and personally identifiable information about LANL employees. 
GAO found vulnerabilities in critical areas, including (1) identifying and authenticating users, (2) encrypting sensitive information, and (3) monitoring and auditing security policy compliance. A key reason for these information security weaknesses is that the laboratory has not fully implemented an information security program to ensure that controls are effectively established and maintained. Furthermore, deficiencies in LANL's policies and procedures raise additional concern, particularly with respect to foreign nationals' accessing the network from the laboratory and remotely. Finally, LANL cyber security officials told GAO that funding to address some of their security concerns with the laboratory's unclassified network has been inadequate. However, NNSA officials asserted that LANL had not adequately justified its requests for additional funds. GAO made 52 recommendations to help strengthen LANL's information security program and controls over the unclassified network.
Subject to EPA's oversight, state and local permitting agencies generally administer NSR and operate under one of two arrangements. Under the first arrangement, state and local agencies receive "delegated authority" from EPA, under which they implement EPA's NSR regulations. Under the second arrangement, states and localities are also responsible for administering NSR, but instead of implementing EPA's NSR regulations, state and local agencies develop plans, known as state implementation plans, that regulate the construction and modification of stationary sources. These plans provide assurances that the states and localities will have adequate personnel, funding, and authority under state law to carry out the plan, among other provisions. State implementation plans also must include NSR regulations that are at least as stringent as EPA's NSR regulations, although states and local agencies are authorized to include more stringent or additional requirements. States and localities must submit these plans, as well as any revisions to them, to EPA for approval. Once EPA approves the plans, they become federally enforceable requirements. Although this report focuses on NSR, the Clean Air Act and its implementing regulations subject electricity generating units to additional emissions control requirements. For example, the Acid Rain Program, created by the Clean Air Act Amendments of 1990, established a cap on the amount of sulfur dioxide that may be emitted by electricity generating units nationwide and authorizes those generating units to trade emissions allowances for sulfur dioxide. These facilities must also continuously monitor their emissions and report them to EPA. Furthermore, EPA has recently finalized or proposed several other regulations that will affect many fossil fuel generating units. These regulations include the (1) Mandatory Reporting of Greenhouse Gases rule, finalized in 2009, which established reporting requirements for greenhouse gas emissions above certain thresholds; (2) Cross-State Air Pollution Rule, finalized in 2011, which limits sulfur dioxide and nitrogen oxides emissions from a number of states that contribute significantly to nonattainment or interference with maintenance of certain national ambient air quality standards in downwind states; (3) National Emissions Standards for Hazardous Air Pollutants from Coal- and Oil-Fired Electric Utility Steam Generating Units, also known as the Mercury and Air Toxics Standards, which establish emissions limitations on mercury and other pollutants and were finalized on February 15, 2012; and (4) Standards of Performance for Greenhouse Gas Emissions for New Stationary Sources for Electric Utility Generating Units, proposed in April 2012, which would establish new source performance standards for emissions of carbon dioxide for certain new fossil fuel electricity generating units. EPA does not maintain complete information on NSR permits issued to fossil fuel electricity generating units. State and local permitting agencies track the NSR permits they issue, but EPA does not maintain data on these permits in a complete and centralized source of information, which limits the agency's ability to assess the impact of NSR. In addition, EPA has the opportunity to review and comment on every draft NSR permit issued by state and local permitting agencies, but the agency does not compile data on whether permitting authorities address EPA's comments.
The absence of this information makes it difficult for EPA to measure the impact of its comments and may impede its ability to assess how state and local permitting agencies may differ from EPA in their interpretation of NSR requirements. EPA does not maintain complete information on NSR permits issued for construction of new fossil fuel electricity generating units or for major modifications to existing units. State and local permitting agencies, which issue NSR permits in most parts of the country, track the NSR permits they issue. (Figure 1 describes the roles of state and local permitting agencies and EPA in issuing NSR permits.) State and local agencies vary widely in the types of data they collect on NSR permits and the systems they use to compile the data. Some states maintain detailed information on NSR permits in electronic form available on publicly accessible websites. For instance, in seven of the nine states where we conducted interviews, state officials maintain information online that can be used to identify the electricity generating units that have received NSR permits, as well as the requirements of the permits. However, this information is maintained in different formats across these states and cannot be readily compiled into a complete source of information on NSR permitting for the electricity generating sector. In addition to a lack of comprehensive permitting data, EPA and state and local agencies face other challenges in ensuring that owners of fossil fuel electricity generating units comply with requirements to obtain NSR permits. Many of the challenges stem from two overarching issues: (1) determining whether an NSR permit is required and (2) identifying instances where unit owners should have obtained NSR permits but did not. As a result, EPA's enforcement efforts involve long, resource-intensive investigations. A major challenge to EPA, states, and local agencies in ensuring NSR compliance is that it can be difficult for unit owners and regulators to know whether an NSR permit is needed, because NSR's rules governing applicability are complex and because NSR applicability is determined on a case-by-case basis. EPA and state officials we spoke with said that NSR as it applies to new units is fairly straightforward, because newly constructed units generally must obtain NSR permits before starting operation. In contrast, determining what constitutes a major modification of an existing unit, and, thus, what requires an NSR permit, is more complex. Under NSR regulations, owners are to apply for an NSR permit before making any physical or operational change that would result in a significant net increase of emissions. These changes, such as adding new equipment, must be evaluated in the specific context of the unit and its intended use. State officials and industry representatives we interviewed said it can be difficult to determine whether these activities trigger NSR because the two steps for determining applicability—first, whether the unit is making a physical or operational change and, second, whether this change would result in a significant net increase of emissions—are not categorically defined and have changed over time. The first step for determining NSR applicability can be complicated because the definition of "physical or operational change" excludes activities that are considered routine maintenance, repair, and replacement.
NSR regulations, first finalized in 1978, contained no description or definition of the “routine maintenance” exclusion, instead relying on a case-by-case approach that involves weighing several factors, including the nature, extent, purpose, frequency, and cost of proposed activities. Federal courts, however, have issued inconsistent decisions on whether the factors should be analyzed with respect to industry practice or a particular unit’s history. In 2003, in part because of concerns about the case-by-case approach, EPA finalized a rule that categorically excluded certain activities from NSR by defining them as “routine maintenance, repair, and replacement” to provide more certainty to generating units and permitting agencies. Specifically, the rule categorically deemed certain replacement activities to be routine maintenance, repair, and replacement if certain conditions were met, such as replacement activities’ costs not exceeding a specified threshold. In 2006, however, a federal appeals court struck down this rule because it was contrary to the plain language of the Clean Air Act. As a result, a case-by-case approach is still used to determine which activities qualify for the exclusion. Several state officials and industry representatives we interviewed said that the case-by-case approach makes it difficult to know when NSR applies. A number of industry representatives also said that uncertainty around NSR applicability can deter owners from making improvements to units that would improve efficiency. One senior EPA enforcement official we interviewed, however, noted that NSR regulations are written broadly to cover many disparate industries and said it would not be possible for EPA to develop detailed regulations tailored to each industry. One state official we spoke with also said that attempts to more precisely specify what activities are considered routine maintenance might not be worthwhile, since EPA’s previous efforts to do so were struck down in court. The second step in determining NSR applicability—assessing whether a change results in a significant net increase in emissions—presents additional complications. Like the routine maintenance exclusion, regulations governing what constitutes an increase in emissions have been subject to litigation, leading to changes in the process used to measure emissions increases over time. For example, in 1992, in response to a court decision, EPA finalized a regulation changing how future emissions from generating units are to be calculated. Rather than calculating future emissions based on a unit’s potential to emit, under the revised regulation, future emissions are calculated based, in part, on the maximum emissions that can be generated while operating the unit as it is intended to be operated and as it is normally operated. Some state officials and industry representatives we interviewed said that calculating emissions increases can be challenging because the regulations are complex, and EPA’s interpretation has changed over time. NSR’s complexity can be particularly difficult for owners of smaller generating units who may lack the legal and technical expertise to properly comply with NSR, according to an EPA official and industry representative we interviewed. 
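The structure of this two-step applicability test can be expressed in a minimal sketch, shown below in Python. All names and the 40 ton-per-year significance level are hypothetical illustrations, not EPA's actual values; in practice, both steps are case-by-case determinations, and the baseline and projection rules are pollutant-specific and have changed over time.

```python
# Hypothetical sketch of the two-step NSR applicability test described above.
SIGNIFICANCE_LEVEL_TPY = 40.0  # illustrative threshold, in tons per year

def emissions_increase(baseline_actual_tpy, projected_actual_tpy):
    # Step 2 input: projected emissions reflect how the unit is intended
    # to be operated and is normally operated, not its theoretical maximum.
    return projected_actual_tpy - baseline_actual_tpy

def nsr_applies(is_change, is_routine_maintenance,
                baseline_actual_tpy, projected_actual_tpy):
    # Step 1: the activity must be a physical or operational change that
    # does not qualify as routine maintenance, repair, and replacement.
    if not is_change or is_routine_maintenance:
        return False
    # Step 2: the change must yield a significant emissions increase
    # (before netting of contemporaneous changes, sketched further below).
    increase = emissions_increase(baseline_actual_tpy, projected_actual_tpy)
    return increase >= SIGNIFICANCE_LEVEL_TPY

# A non-routine change projected to add 60 tons per year triggers review.
print(nsr_applies(True, False, 1000.0, 1060.0))  # True
```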
EPA officials acknowledged that the process is not always simple, but they also noted that it is much easier for companies to make these calculations than for permitting agencies to verify them, since permitting agencies are less familiar with—and have less access to—information about a generating unit, its activities, and its data systems than the companies. According to several state officials and industry representatives we interviewed, assessing whether a change results in a significant net increase in emissions can also be complicated because EPA regulations authorize certain emissions increases to be excluded from this assessment—specifically, those emissions increases that are attributable to growth in demand. Several state officials we interviewed said that some owners have had difficulty distinguishing between emissions increases due to projected growth in demand and emissions increases resulting from the change to the unit, a process made more difficult because EPA has not offered clarification or guidance regarding this exclusion. One senior EPA enforcement official disagreed with this assessment, noting that utilities commonly employ models that help project demand as a way to guide their operations and investment decisions. According to this official, EPA's approach is based on methods already widely employed throughout the electricity sector. EPA and state agency officials, who are responsible for verifying owners' calculations when they apply for a permit or seek guidance on NSR applicability, said that verifications are further complicated by other NSR provisions that exclude certain activities from NSR. For example, a change that significantly increases a generating unit's emissions will not trigger NSR if it does not cause a net increase in emissions. Specifically, an NSR permit is not required if the increase in emissions resulting from a change is offset by certain contemporaneous emissions reductions, a process called "netting." EPA has defined "contemporaneous" as within 5 years before construction on the change commences, although states can define the term differently. Thus, an owner could compensate for an emissions increase in a given year by subtracting emissions decreases that were made in the previous 5 years, although any other emissions increases during that 5-year period must also have been included in the calculation. Several state agency officials we spoke with said that unit owners often pursue this option so they do not have to obtain an NSR permit and install costly emissions controls. Several EPA and state officials we interviewed also said, however, that it can be difficult to verify that calculations are valid, in part because they must rely on information provided by the unit owners. Some of these officials said it can be difficult to determine what types of emissions reductions and increases may be aggregated together under the netting option. One EPA regional office official said that, overall, options such as netting complicate and lengthen the permitting process because they require unit owners to submit additional documentation that the regulator must in turn review.
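To make the netting arithmetic concrete, the sketch below extends the applicability example above. The significance level and tonnages are hypothetical, the contemporaneous window is EPA's default of 5 years, and real calculations involve pollutant-specific baselines and creditability rules that can vary by state.

```python
# Hypothetical sketch of "netting": the project's emissions increase is
# summed with every creditable increase (+) and decrease (-) occurring in
# the contemporaneous window (by default, the 5 years before construction
# begins); NSR is triggered only if the *net* change is significant.
SIGNIFICANCE_LEVEL_TPY = 40.0      # illustrative, pollutant-specific
CONTEMPORANEOUS_WINDOW_YEARS = 5   # EPA default; states may define it differently

def triggers_nsr(project_increase_tpy, contemporaneous_changes_tpy):
    net_change = project_increase_tpy + sum(contemporaneous_changes_tpy)
    return net_change >= SIGNIFICANCE_LEVEL_TPY

# A 60 ton-per-year increase offset by a 50 ton-per-year reduction made
# 3 years earlier nets to 10 tons per year, below the threshold:
print(triggers_nsr(60.0, [-50.0]))         # False: no permit required
# But any other increase during the same window must also be counted:
print(triggers_nsr(60.0, [-50.0, 30.0]))   # True: net 40 tons per year
```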
To aid owners and regulators in determining when NSR should apply, EPA and state officials identified several sources of available guidance, including the following:

Consultations with state and local agencies. Before seeking a permit, owners of units can request assistance from state and local permitting agencies in determining whether NSR applies. Some state agency officials said that unit owners in their state regularly seek guidance, particularly on how to qualify for one of NSR's exclusions. However, other EPA and state officials we spoke with said that such requests are uncommon; many unit owners may hesitate to contact a regulatory agency because regulators may have a different interpretation of NSR that could require them to install costly emissions controls.

EPA's 1990 draft NSR workshop manual. Several state agency officials we spoke with said they rely on a draft EPA manual from 1990 issued as guidance for implementing the federal NSR permitting process, although the manual was never finalized and has not been updated.

Regionally maintained databases. Through one of its regional offices, EPA maintains an online database containing more than 600 EPA-issued policy and guidance documents. Several EPA and state officials we interviewed said that the database was helpful in providing current information on how to apply NSR, although one state official said that these determinations are not always consistent.

Court decisions. Several EPA and state permitting officials we interviewed said they rely primarily on court rulings for guidance on interpreting NSR regulations to ensure that their determinations are up-to-date.

EPA officials said that the agency's ability to generate comprehensive, nationwide guidance is limited because of the case-by-case nature of NSR, ongoing litigation, and the variation in NSR requirements across states. For example, some states and localities have adopted NSR requirements that are more stringent than the federal regulations. Furthermore, some states' regulations differ because they have not revised their state implementation plans to incorporate the 2002 NSR reforms or had those revisions approved by EPA. The second major challenge EPA and state and local agencies face in ensuring compliance with NSR is that it is often difficult for regulators to identify noncompliance—that is, instances where owners did not obtain NSR permits before making major modifications to their generating units. According to several EPA officials we interviewed, identifying noncompliance can be challenging because unit owners—not regulatory agencies—have responsibility for determining whether they need an NSR permit. Most owners do not ultimately obtain NSR permits before making changes to their units, according to EPA officials we interviewed, because the owners determine that the changes fall under one of NSR's exclusions, such as routine maintenance, or because they offset emissions increases through netting. These unit owners are generally not required to notify EPA or state or local permitting agencies when they use these exclusions. Therefore, EPA would not review the owners' determinations unless (1) the owner proactively sought a permit and the state or local permitting agency determined that an NSR permit was required or (2) EPA initiated an investigation. In instances where a unit did not apply for and receive a permit as required, it can take EPA several years to identify the noncompliance and take corrective action. Moreover, under an EPA rule finalized in 2007, known as the "reasonable possibility recordkeeping" rule, a unit owner who determines that a change will not trigger NSR is not required to keep records of the change and its resulting emissions unless the owner believes there is a reasonable possibility that the change could result in a significant emissions increase, and other conditions are met.
According to one state official we interviewed, this rule may complicate efforts to identify noncompliance because EPA and state regulators generally have to retroactively determine whether an NSR permit should have been obtained for past activities, and without the benefit of company records, such a determination is difficult. According to EPA and state officials we interviewed, state and local permitting agencies are generally not well positioned to identify noncompliance. State and local permitting agencies routinely inspect units, but officials told us these inspections focus on compliance with the terms of existing operating permits, not on whether an owner failed to obtain a permit. Several EPA and state officials told us that, given the complexity of most units, routine compliance inspections are not well suited to detect NSR violations, in part because it is difficult to distinguish work that might be considered a major modification from other work that is routine. According to one EPA official, to identify noncompliance with NSR, agency investigators need to identify what changes have already occurred; gather information on the nature of these changes; and determine whether NSR should have applied at the time the changes occurred, considering all possible exclusions and other factors. EPA officials we spoke with said that this process requires investigators to analyze information on historic emissions and a large volume of records on work conducted over the course of a unit's life. According to these and other EPA officials, such extensive review would not be possible during routine compliance inspections. Several state and EPA officials we spoke with also said that, given the complexity and case-by-case nature of NSR, state and local agencies generally do not have the resources—and in some cases expertise—to detect noncompliance. As a result, several state officials we spoke with said they rely on EPA to identify instances of noncompliance with NSR. EPA has therefore taken a lead role in enforcing NSR, beginning in the mid-1990s and continuing to the present. In 1996, EPA began targeting older, coal-fired generating units for compliance assessments and, on the basis of its investigations, alleged that several of the largest coal-fired electricity generating units in the country had violated NSR provisions by making major modifications without obtaining an NSR permit. In 1999 and early 2000, after receiving a number of cases from EPA, the Department of Justice (DOJ) filed seven enforcement actions in U.S. federal courts in what is known as EPA's Coal-Fired Power Plant Enforcement Initiative. For their part, owners of units targeted by the NSR enforcement initiative contended that, among other things, their projects should have qualified for the routine maintenance exclusion. Nonetheless, almost all of these cases ultimately resulted in settlements mandating the installation of emissions controls and civil penalties. Since then, EPA and DOJ have continued this enforcement initiative and secured additional settlements for alleged noncompliance with NSR. According to EPA, steps to develop an NSR enforcement case include: 1. Section 114 requests. Under Section 114 of the Clean Air Act, EPA may obtain information from owners of generating units to determine whether violations have occurred. Such information includes detailed cost information on capital construction projects suspected to be NSR violations. According to EPA officials, collecting and reviewing such information can take several months to over a year.
2. Settlement negotiations. After reviewing generating units' records, EPA determines whether NSR violations have occurred. If EPA determines that the unit is not in compliance, it will notify owners of generating units and encourage the owner to install emissions controls. EPA initially tries to resolve noncompliance through a settlement. 3. Referral. If settlement negotiations are unsuccessful, EPA will determine whether enough evidence exists to refer the case to DOJ for potential litigation. DOJ then reviews the accumulated evidence and determines whether there is merit to file suit against the company. Before filing the case in court, DOJ generally discusses the matter with the owner in a further attempt to settle. According to EPA and DOJ officials, EPA's investigations for NSR compliance, and subsequent enforcement actions, take a long time to conclude and involve substantial EPA resources. In instances where EPA's investigations have uncovered suspected violations, it can take years to litigate a case or bring it to conclusion through a settlement. Specifically, the 22 settlements resulting from EPA's enforcement initiative took, on average, 7 years to conclude. According to several industry representatives we interviewed, these efforts have also placed a large burden on owners and operators of generating units, given the amount of information required on past activities at the unit. Available data, while not complete, suggest that a substantial number of generating units have not complied with requirements to obtain NSR permits. Complete data on NSR compliance do not exist for two primary reasons. First, EPA has not yet investigated all electricity generating units for compliance with requirements to obtain NSR permits. Second, NSR compliance is determined at a point in time, and EPA's interpretation of compliance has, in some cases, differed from that of federal courts. Nonetheless, EPA has investigated a majority of coal-fired generating units, and data from these investigations suggest that a substantial number of generating units have not complied. From our review of relevant documentation and EPA-provided data, we identified two primary reasons why complete data on NSR compliance are not available. First, EPA has not yet investigated all generating units for NSR compliance, and second, available data do not provide a complete picture of compliance. EPA has investigated most—but not all—coal-fired generating units for compliance with NSR at least once. According to our review of EPA-provided documents and data, EPA has investigated 831 generating units at least once since it began its Coal-Fired Power Plant Enforcement Initiative. These 831 units represent about 81 percent of all coal-fired units that generated electricity in 2010 and about 24 percent of all fossil fuel-fired units (those using coal, natural gas, or oil) that produced electricity in 2010. Most natural gas units—as well as some smaller coal-fired units—have not been investigated by EPA. According to EPA officials we interviewed, the agency has focused most of its NSR compliance efforts on large, coal-fired units because they produce dramatically higher levels of harmful air emissions. Data on units investigated by EPA are not conclusive because compliance is determined at a point in time; therefore, subsequent changes to the unit could affect its future compliance with NSR. NSR is required each time an existing generating unit undertakes a major modification.
Thus, an owner of an electricity generating unit that has obtained an NSR permit in the past—or was subject to an EPA investigation—is not exempt from the requirement to obtain an NSR permit for any future major modifications. Moreover, allegations of noncompliance stemming from EPA's investigations do not necessarily mean that a violation has occurred, because in some cases federal courts have ultimately disagreed with EPA about the need for an NSR permit. Given these issues, it is difficult to provide a comprehensive assessment of NSR compliance at any given time. Although units must undergo NSR review for major modifications, some of the settlement agreements EPA has reached with electricity generating units include a provision precluding EPA, in certain circumstances, from suing the owner for making a major modification and not undergoing NSR. The units EPA has alleged to be noncompliant represent a substantial share of coal-fired units that produced electricity in 2010, and about 14 percent of all fossil fuel-fired units that produced electricity in 2010. According to EPA, the Coal-Fired Power Plant Enforcement Initiative is perhaps the most comprehensive and coordinated enforcement effort under the Clean Air Act to date. The initiative has led to 22 settlements covering a total of 263 units, or approximately 32 percent of the units EPA has investigated. According to our analysis of EPA data, the settlements will require affected unit owners to install and operate emissions controls costing an estimated $12.8 billion in total and levy civil penalties totaling around $80 million. Some companies are also required to fund environmentally beneficial projects, such as restoring watersheds and forests in national parks. These settlements are projected to reduce sulfur dioxide emissions by more than 1.8 million tons annually and nitrogen oxides emissions by about 596,000 tons annually; these reductions are to be phased in over an agreed-upon time frame, often 10 years. In some cases, EPA reached companywide settlements in which companies agreed to put emissions controls on units constituting most of their production capacity. Two of the largest settlements—with American Electric Power and the Tennessee Valley Authority—represent 105 units, around 40 percent of the total, and about $8.6 billion in control costs, or around two-thirds of the total. A senior Department of Justice official we interviewed said that, in addition to the 22 concluded settlements, 7 additional NSR cases are in various stages of litigation. See appendix III for more details on EPA's concluded NSR settlements. The substantial number of generating units EPA investigations have allegedly found to be noncompliant suggests that many generating units have not obtained NSR permits as required. Addressing NSR's complexity and improving compliance could reduce the need for long and resource-intensive enforcement actions and more effectively protect air quality by averting emissions before they occur. Yet EPA's ability to simplify NSR or develop comprehensive, nationwide guidance is limited for several reasons, including the case-by-case nature of NSR applicability, ongoing litigation, and the variation in NSR requirements across states. Nonetheless, EPA has an opportunity to improve its efforts by collecting more comprehensive NSR permitting data.
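As a quick check, the shares cited above can be reproduced from the figures in the text; small differences reflect rounding.

```python
# Verifying the settlement shares reported above (figures from the text).
units_settled = 263
units_investigated = 831
aep_tva_units = 105
control_costs_total = 12.8e9   # dollars
aep_tva_costs = 8.6e9          # dollars

print(f"{units_settled / units_investigated:.0%}")   # ~32% of investigated units
print(f"{aep_tva_units / units_settled:.0%}")        # ~40% of settled units
print(f"{aep_tva_costs / control_costs_total:.0%}")  # ~67%, about two-thirds of costs
```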
Several EPA regional offices maintain some information on the NSR permits issued by the state and local permitting agencies in their regions, but this information is in different formats and not compiled by EPA into a complete and centralized source of information on NSR permits issued nationwide, as recommended by the National Research Council in 2006. More complete information on NSR permitting would help EPA and external parties gauge the extent to which fossil fuel generating units have obtained NSR permits and help inform enforcement efforts that have already found widespread alleged noncompliance. In cases where unit owners apply for permits before making physical or operational changes that would result in a significant net increase of emissions, EPA plays an important role because it has an opportunity to comment on every draft NSR permit under consideration by state and local permitting agencies and to influence decisions about the appropriate level of pollution control, among other things. A key benefit of EPA's involvement in the permitting process is that the agency can review and comment on permits issued in different geographic areas and assess various aspects of draft permits, including the level of emissions control required. Because emissions controls can cost owners and operators of generating units hundreds of millions of dollars, EPA's review of the required level of emissions control is critically important. Although EPA regional and headquarters staff devote resources to commenting on draft permits, EPA does not track whether state and local permitting agencies incorporate the agency's comments. Without such information, EPA cannot fully assess the extent to which state and local agencies incorporate its comments in NSR permits or the extent to which emissions control requirements imposed by state and local permitting agencies reflect suggestions made by EPA in its comments. To help improve EPA's implementation of NSR, we recommend that the EPA Administrator direct the entities responsible for implementing and enforcing NSR—specifically, the Office of Enforcement and Compliance Assurance, Office of Air Quality Planning and Standards, and EPA regions—to take the following two actions:

Working with EPA regions and state and local permitting agencies, consider ways to develop a centralized source of information on NSR permits issued to fossil fuel electricity generating units, and

Using appropriate methods, such as sampling or periodic assessments, develop a process for evaluating the effects of its comments on draft NSR permits.

We provided a draft of this report to the Department of Energy, the Department of Justice, and the Environmental Protection Agency (EPA). The Department of Energy said it had no comments on the report's findings and recommendations. The Department of Justice provided technical comments, which we incorporated as appropriate. EPA provided written comments, a copy of which can be found in appendix IV. In its written comments, EPA agreed with the importance of having good systems for tracking and compiling information to efficiently and effectively administer its programs, while enhancing accountability and transparency, but disagreed with the need for the actions called for in our recommendations.
Regarding our first recommendation that EPA work with state and local permitting authorities to consider ways to develop a centralized source of information on permits issued to electric generating units, EPA said that it believes it has a number of permit tracking mechanisms in place, and raised four concerns about our recommendation. First, EPA said that it has maintained a centralized permit information database for many years—the RACT/BACT/LAER Clearinghouse, which is capable of capturing and sharing information on NSR permits that have been issued. However, EPA acknowledged that this database is incomplete—including about half of issued NSR permits—primarily because, in some areas, state and local agencies are not required to enter information about the permits they issue. Nonetheless, EPA said it is taking steps to improve participation. We continue to believe that comprehensive permitting data would enable EPA, Congress, and other interested parties to better understand the scope and impact of NSR. Second, EPA said that its regional offices track NSR permitting by the states in their jurisdiction and that the agency believes it is most appropriate for the regional offices, rather than headquarters, to be responsible for this information. However, our work found that the tracking of NSR permits by EPA's regional offices varied in completeness. For example, of the four regions we included in our sample, one region had a robust system for tracking issued NSR permits, and one had no system at all. EPA also said that its regional offices provide oversight of state and local agencies and that an EPA-wide compilation of permit data would be redundant, add costs, and provide little benefit to its oversight function. We continue to believe that a centralized source of complete information on NSR permits would enhance EPA's oversight of state and local permitting agencies and help ensure consistency across regions. EPA headquarters could build on the ongoing efforts of some regional offices and develop more complete data using a simple, low-cost system. For example, we found that two regional offices use a spreadsheet to compile and maintain basic data on permits issued by state and local agencies. Additionally, we believe that any costs incurred in developing more comprehensive data should be considered relative to the benefits that could accrue from having better information on the universe of permitted facilities, including, as noted by the National Research Council, the ability to assess the impact of policy changes. Third, EPA said that a centralized database of all NSR permits would not help most members of the public, who EPA said are interested in permits issued to specific facilities rather than the entire universe of all permits issued. Our report focused on the importance of more complete data to enhance programwide oversight of NSR permitting and targeting of enforcement efforts. More complete data could potentially assist the public and other interested parties in understanding the extent of NSR permitting for individual facilities, but this was not the basis of our findings and recommendations. We continue to believe that a centralized source of permitting data is important for EPA's oversight of state and local permitting agencies and to enhance its enforcement efforts. Fourth, EPA questioned the value of more comprehensive information in targeting noncompliance with requirements to obtain permits.
Specifically, EPA said that identifying noncompliance involves targeting facilities that should have obtained permits but did not and that information on facilities that have obtained permits would not assist in these efforts. Moreover, EPA said that getting data on noncompliant sources is time- and resource-intensive. We continue to believe that compiling complete information on facilities that have obtained permits could help identify facilities that have not obtained permits and enhance targeting of these facilities for potential noncompliance. We also believe that understanding which facilities have obtained permits as required could decrease these time and resource demands because the agency would have a better starting point for identifying noncompliance. Regarding our second recommendation that EPA develop a process for evaluating the effect of its comments on issued permits, the agency said that its regional offices already do so and described the interactions between these offices and state and local agencies during the permitting process. EPA also said that its regional offices already conduct oversight of state and local permitting agencies, including whether these agencies adequately address EPA's comments on draft permits. We acknowledge these efforts in the report and believe that, as part of its overall oversight of nationwide permitting efforts, EPA headquarters could benefit from a broader and more comprehensive assessment of the extent to which its comments on draft permits were adequately considered and incorporated. Because the terms of issued permits can result in the installation of pollution controls that cost hundreds of millions of dollars, it is important to conduct a higher-level review of issued permits to identify variability in permit terms across geographic areas. We therefore continue to believe that implementing this recommendation would enhance oversight of NSR permitting nationwide and that EPA has an opportunity to build on the information already collected through the oversight activities of its regional offices. EPA also provided technical comments that we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Administrator of EPA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact David Trimble at (202) 512-3841 or [email protected] or Frank Rusco at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To assess what information the Environmental Protection Agency (EPA) maintains on New Source Review (NSR) permits issued for fossil fuel electricity generating units, we gathered information from EPA and selected states on the status of their NSR permitting programs and efforts to collect and maintain permitting data.
We selected a nonprobability sample of nine states on the basis of (1) the number of older electricity generating units in the state; (2) the quantity of electricity generated by such units in those states; (3) the volume of sulfur dioxide, nitrogen oxides, and carbon dioxide emitted by units in those states; and (4) the region in which the generating unit was located. We obtained these data from the Ventyx Velocity Suite EV Market-Ops database, a proprietary database containing consolidated energy and emissions data from EPA, the Energy Information Administration (EIA), and other sources. To assess the reliability of the Ventyx data, we reviewed documentation provided by Ventyx and tested key variables to verify their accuracy and determined the Ventyx data to be sufficiently reliable for our purposes. The nine states we selected were Alabama, Georgia, Indiana, Kentucky, Missouri, New York, North Carolina, Ohio, and Pennsylvania. To assess how permitting information is collected and used, we reviewed relevant documentation from these nine states and from EPA. We also interviewed permitting officials from these nine states, the four EPA regional offices that oversee these states, EPA’s Office of Air and Radiation, its Office of Inspector General, and its Office of Enforcement and Compliance Assurance. In three of the states, some localities are responsible for NSR permitting; we also spoke with officials at two of those localities, which we selected on the basis of the number of older units in their jurisdictions. To examine what challenges, if any, EPA, state, and local agencies face in ensuring compliance by electricity generating units with requirements to obtain NSR permits, we reviewed relevant provisions of the Clean Air Act and NSR regulations; guidance and other information on implementing NSR maintained by EPA; and literature on NSR from government agencies, academic and research institutions, environmental organizations, and industry groups. We also interviewed knowledgeable officials and stakeholders from these agencies and institutions, as well as officials from the selected states and localities. To review what available data show about compliance with requirements to obtain NSR permits, we reviewed information published by EPA on the estimated rate of noncompliance by industrial sectors. We also reviewed information on EPA’s enforcement activities maintained by enforcement officials in EPA’s Office of Enforcement and Compliance Assurance, including (1) data on notices of violation sent to owners of generating units alleging noncompliance with NSR; (2) lawsuits filed in court for alleged NSR violations; and (3) information on the settlements concluded by EPA and the Department of Justice with owners of generating units, which ended or prevented lawsuits alleging noncompliance. To assess the reliability of the EPA-provided data, we interviewed knowledgeable agency officials and tested key variables to verify their accuracy. We determined these data to be sufficiently reliable for the purposes of our analysis. We also interviewed knowledgeable enforcement and compliance officials from EPA’s headquarters Office of Enforcement and Compliance Assurance and four regional offices. We conducted this performance audit from April 2011 to June 2012, in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individuals named above, Michael Hix (Assistant Director), Ellen W. Chu, Philip Farah, Cindy Gilbert, Jessica Lemke, Jon Ludwigson, Nancy Meyer, Mick Ray, and Jeanette Soares made key contributions to this report.
Electricity generating units that burn fossil fuels supply most of the nation’s electricity and are major sources of air pollution. Under the Clean Air Act, such units are subject to NSR, a permitting process that applies to (1) units built after August 7, 1977, and (2) existing units that undertake a major modification. Owners of such units must obtain from the appropriate permitting agency a preconstruction permit that sets emission limits and requires the use of certain pollution control technologies. EPA oversees states’ implementation of NSR, including reviewing and commenting on draft permits issued by state and local permitting agencies. GAO was asked to examine (1) what information EPA maintains on NSR permits issued to fossil fuel electricity generating units; (2) challenges, if any, that EPA, state, and local agencies face in ensuring compliance with requirements to obtain NSR permits; and (3) what available data show about compliance with requirements to obtain NSR permits. GAO reviewed relevant documentation and interviewed EPA, state, and local officials, as well as representatives from industry, research, and environmental groups. The Environmental Protection Agency (EPA) does not maintain complete information on New Source Review (NSR) permits issued to fossil fuel electricity generating units. State and local permitting agencies track the NSR permits they issue, but EPA does not maintain complete or centralized information on permits, despite a 2006 recommendation by the National Research Council that it do so. EPA maintains several databases that compile data on draft and issued NSR permits, but these sources are incomplete and thus cannot be used to identify all of the NSR permits that have been issued nationwide. In addition, EPA has the opportunity to review and comment on every draft NSR permit issued by state and local permitting agencies, but it does not compile data on whether the permitting agencies address EPA’s comments in final permits. The absence of more complete information on NSR permitting makes it difficult to know which units have obtained NSR permits or to assess how state and local permitting agencies vary from EPA in their interpretations of NSR requirements. Officials from EPA, state, and local agencies face challenges in ensuring that owners of fossil fuel electricity generating units comply with requirements to obtain NSR permits. Many of these challenges stem from two overarching issues. First, in some cases it is difficult to determine whether an NSR permit is required. NSR applicability depends on, among other factors, whether a change to a unit qualifies as routine maintenance, repair, and replacement; and whether the change results in a significant net increase in emissions. The rules governing NSR are complex, however, and applicability is determined on a case-by-case basis. Second, it is often difficult to identify noncompliance—instances where unit owners made a major modification without first obtaining an NSR permit—partly because owners of generating units determine whether a permit is needed, and in many cases their determinations are not reviewed by permitting agencies or EPA. State permitting agencies generally issue NSR permits, but EPA typically leads enforcement efforts, since identifying instances of noncompliance involves extensive investigations that go beyond the routine inspections conducted by state and local permitting agencies. 
EPA identifies NSR noncompliance through a lengthy, resource-intensive process that involves reviewing large amounts of information on units’ past emissions and construction activities. Available data on compliance, although incomplete, suggest that a substantial number of generating units did not comply with requirements to obtain NSR permits. Complete NSR compliance data do not exist for two main reasons: (1) EPA has not yet investigated all generating units for compliance, and (2) NSR compliance is determined at a point in time, and in some cases federal courts have disagreed with EPA about the need for an NSR permit. Nonetheless, EPA has investigated most coal-fired generating units at least once, and has alleged noncompliance at more than half of the units it investigated. Specifically, of the 831 units EPA investigated, 467 units were ultimately issued notices of violation, had complaints filed in court, or were included in settlement agreements. In total, EPA reached 22 settlements covering 263 units, which will require affected unit owners to, among other things, install around $12.8 billion in emissions controls. These settlements will reduce emissions of sulfur dioxide by an estimated 1.8 million tons annually, and nitrogen oxides by an estimated 596,000 tons annually. GAO recommends that EPA, among other actions, consider ways to develop a centralized source of data on NSR permits issued to electricity generating units. EPA expressed its commitment to filling gaps in its data systems, but disagreed with the actions GAO recommended. GAO believes that its recommendations would enhance oversight of NSR permitting and enforcement.
According to the Office of National Drug Control Policy (ONDCP), disrupting the illicit flow of drugs will reduce their availability, increase their cost, and, eventually, reduce the rate of illicit drug usage. One part of the ONDCP strategy to disrupt the illicit drug market focuses interdiction efforts on seizing cocaine and other illicit drugs in the transit zone that are bound for the United States (arrival zone) from South America (source zone). Virtually all of the cocaine shipped to the United States travels through the transit zone from South America—entering Central America, Mexico, and the Caribbean en route to the United States. The transit zone is a 6 million square mile area that encompasses Central America, Mexico, the eastern Pacific Ocean, the Gulf of Mexico, and the Caribbean Sea. The transit zone is divided into four maritime trafficking routes: Eastern Pacific, Western Caribbean, Central Caribbean, and Eastern Caribbean. Drug traffickers use go-fast boats, fishing vessels, submersible vessels, noncommercial aircraft, and other types of conveyances to smuggle cocaine from the source zone to Central America, Mexico, and the Caribbean en route to the United States. ONDCP's strategy for drug interdiction in the transit zone is focused on cocaine because ONDCP has identified cocaine as a leading drug threat to the United States. According to Coast Guard officials, the largest estimated share of cocaine has been smuggled through the Eastern Pacific and Western Caribbean routes of the transit zone for nearly two decades. The principal source of information about cocaine flow in the transit zone is the Consolidated Counterdrug Database (CCDB). According to the CCDB, in fiscal year 2013, approximately 84 percent of the estimated cocaine flow, as measured in metric tons, moved by noncommercial maritime means through these two routes. Figure 1 shows a map indicating the source, transit, and arrival zones—with the fiscal year 2013 estimated noncommercial maritime cocaine flow through the four smuggling routes and the locations of Puerto Rico and the U.S. Virgin Islands within the transit zone. As the southernmost points of entry into the United States and the only U.S. territories within the transit zone, Puerto Rico and the U.S. Virgin Islands are key entry points for illicit drugs being smuggled into the United States. Like the continental United States, Puerto Rico and the U.S. Virgin Islands are considered part of the arrival zone, yet they are located geographically within the Eastern Caribbean route of the transit zone. According to a 2011 Department of Justice National Drug Intelligence Center report, Puerto Rico and the U.S. Virgin Islands are attractive targets for illicit drug smuggling because of their proximity to the source zone and Puerto Rico's location within the United States' Customs zone. According to Coast Guard officials, the illicit drug flow through the Central and Eastern Caribbean routes generally consists of maritime smuggling from South America to the Dominican Republic and eventual transshipment to Puerto Rico (secondary flow) and, to a lesser extent, maritime smuggling directly from South America to Puerto Rico and the U.S. Virgin Islands (primary flow). CCDB drug flow estimates show that in fiscal year 2013, about 3 percent of the cocaine flow in the transit zone was smuggled toward Puerto Rico and the U.S. Virgin Islands. The Department of Justice has reported that most of this flow is destined for the continental United States—with the rest remaining on the islands for local consumption.
However, estimates indicate that illicit cocaine smuggling toward Puerto Rico and the U.S. Virgin Islands has increased each year since fiscal year 2009. For example, according to CCDB estimates, cocaine flow toward Puerto Rico and the U.S. Virgin Islands has more than doubled in recent years, from 6.4 metric tons in fiscal year 2009 to 17.3 metric tons in fiscal year 2013. Federal and local government officials in Puerto Rico and the U.S. Virgin Islands have raised concerns about the illicit drug flow and have identified it as a key contributor to the high levels of murder and other violent crime on the islands. In particular, homicide rates in the two territories have risen in recent years, and federal and local officials have linked the rise in homicide rates, in part, to illicit cocaine trafficking on the islands. According to a 2014 study by the United Nations Office on Drugs and Crime, the 2010 homicide rate in Puerto Rico was about 27 per 100,000 persons and in the U.S. Virgin Islands it was about 53 per 100,000 persons—more than 5 times (Puerto Rico) and 11 times (U.S. Virgin Islands) the U.S. national rate. According to a 2011 report issued by the High Intensity Drug Trafficking Area office that oversees the territories, most of this violence is associated with turf wars for control over the local drug market. The Joint Interagency Task Force-South (JIATF-S) relies on the Department of Homeland Security (DHS), specifically the Coast Guard and U.S. Customs and Border Protection (CBP), and the Department of Defense (Navy) to provide vessels and aircraft for conducting drug interdiction operations in the transit zone. JIATF-S also receives operational resources from allied countries, with Canada, the Netherlands, and the United Kingdom providing maritime detection and monitoring assistance. According to JIATF-S and Coast Guard officials, the JIATF-S strategy is to use its available vessel and aircraft resources to patrol the transit zone far from U.S. shores and close to the source zone countries in South America in order to increase the chances that interdictions involve larger load sizes and higher-purity cocaine than would otherwise be the case and to cause greater disruption to illicit drug-smuggling organizations. JIATF-S officials reported deploying the majority of available vessels and aircraft to patrol the Eastern Pacific and Western Caribbean routes of the transit zone because the routes have accounted for the largest drug flow—and therefore deploying resources to these routes will have the greatest impact on efforts to disrupt cocaine flow. The Coast Guard is the lead federal agency for maritime drug interdiction in the transit zone, and its operations with JIATF-S are a key element of the Coast Guard's counter-drug efforts. Overall, the Coast Guard is a major contributor of JIATF-S vessel and aircraft resources. The resources the Coast Guard provides to JIATF-S generally include major cutters, maritime patrol aircraft (planes), and helicopters capable of deploying airborne use of force (AUF). In addition, the Coast Guard provides JIATF-S with deployable specialized forces—specifically, Law Enforcement Detachments (LEDET)—embarked on U.S. naval and allied vessels. We discuss AUF and LEDETs in more detail later in this report. The Coast Guard's process for allocating drug interdiction resources is focused on meeting commitments for strategic priorities, such as JIATF-S transit zone operations, first, before dividing up its remaining resources between its Atlantic and Pacific Area Commands.
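These comparisons follow from simple per-100,000 arithmetic. In the sketch below, the 2010 U.S. national homicide rate is an assumed input (about 4.8 per 100,000, the value implied by the multiples cited in the text).

```python
# Reproducing the rate multiples and flow growth cited above.
US_NATIONAL_RATE_2010 = 4.8  # assumed, homicides per 100,000 persons

for place, rate in [("Puerto Rico", 27), ("U.S. Virgin Islands", 53)]:
    # e.g., 27 / 4.8 = 5.6x and 53 / 4.8 = 11.0x the national rate
    print(f"{place}: {rate / US_NATIONAL_RATE_2010:.1f}x the national rate")

# CCDB-estimated cocaine flow toward the islands, in metric tons:
print(f"{17.3 / 6.4:.1f}x growth, FY2009-FY2013")  # ~2.7x, more than doubled
```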
According to Coast Guard guidance and discussions with DHS officials, the Coast Guard determines resource levels—targets for the amount of time selected vessels, aircraft, and LEDETs are provided to JIATF-S—through an annual operational planning process that considers factors including resource requirements for strategic priorities, evolving maritime risks, and the availability of the Coast Guard's fleet of vessels and aircraft. JIATF-S requests for resource requirements specify the capabilities (types of vessels or aircraft) and corresponding capacities (number of days for vessels or resource hours for aircraft) for the Coast Guard, CBP, and the Department of Defense. The Coast Guard reviews JIATF-S resource requirement requests, sets resource deployment targets for JIATF-S, and communicates these targets to DHS for inclusion in a DHS-wide Statement of Intent of planned deployments to JIATF-S. The Coast Guard then allocates its remaining available resources to the Atlantic and Pacific Area Commands, which further allocate the resources for implementing the Coast Guard's 11 missions, including drug interdiction. See appendix I for more details on the Coast Guard's drug interdiction mission resource allocation process. Unlike with overall transit zone operations, JIATF-S does not oversee detection and monitoring efforts for drug smuggling in the U.S. territories. Rather, DHS has the lead federal responsibility for planning and coordinating operations to interdict the maritime flow of illicit drugs in Puerto Rico and the U.S. Virgin Islands because they are U.S. territories and part of the arrival zone. In addition to the Coast Guard, CBP's Puerto Rico-based Caribbean Air and Marine Branch conducts marine interdiction and patrol operations using a mix of planes, helicopters, and small boats for coastal drug interdiction operations, generally within U.S. territorial waters. The Puerto Rico Police Department also deploys small boats for drug interdiction operations. Overall, from fiscal years 2009 through 2013, the amount of resources the Coast Guard provided to JIATF-S—including vessels, aircraft, and LEDETs—varied. During this period, the Coast Guard generally did not meet annual targets for its primary drug interdiction mission performance measure. Coast Guard officials cited the declining readiness of the Coast Guard's aging major cutter fleet; delays in the delivery of new, more capable replacement cutters; and budget constraints, including sequestration, as key factors affecting the Coast Guard's ability to meet its resource deployment and drug interdiction mission performance targets. Figure 2 shows the key resources the Coast Guard uses to support drug interdiction operations. The Coast Guard's deployment of vessels to JIATF-S to carry out drug interdiction operations in the transit zone varied during fiscal years 2009 through 2012, and then sharply declined in 2013. Specifically, the Coast Guard's coverage targets—the planned number of days major cutters (national security cutters, high endurance cutters, and medium endurance cutters) are to operate under JIATF-S tactical control throughout the year—have varied since fiscal year 2009, and the Coast Guard has not fully met them. For example, according to Coast Guard documents, in fiscal year 2009, the Coast Guard's cutter coverage target was 2,555 days (the equivalent of 7 major cutters under JIATF-S tactical control throughout the year) and the Coast Guard provided 2,036 days—about 80 percent of its target.
In fiscal year 2013, the cutter coverage target was 2,008 days (or 5.5 major cutters) and the Coast Guard provided 1,346 days—about 67 percent of its target. Overall, the Coast Guard met an average of 76 percent of its annual JIATF-S cutter coverage targets during fiscal years 2009 through 2013. Figure 3 compares the Coast Guard's cutter coverage targets with the actual cutter days provided to JIATF-S for fiscal years 2009 through 2013. The Coast Guard's primary aircraft deployments to JIATF-S are long-range maritime patrol aircraft—generally the HC-130—to detect and monitor drug smuggling activity in the transit zone. The Coast Guard also deploys helicopters—generally modified MH-65s—with marksmen on board in what is known as airborne use of force. AUF-capable helicopters are deployed aboard major cutters and allied vessels to conduct short-range patrols and pursuit actions in the transit zone using marksmen who are trained to shoot out and disable the engines of fleeing drug-smuggling vessels—a capability JIATF-S and Coast Guard officials cite as being critical to drug interdiction success. Maritime patrol aircraft: According to Coast Guard data, the number of maritime patrol aircraft hours the Coast Guard provided to JIATF-S varied during fiscal years 2009 through 2012 and then declined in 2013, remaining below target levels throughout. As can be seen in figure 4, since fiscal year 2009, the Coast Guard's annual maritime patrol aircraft hour allocation target (the number of hours the aircraft are to be under JIATF-S tactical control) has been 4,700 hours. According to Coast Guard data, the Coast Guard approached the target in fiscal year 2011, when it provided 4,416 resource hours—or about 94 percent of its target. Since 2011, though, the Coast Guard has reduced the number of maritime patrol aircraft hours that it has provided to JIATF-S. Coast Guard officials attributed this reduction to a smaller HC-130 fleet size and maintenance needs, including modifications to extend the HC-130s' airframe life. In fiscal year 2013, the Coast Guard provided 3,506 maritime patrol aircraft resource hours—roughly 75 percent of its targeted level. Airborne use of force: The Coast Guard measures its deployment of AUF to JIATF-S in the number of days AUF-capable helicopters are deployed under JIATF-S tactical control. Coast Guard data show that AUF deployments to JIATF-S increased during fiscal years 2009 through 2012 and then declined in 2013, while remaining below target levels. Specifically, the Coast Guard's AUF deployments increased from 1,030 days in fiscal year 2009 to 1,232 days in fiscal year 2012, before declining to 1,169 days in fiscal year 2013. According to Coast Guard data, in fiscal year 2013, the Coast Guard's AUF deployment target was 1,460 days and the Coast Guard provided 1,169 days—approximately 80 percent of its target. Figure 5 shows the AUF deployment day targets compared with actual AUF days provided to JIATF-S during fiscal years 2009 through 2013. Beyond vessels and aircraft, the Coast Guard provides JIATF-S with LEDETs—specially trained personnel who deploy primarily aboard U.S. Navy and allied vessels to conduct maritime law enforcement operations such as boarding suspect vessels and taking custody of suspected drug smugglers in the transit zone. The Coast Guard is the only JIATF-S resource provider that has law enforcement authority and LEDET personnel deployed in maritime areas far from U.S. waters.
By deploying LEDETs on Navy and allied vessels, JIATF-S increases the resources it has available for apprehending suspected drug smugglers, their contraband, and their vessels. According to Coast Guard data, and as shown in figure 6, the Coast Guard's deployment of LEDETs to JIATF-S (as measured in days) varied from fiscal years 2009 through 2013, but experienced an overall decline during this time period. The Coast Guard has not met its LEDET allocation target levels to JIATF-S since establishing targets in fiscal year 2010. The Coast Guard provided its lowest LEDET allocation to JIATF-S in fiscal year 2013, when it provided 895 days, or just under half of its targeted level of 1,825 days. According to Coast Guard officials, the Coast Guard's ability to deploy LEDETs to JIATF-S is largely dependent on the availability of Navy and allied vessels, as discussed later in this report. The Coast Guard has generally not met targets for its primary drug interdiction performance measure—the removal rate for cocaine from noncommercial vessels in the maritime transit zone. According to Coast Guard officials, this measure focuses on transit zone drug operations because the Coast Guard's drug interdiction mission priority is removing illicit drugs as close to their origins in South America and as far from U.S. shores as possible, where drug shipments are in their most concentrated bulk form. The measure assesses the percentage of cocaine directly seized or observed being jettisoned, scuttled, or destroyed as a result of Coast Guard actions relative to the total known flow of cocaine through the transit zone using noncommercial maritime vessels, as estimated in the CCDB; a brief sketch of this computation appears below. According to Coast Guard data, since establishing performance targets for this measure in fiscal year 2009, the Coast Guard met its target in 1 year—fiscal year 2013, when it reported a cocaine removal rate of 15.3 percent in the transit zone, exceeding its performance target rate of 14.1 percent. Figure 7 shows the Coast Guard's performance in meeting this primary drug interdiction performance measure from fiscal years 2009 through 2013. The Coast Guard is supporting a DHS-wide effort to combat the growing level of violence associated with drug trafficking in Puerto Rico and the U.S. Virgin Islands. Specifically, in September 2012, DHS implemented Operation Caribbean Guard to address violence and drug trafficking into and within Puerto Rico and the U.S. Virgin Islands. The Coast Guard's role in this DHS-wide effort has been to increase vessel and aircraft operations to interdict the flow of drugs being trafficked by noncommercial maritime vessels toward the islands. Since September 2012, the Coast Guard's Seventh District has implemented a surge operation, known as Operation Unified Resolve, which has provided Sector San Juan—the Coast Guard field unit whose area of responsibility includes Puerto Rico and the U.S. Virgin Islands—with additional vessels and aircraft to regularly patrol Puerto Rico and the eastern approaches of the U.S. Virgin Islands. Operation Unified Resolve initially began as a surge operation, but in October 2013, the Coast Guard made it a standing operation—and, according to Coast Guard officials, established a new baseline for drug interdiction operations in support of Puerto Rico and the U.S. Virgin Islands. Under Operation Unified Resolve, the Coast Guard has placed special emphasis on targeting the primary and secondary flow of illicit drugs from South America to Puerto Rico and the U.S.
Virgin Islands. According to Sector San Juan officials, a key challenge for the Coast Guard is the relatively short distance between the Dominican Republic and Puerto Rico. For example, officials noted that it would take approximately 4 hours for a go-fast vessel to transit the 70 to 80 miles between the Dominican Republic and Puerto Rico. Coast Guard officials reported that this places a premium on the need for good intelligence on potential drug-smuggling vessels and the effective placement of assets to interdict them. According to Coast Guard officials, the Coast Guard’s decision to provide additional resources to Sector San Juan resulted from Coast Guard analyses that found Sector San Juan lacked sufficient vessels and aircraft to reduce maritime drug smuggling into Puerto Rico and the U.S. Virgin Islands. For example, according to an August 2012 Coast Guard memorandum, Sector San Juan’s fleet of vessels faced readiness concerns and lacked the capability to effectively conduct operations against the primary drug flow of go-fast boats smuggling illicit drugs from South America into Puerto Rico and the U.S. Virgin Islands. Further, the memorandum notes that the Coast Guard did not have maritime patrol aircraft permanently assigned to the territories. According to the memorandum, the only permanently assigned Coast Guard aircraft in Puerto Rico were helicopters based in the northwest corner of the island and their endurance and position made them impractical for patrolling the eastern approaches to Puerto Rico and the U.S. Virgin Islands. Coast Guard officials reported that the Coast Guard has not received additional resources to support Operation Unified Resolve. Rather, to implement the operation, the Coast Guard reported that it supplemented its annual allocation of vessels and aircraft to Sector San Juan by reallocating medium endurance cutters and maritime patrol aircraft to Puerto Rico from other locations within the Coast Guard—largely from the Coast Guard’s Seventh District. According to Coast Guard officials, these vessels and aircraft, in general, had previously been allocated for alien migrant interdiction operations. As noted earlier and as further described in appendix I, the Coast Guard’s process for allocating drug interdiction resources is focused on meeting commitments for strategic priorities, such as JIATF-S transit zone operations, first, before dividing up its remaining resources among its field locations such as Sector San Juan. In this way, the Coast Guard reported that the additional resources provided for Operation Unified Resolve did not come at the expense of its JIATF-S deployments. Beyond Operation Unified Resolve, the Coast Guard is scheduled to modernize Sector San Juan’s vessel fleet. According to Coast Guard officials, during fiscal years 2015 and 2016, the Coast Guard plans to replace Sector San Juan’s six 110-foot patrol boats with six new 154-foot fast response cutters (FRC). According to Coast Guard officials, the FRCs’ impact on the drug interdiction mission will be significant, as the FRC is expected to provide (1) increased interdiction capabilities; (2) improved sea keeping; (3) greater endurance; (4) the ability to deploy a pursuit-capable small boat; (5) improved weapons systems; and (6) improved command, control, and communications systems. Coast Guard officials reported that Sector San Juan would accommodate a mix of the new FRCs and 110-foot patrol boats until the 110-foot patrol boats are phased out by the end of fiscal year 2016. 
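As a rough illustration of the removal-rate measure described above, the following sketch divides the amount of cocaine removed (seized, or observed jettisoned, scuttled, or destroyed as a result of Coast Guard actions) by the CCDB-estimated flow of cocaine moved through the transit zone on noncommercial vessels. This is our sketch, not Coast Guard tooling; the tonnage values are hypothetical, chosen only to reproduce the reported 15.3 percent fiscal year 2013 rate against the 14.1 percent target.

```python
# Minimal sketch of the removal-rate measure (our illustration, not Coast
# Guard tooling): cocaine removed as a percentage of the CCDB-estimated
# noncommercial maritime cocaine flow through the transit zone.

def removal_rate(removed_tons: float, estimated_flow_tons: float) -> float:
    """Percent of the estimated cocaine flow removed by Coast Guard actions."""
    return 100.0 * removed_tons / estimated_flow_tons

TARGET_FY2013 = 14.1  # percent, per the report

# Hypothetical tonnage chosen to match the reported 15.3 percent rate:
rate = removal_rate(removed_tons=15.3, estimated_flow_tons=100.0)
print(f"{rate:.1f} percent; target met: {rate >= TARGET_FY2013}")
# 15.3 percent; target met: True
```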
According to senior officials from Sector San Juan, the additional resources Sector San Juan is utilizing for Operation Unified Resolve, along with the scheduled arrival of the six FRCs by the end of 2016, will put Sector San Juan in a better position to meet its mission needs. According to Coast Guard data, the total amount of vessel hours in support of drug interdiction operations in the Sector San Juan area of responsibility more than tripled in recent years—from 2,051 hours in fiscal year 2009 to 6,839 hours in fiscal year 2013. According to the data, much of the increase in vessel drug interdiction operational hours occurred from fiscal years 2012 through 2013, when the Coast Guard was implementing Operation Unified Resolve. Coast Guard data show that medium endurance cutters accounted for a rising share of the drug interdiction vessel operational hours, increasing from 3 percent in fiscal year 2011 to 28 percent in fiscal year 2013. In fiscal year 2013, drug interdiction operations accounted for 40 percent of reported medium endurance cutter and patrol boat hours in the Sector San Juan area of responsibility. According to Coast Guard officials, the number of vessel operational hours in support of the drug interdiction mission has risen since 2009 in response to increased drug-smuggling events and the additional resources provided for Operation Unified Resolve beginning in late fiscal year 2012. Figure 8 shows the total vessel hours (major cutter and patrol boat hours) the Coast Guard reported for conducting drug interdiction operations in the Sector San Juan area of responsibility during fiscal years 2009 through 2013, as well as the relative share of the vessel hours provided by Sector San Juan and other Coast Guard locations. According to Coast Guard data, maritime patrol aircraft resource hours reported for drug interdiction operations in the Sector San Juan area of responsibility declined during fiscal years 2009 through 2011, before increasing considerably in fiscal years 2012 and 2013. For example, in fiscal year 2011, the Coast Guard reported conducting 148 flight hours patrolling Puerto Rico and the U.S. Virgin Islands; this number more than tripled to 502 hours in fiscal year 2012 and then nearly doubled to 1,000 hours in fiscal year 2013. The Coast Guard attributes this considerable increase in flight hours in recent years to the additional aircraft provided in support of Operation Unified Resolve. Since implementing Operation Unified Resolve in September 2012, the Coast Guard has conducted surveillance patrols of Puerto Rico and the U.S. Virgin Islands using maritime patrol aircraft and crews forward deployed from Coast Guard field locations in the continental United States. Figure 9 shows the Coast Guard's maritime patrol aircraft hours in support of drug interdiction operations in the Sector San Juan area of responsibility during fiscal years 2009 through 2013. Coast Guard officials reported that the additional resources the Coast Guard provided for Operation Unified Resolve have led to increasing interdictions of illicit drugs being smuggled in and around Puerto Rico and the U.S. Virgin Islands. According to Coast Guard officials, as of March 25, 2014, Operation Unified Resolve had led to the removal of 32,669 kilograms of cocaine and roughly 11,000 pounds of marijuana.
Further, Coast Guard officials reported that since deploying additional vessels and aircraft for Operation Unified Resolve in September 2012, the Coast Guard has found the estimated primary flow of cocaine into Puerto Rico to be considerably higher than previously thought. For example, according to CCDB data provided by the Coast Guard, the estimated noncommercial maritime primary flow of cocaine toward Puerto Rico and the U.S. Virgin Islands more than doubled, from 7.1 metric tons in fiscal year 2012 to 14.9 metric tons in fiscal year 2013. A subsequent figure in the report shows the estimated primary and secondary noncommercial maritime drug flow toward Puerto Rico and the U.S. Virgin Islands during fiscal years 2009 through 2013. We are not making recommendations in this report. We provided a draft of this report to DHS, the Department of Justice, ONDCP, and JIATF-S for review and comment. We received technical comments that we have incorporated, as appropriate. We are sending copies of this report to the Secretary of Homeland Security, the Commandant of the Coast Guard, and appropriate congressional committees. In addition, this report is available at no charge on GAO's website at http://gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix III. This appendix provides a summary of the Coast Guard's process for allocating vessels, aircraft, and other resources for its drug interdiction mission. The Coast Guard's process for allocating drug interdiction resources is focused on meeting commitments for strategic priorities, including for the Joint Interagency Task Force South (JIATF-S)—a reporting unit of the Department of Defense's Southern Command that oversees detection and monitoring operations of drug-smuggling events in the transit zone. The Coast Guard allocates drug interdiction resources for these strategic priorities first, before dividing its remaining resources among its Atlantic and Pacific Area Commands for further allocation to Coast Guard districts and sectors across the United States. The Coast Guard determines the targets for the amount of time selected vessels, aircraft, and law enforcement detachments (LEDET) are provided to JIATF-S for transit zone operations through an annual operational planning process that considers factors including resource requirements for strategic priorities, evolving maritime risks, and the availability of vessels and aircraft. Through this process, the Coast Guard reviews JIATF-S resource requests and sets resource targets. The Coast Guard then allocates the remaining resources among its field locations across the United States for implementing its 11 missions, including drug interdiction. In general, the Coast Guard's annual drug interdiction resource allocation planning process includes four steps. First, JIATF-S submits its resource allocation requirements for meeting National Drug Control Strategy targets to the Department of Homeland Security (DHS) and the Department of Defense, as directed by the National Interdiction Command and Control Plan. These requirements specify the capabilities (types of vessels or aircraft) and corresponding capacities (number of days for vessels or resource hours for aircraft).
The DHS Office of Policy's Counter Illicit Trafficking Section communicates the resource requests to the Coast Guard with resource hour requests for Coast Guard cutters, boats, aircraft, and LEDETs. Second, Coast Guard planners determine the amount of resources that the Coast Guard intends to provide in the upcoming fiscal year. The Coast Guard considers its support for JIATF-S drug interdiction operations as one of three strategic commitment priorities. In this way, Coast Guard planners determine the number of vessel days and aircraft hours to provide to JIATF-S before allocating remaining vessels and aircraft to field locations across the United States for other missions (as described in more detail below). The Coast Guard determines its JIATF-S resource targets based on various factors, including strategic priority and resource availability. Third, the Coast Guard provides its JIATF-S resource target—or Statement of Intent—to the DHS Office of Counter Illicit Trafficking, which liaises with JIATF-S and the Office of National Drug Control Policy (ONDCP). The Statement of Intent details the target levels of resources the Coast Guard intends to provide to JIATF-S for the next fiscal year, outlining asset availability targets for major cutters, maritime patrol aircraft, and other resources, such as deployable forces. DHS then combines the Coast Guard Statement of Intent with that of Customs and Border Protection (CBP) and submits an overall DHS Statement of Intent to ONDCP and JIATF-S. Fourth, after allocating resources for JIATF-S and other strategic commitments, the Coast Guard divides its remaining resource hours for vessels and aircraft between its Pacific and Atlantic Area Commands. Coast Guard officials reported that the Coast Guard's field units use a greater variety of vessels for coastal drug interdiction operations than those provided to JIATF-S. These generally include the 110-foot patrol boats in addition to a variety of smaller boats. For example, whereas the Coast Guard generally provides major cutters to JIATF-S, field units rely on a greater variety of smaller vessels to conduct coastal drug interdiction operations because the missions are conducted much closer to shore than are JIATF-S operations. Outside of JIATF-S, the Coast Guard's Seventh District (headquartered in Miami, Florida, and having responsibility for the Caribbean area including Puerto Rico and the U.S. Virgin Islands) and Eleventh District (headquartered in Alameda, California, and having responsibility for the Eastern Pacific area, including coastal areas from the U.S.-Mexico border to South America) have accounted for the largest shares of the Coast Guard's drug interdiction resource hours. According to Coast Guard officials, these districts' areas of responsibility include high drug-trafficking areas, and therefore drug interdiction accounts for a larger mission focus than at other Coast Guard districts. From fiscal years 2009 through 2013, the Coast Guard's budget included about $1.2 billion per year for its drug interdiction mission. This mission accounted for between 10 and 12 percent of the Coast Guard's budget during this time, as the brief sketch below illustrates. The Coast Guard reported, based on the enacted fiscal year 2014 budget, that its fiscal year 2014 estimate to perform the drug interdiction mission is $1,305,271,000. Figure 10 shows the flow of the Coast Guard drug interdiction resource allocation process.
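The budget-share arithmetic above can be checked with a short sketch. This is our illustration; only the roughly $1.2 billion annual mission figure and the 10 to 12 percent share come from the report, and the implied total budget is derived from them.

```python
# Illustrative arithmetic (our sketch): what total Coast Guard budget the
# reported 10-12 percent drug interdiction share implies, given a roughly
# $1.2 billion annual mission budget. The implied totals are derived values.

MISSION_BUDGET = 1.2e9  # dollars per year, fiscal years 2009-2013 (approx.)

for share in (0.10, 0.12):
    implied_total = MISSION_BUDGET / share
    print(f"{share:.0%} share implies a total budget of ~${implied_total / 1e9:.0f} billion")
# 10% share implies a total budget of ~$12 billion
# 12% share implies a total budget of ~$10 billion
```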
This appendix identifies and describes the Department of Homeland Security (DHS) component agencies involved in Operation Caribbean Guard. In September 2012, DHS implemented Operation Caribbean Guard to intercept illegal weapons, drugs, and money flowing to and from Puerto Rico and the U.S. Virgin Islands. Operation Caribbean Guard is a DHS-wide surge effort involving multiple component agencies. Table 2 identifies DHS component agencies involved in Operation Caribbean Guard and reported examples of actions they have taken. In addition to the contact named above, Christopher Conrad (Assistant Director), Jason Berman, Michele Fejfar, Eric Hauswirth, Susan Hsu, Tracey King, and Lerone Reid made key contributions to this report. Coast Guard: Observations on Progress Made and Challenges Faced in Developing and Implementing a Common Operational Picture. GAO-13-784T. Washington, D.C.: July 31, 2013. Coast Guard: Clarifying the Application of Guidance for Common Operational Picture Development Would Strengthen Program. GAO-13-321. Washington, D.C.: April 25, 2013. International Affairs: Status of Funding, Equipment, and Training for the Caribbean Basin Security Initiative. GAO-13-367R. Washington, D.C.: March 20, 2013. Coast Guard: Portfolio Management Approach Needed to Improve Major Acquisition Outcomes. GAO-12-918. Washington, D.C.: September 20, 2012. Coast Guard: Legacy Vessels' Declining Conditions Reinforce Need for More Realistic Operational Targets. GAO-12-741. Washington, D.C.: July 31, 2012. Observations on the Coast Guard's and the Department of Homeland Security's Fleet Studies. GAO-12-751R. Washington, D.C.: May 31, 2012. Coast Guard: Action Needed as Approved Deepwater Program Remains Unachievable. GAO-11-743. Washington, D.C.: July 28, 2011. Coast Guard: Deepwater Requirements, Quantities, and Cost Require Revalidation to Reflect Knowledge Gained. GAO-10-790. Washington, D.C.: July 27, 2010. Drug Control: Cooperation with Many Major Drug Transit Countries Has Improved, but Better Performance Reporting and Sustainability Plans Are Needed. GAO-08-784. Washington, D.C.: July 15, 2008. Drug Control: Agencies Need to Plan for Likely Declines in Drug Interdiction Assets, and Develop Better Performance Measures for Transit Zone Operations. GAO-06-200. Washington, D.C.: November 15, 2005.
One part of the U.S. National Drug Control Strategy is to disrupt the flow of cocaine through the transit zone. Puerto Rico and the U.S. Virgin Islands, the only U.S. territories located geographically within the transit zone, have served as entry points for cocaine destined for the continental United States. In recent years, federal and local government agencies have cited growing levels of violent crime in these territories and attribute this violence to illicit drug trafficking. Within DHS, the U.S. Coast Guard is the lead federal agency for maritime drug interdiction and a key provider of resources to support drug interdiction operations in the transit zone and the two territories. GAO was asked to examine the Coast Guard's drug interdiction efforts in the transit zone, Puerto Rico, and the U.S. Virgin Islands. This report addresses (1) trends in the Coast Guard's deployment of resources in the transit zone and the extent to which the Coast Guard met its performance targets; and (2) actions taken by the Coast Guard to combat drug smuggling into Puerto Rico and the U.S. Virgin Islands, and trends in vessel and aircraft deployments. GAO analyzed Coast Guard data for fiscal years 2009 through 2013 on drug interdiction resource deployments and mission performance, and interviewed Coast Guard and DHS officials involved in drug interdiction operations. The Coast Guard provided varying levels of resources for drug interdiction operations in the “transit zone”—the area from South America through the Caribbean Sea and the eastern Pacific Ocean that is used to transport illicit drugs to the United States—during fiscal years 2009 through 2013, and generally did not meet its performance targets for several reasons. As the figure shows, Coast Guard resources included vessels (cutters), aircraft, and law enforcement detachments. The number of cutter days, aircraft hours, and law enforcement detachment days the Coast Guard provided for drug interdiction operations in the transit zone varied during fiscal years 2009 through 2012, and then sharply declined in fiscal year 2013. For example, in fiscal year 2012, the Coast Guard provided 1,947 cutter days for transit zone operations and in fiscal year 2013 the Coast Guard provided 1,346 days—a 30 percent decline. During fiscal years 2009 through 2013, the Coast Guard met targets for its primary drug interdiction mission performance measure—the removal rate of cocaine from noncommercial vessels in the transit zone—once, in fiscal year 2013. Coast Guard officials cited the declining readiness of its aging vessels, delays in the delivery of replacement vessels, and sequestration as factors affecting Coast Guard resource deployments and the ability to meet its drug interdiction mission performance targets. In support of a Department of Homeland Security (DHS) effort to address the increased violent crime associated with illicit drug smuggling into Puerto Rico and the U.S. Virgin Islands, the Coast Guard has increased vessel and aircraft operations for drug interdiction efforts in these territories by reallocating resources from elsewhere in the Coast Guard. According to Coast Guard officials, these additional resources are drawn from other missions, such as alien migrant interdiction. Beginning in September 2012, the Coast Guard implemented a surge operation to provide additional vessels and aircraft to regularly patrol Puerto Rico and the U.S. Virgin Islands. 
According to Coast Guard officials, the increased vessel and aircraft deployments have since become the new baseline level of resources to be provided for drug interdiction operations there. According to Coast Guard data, the number of vessel hours spent conducting drug interdiction operations in these territories more than tripled from fiscal years 2009 through 2013. Similarly, the number of maritime patrol aircraft hours spent conducting drug interdiction operations in the territories increased—from about 150 flight hours in fiscal year 2011 to about 1,000 hours in fiscal year 2013. GAO is not making recommendations in this report. DHS provided technical comments on a draft of this report, which were incorporated, as appropriate.
In July 2002, President Bush issued the National Strategy for Homeland Security. The strategy set forth overall objectives to prevent terrorist attacks within the United States, reduce America's vulnerability to terrorism, and minimize the damage and assist in the recovery from attacks that occur. The strategy set out a plan to improve homeland security through the cooperation and partnering of federal, state, local, and private sector organizations on an array of functions. The National Strategy for Homeland Security specified a number of federal departments, as well as nonfederal organizations, that have important roles in securing the homeland. In terms of federal departments, DHS was assigned a leading role in implementing established homeland security mission areas. In November 2002, the Homeland Security Act of 2002 was enacted into law, creating DHS. This act defined the department's missions to include preventing terrorist attacks within the United States; reducing U.S. vulnerability to terrorism; and minimizing the damage from, and assisting in the recovery from, attacks that occur within the United States. The act also specified major responsibilities for the department, including to analyze information and protect infrastructure; develop countermeasures against chemical, biological, radiological, nuclear, and other emerging terrorist threats; secure U.S. borders and transportation systems; and organize emergency preparedness and response efforts. DHS began operations in March 2003. Its establishment represented a fusion of 22 federal agencies to coordinate and centralize the leadership of many homeland security activities under a single department. A variety of factors have affected DHS's efforts to implement its mission and management functions. These factors include both domestic and international events, such as Hurricanes Katrina and Rita, and major homeland security-related legislation. Figure 1 provides a timeline of key events that have affected DHS's implementation. Our report assesses DHS's progress across 14 mission and management areas. We based these areas on those identified in the National Strategy for Homeland Security, the goals and objectives set forth in the DHS strategic plan and homeland security presidential directives, our reports, and studies conducted by the DHS IG and other organizations and groups, such as the 9/11 Commission and the Century Foundation. The 14 areas we identified are (1) border security; (2) immigration enforcement; (3) immigration services; (4) aviation security; (5) surface transportation security; (6) maritime security; (7) emergency preparedness and response; (8) critical infrastructure and key resources protection; (9) science and technology; (10) acquisition management; (11) financial management; (12) human capital management; (13) information technology management; and (14) real property management. For each mission and management area, we identified performance expectations and vetted them with DHS officials. These performance expectations are a composite of the responsibilities or functions—derived from legislation, homeland security presidential directives and executive orders, DHS planning documents, and other sources—that the department is to achieve. Our analysts and subject matter experts reviewed our prior work, DHS IG work, and evidence DHS provided between March and July 2007, including DHS officials' assertions when supported by documentation. On the basis of this analysis and our experts' judgment, we then assessed the extent to which DHS had achieved each of the expectations we identified. We made preliminary assessments for each performance expectation based solely on GAO and DHS IG work.
From March through July 2007, we received additional information from DHS, which we reviewed and used to inform our final assessments. In some cases the assessments remained the same as our preliminary ones, and in other cases they changed. When our review of our prior work, the DHS IG's work, and DHS's documentation indicated that DHS had satisfied most of the key elements of a performance expectation, we concluded that DHS had generally achieved it. When our reviews showed that DHS had not yet satisfied most of the key elements of a performance expectation, we concluded that DHS had generally not achieved it. More specifically, where our prior work or that of the DHS IG indicated DHS had not achieved a performance expectation and DHS did not provide documentation to prove otherwise, we concluded that DHS had generally not achieved it. For a small number of performance expectations we could not make an assessment because neither we nor the DHS IG had completed work and the information DHS provided did not enable us to clearly assess DHS's progress. We used these performance expectation assessments to determine DHS's overall progress in each mission and management area. After making an assessment for each performance expectation, we added up those rated as generally achieved. We divided this number by the total number of performance expectations for the mission or management area, excluding those performance expectations for which we could not make an assessment. If DHS generally achieved more than 75 percent of the identified performance expectations, we identified its overall progress as substantial. When the number achieved was more than 50 percent but 75 percent or less, we identified its overall progress as moderate. If DHS generally achieved more than 25 percent but 50 percent or less, we identified its overall progress as modest. For mission and management areas in which DHS generally achieved 25 percent or less of the performance expectations, we identified overall progress as limited. (A brief sketch illustrating this rating rule appears below.) We and the DHS IG have completed varying degrees of work for each mission and management area, and DHS's components and offices provided us with different amounts and types of information. As a result, our assessments of DHS's progress in each mission and management area reflect the information available for our review and analysis and are not equally comprehensive across all 14 mission and management areas. It is also important to note that while there are qualitative differences between the performance expectations, we did not weight some more heavily than others in our overall assessments of mission and management areas. We also recognize that these expectations are not time bound, and DHS will take actions to satisfy them over a sustained period of time. Our assessment of DHS's progress relative to each performance expectation refers to the progress made by the department since March 2003 and does not imply that DHS should have fully achieved each performance expectation at this point. In commenting on a draft of our report, DHS took issue with our methodology. First, DHS believed that we altered the criteria we used to judge the department's progress. We did not change our criteria; rather, we made a change in terminology to better convey the intent behind the performance expectations: that DHS achieve them rather than merely take actions that apply or relate to them. Second, DHS took issue with the binary standard approach we used to assess each performance expectation.
We acknowledge the limitations of this standard in our report but believe it was appropriate for our review given that the Administration has generally not established quantitative goals and measures for the expectations. Therefore, we could not assess where along a spectrum of progress DHS stood in achieving each performance expectation. Third, DHS was concerned about an apparent shift in criteria we applied after the department provided us additional information and documents. What DHS perceived as a change in criteria for certain performance expectations was really the process by which we disclosed our preliminary assessments; analyzed additional documents and information from DHS; and updated and, in many cases, revised our assessments based on the additional inputs. Fourth, DHS raised concerns about the consistency of our application of the methodology. Our core team of GAO analysts and managers reviewed all inputs from GAO staff to ensure consistent application of our methodology, criteria, and analytical process, and our quality control process included detailed reviews of the report's facts as well as assurances that we followed generally accepted government auditing standards. Finally, DHS pointed out that we treated all performance expectations as if they were of equal significance. In our report, we acknowledged that differences exist, but we did not weight the performance expectations because congressional, departmental, and others' views on the relative priority of each expectation may differ, and we did not believe it was appropriate to substitute our judgment for theirs. Overall, we appreciate DHS's concerns and recognize that in such a broad-based endeavor, some level of disagreement is inevitable, especially at any given point in time. However, we have been as transparent as possible regarding our purpose, methodology, and professional judgments and believe that our methodology provides a sound basis for the progress report. Going forward, we will work with DHS to further clarify the performance expectations we identified and our criteria for assessing DHS's progress in meeting those expectations. By engaging in a constructive dialogue with DHS, we hope to establish a mutually agreed-upon basis for any future evaluation of DHS's progress. Our report shows that since March 2003, DHS has attained some level of progress in implementing the performance expectations in all of its major mission and management areas, but the rate of progress among these areas has varied. Overall, DHS has made more progress in its mission areas than in its management areas, reflecting an understandable focus on implementing efforts to secure the homeland. As DHS continues to mature as an organization, we believe it will be able to put more focus—and achieve more expectations—in the management areas. Within its mission areas, DHS has made more progress in developing strategies, plans, and programs than in implementing them. For example, in the area of border security we found that DHS has developed a multiyear strategy and initiative for identifying illegal border crossings between ports of entry. However, DHS is in the early stages of implementing this strategy, and we and the DHS IG identified problems with implementation of past programs with similar objectives. Likewise, in the area of emergency preparedness and response, DHS has developed the National Incident Management System. However, we have reported that much more work remains for DHS to effectively coordinate its implementation.
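As a brief illustration of the rating rule described in the methodology discussion above, the following sketch (our illustration, not GAO tooling) maps the share of performance expectations rated as generally achieved, excluding those that could not be assessed, to an overall progress category; the example figures are taken from the area assessments discussed below.

```python
# Minimal sketch (our illustration, not GAO tooling) of the rating rule
# described above: the share of "generally achieved" expectations, excluding
# those that could not be assessed, maps to an overall progress category.

def overall_progress(achieved: int, total: int, not_assessed: int = 0) -> str:
    """Map the share of achieved performance expectations to a category."""
    share = 100.0 * achieved / (total - not_assessed)
    if share > 75:
        return "substantial"
    if share > 50:
        return "moderate"
    if share > 25:
        return "modest"
    return "limited"

# Aviation security (17 of 24 achieved): about 71 percent
print(overall_progress(achieved=17, total=24))                   # "moderate"
# Emergency preparedness (5 of 24 achieved, 1 not assessed): ~22 percent
print(overall_progress(achieved=5, total=24, not_assessed=1))    # "limited"
```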
Below we provide more information on progress made by DHS in its mission and management areas. DHS’s border security mission includes detecting and preventing terrorists and terrorist weapons from entering the United States; facilitating the orderly and efficient flow of legitimate trade and travel; interdicting illegal drugs and other contraband; apprehending individuals who are attempting to enter the United States illegally; inspecting inbound and outbound people, vehicles, and cargo; and enforcing laws of the United States at the border. As shown in table 2, we identified 12 performance expectations for DHS in the area of border security and found that DHS has generally achieved 5 of them and has generally not achieved 7 others. DHS’s immigration enforcement mission includes apprehending, detaining, and removing criminal and illegal aliens; disrupting and dismantling organized smuggling of humans and contraband as well as human trafficking; investigating and prosecuting those who engage in benefit and document fraud; blocking and removing employers’ access to undocumented workers; and enforcing compliance with programs to monitor visitors. As shown in table 3, we identified 16 performance expectations for DHS in the area of immigration enforcement and found that DHS has generally achieved 8 of them and has generally not achieved 4 others. For 4 performance expectations, we could not make an assessment. DHS’s immigration services mission includes administering immigration benefits and working to reduce immigration benefit fraud. As shown in table 4, we identified 14 performance expectations for DHS in the area of immigration services and found that DHS has generally achieved 5 of them and has generally not achieved 9 others. DHS’s aviation security mission includes strengthening airport security; providing and training a screening workforce; prescreening passengers against terrorist watch lists; and screening passengers, baggage, and cargo. As shown in table 5, we identified 24 performance expectations for DHS in the area of aviation security and found that DHS has generally achieved 17 of them and has generally not achieved 7 others. DHS’s surface transportation security mission includes establishing security standards and conducting assessments and inspections of surface transportation modes, which include passenger and freight rail; mass transit; highways, including commercial vehicles; and pipelines. As shown in table 6, we identified 5 performance expectations for DHS in the area of surface transportation security and found that DHS has generally achieved 3 of them and has generally not achieved 2. DHS’s maritime security responsibilities include port and vessel security, maritime intelligence, and maritime supply chain security. As shown in table 7, we identified 23 performance expectations for DHS in the area of maritime security and found that DHS has generally achieved 17 of them and has generally not achieved 4 others. For 2 performance expectations, we could not make an assessment. DHS’s emergency preparedness and response mission includes preparing to minimize the damage and recover from terrorist attacks and disasters; helping to plan, equip, train, and practice needed skills of first responders; and consolidating federal response plans and activities to build a national, coordinated system for incident management. 
As shown in table 8, we identified 24 performance expectations for DHS in the area of emergency preparedness and response and found that DHS has generally achieved 5 of them and has generally not achieved 18 others. For 1 performance expectation, we could not make an assessment. DHS’s critical infrastructure and key resources protection activities include developing and coordinating implementation of a comprehensive national plan for critical infrastructure protection, developing partnerships with stakeholders and information sharing and warning capabilities, and identifying and reducing threats and vulnerabilities. As shown in table 9, we identified 7 performance expectations for DHS in the area of critical infrastructure and key resources protection and found that DHS has generally achieved 4 of them and has generally not achieved 3 others. DHS’s science and technology efforts include coordinating the federal government’s civilian efforts to identify and develop countermeasures to chemical, biological, radiological, nuclear, and other emerging terrorist threats. As shown in table 10, we identified 6 performance expectations for DHS in the area of science and technology and found that DHS has generally achieved 1 of them and has generally not achieved 5 others. DHS’s acquisition management efforts include managing the use of contracts to acquire goods and services needed to fulfill or support the agency’s missions, such as information systems, new technologies, aircraft, ships, and professional services. As shown in table 11, we identified 3 performance expectations for DHS in the area of acquisition management and found that DHS has generally achieved 1 of them and has generally not achieved 2 others. DHS’s financial management efforts include consolidating or integrating component agencies’ financial management systems. As shown in table 12, we identified 7 performance expectations for DHS in the area of financial management and found that DHS has generally achieved 2 of them and has generally not achieved 5 others. DHS’s key human capital management areas include pay, performance management, classification, labor relations, adverse actions, employee appeals, and diversity management. As shown in table 13, we identified 8 performance expectations for DHS in the area of human capital management and found that DHS has generally achieved 2 of them and has generally not achieved 6 others. DHS’s information technology management efforts include developing and using an enterprise architecture, or corporate blueprint, as an authoritative frame of reference to guide and constrain system investments; defining and following a corporate process for informed decision making by senior leadership about competing information technology investment options; applying system and software development and acquisition discipline and rigor when defining, designing, developing, testing, deploying, and maintaining systems; establishing a comprehensive, departmentwide information security program to protect information and systems; having sufficient people with the right knowledge, skills, and abilities to execute each of these areas now and in the future; and centralizing leadership for extending these disciplines throughout the organization with an empowered Chief Information Officer. As shown in table 14, we identified 13 performance expectations for DHS in the area of information technology management and found that DHS has generally achieved 2 of them and has generally not achieved 8 others. 
For 3 performance expectations, we could not make an assessment. DHS's responsibilities for real property management are specified in Executive Order 13327, "Federal Real Property Asset Management," and include establishment of a Senior Real Property Officer, development of an asset inventory, and development and implementation of an asset management plan and performance measures. As shown in table 15, we identified 9 performance expectations for DHS in the area of real property management and found that DHS has generally achieved 6 of them and has generally not achieved 3 others. Our report contains detailed information on DHS's progress in achieving each of the performance expectations, including a detailed summary of our work, the DHS IG's work, and DHS documentation and officials' statements. We also provide our basis for each assessment. In commenting on a draft of our report, DHS disagreed with our assessments for 42 of the 171 performance expectations noted above. In our report, we provide detailed responses to DHS's comments on the 42 performance expectations. We look forward to discussing our assessments in all the mission and management areas in more detail with the committee and subcommittees to help inform their ongoing oversight efforts. Our work has identified cross-cutting issues that have hindered DHS's progress in its mission and management areas. These issues include (1) transforming and integrating DHS's management functions; (2) establishing baseline performance goals and measures and engaging in effective strategic planning efforts; (3) applying and improving a risk management approach for implementing missions and making resource allocation decisions; (4) sharing information with key stakeholders; and (5) coordinating and partnering with federal, state, and local agencies and private sector entities. The creation of DHS is an enormous management challenge, and DHS faces a formidable task in its transformation efforts as it works to integrate over 170,000 federal employees from 22 component agencies. Each component agency brought differing missions, cultures, systems, and procedures that the new department had to efficiently and effectively integrate into a single, functioning unit. At the same time it weathers these growing pains, DHS must still fulfill its various homeland security and other missions. DHS has developed a strategic plan, is working to integrate some management functions, and has continued to form necessary partnerships to achieve mission success. Despite these efforts, we reported earlier this year that DHS's implementation and transformation remains high-risk because DHS has not yet developed a comprehensive management integration strategy and its management systems and functions—especially those related to acquisition, financial, human capital, and information management—are not yet fully integrated and wholly operational. Additionally, transparency plays an important role in helping to ensure efficient and effective transformation efforts. DHS has not made its management or operational decisions transparent enough so that Congress can be sure that it is effectively, efficiently, and economically using the billions of dollars in funding it receives annually. Moreover, we have encountered access issues in numerous engagements, and the lengths of delay have been both varied and significant and have affected our ability to do our work in a timely manner.
The Secretary of DHS and the Under Secretary for Management have stated their desire to work with us to resolve access issues and to provide greater transparency, but they have not yet proposed any change to DHS's policies or procedures for how DHS officials are to interact with GAO. A number of DHS's programs lack outcome goals and measures, a fact that may hinder the department's ability to effectively assess the results of program efforts or fully assess whether the department is using resources effectively and efficiently, especially given various agency priorities for resources. In particular, we have reported that some of DHS's components have not developed adequate outcome-based performance measures or comprehensive plans to monitor, assess, and independently evaluate the effectiveness of their plans and performance. For example, in August 2005 we reported that U.S. Immigration and Customs Enforcement lacked outcome goals and measures for its worksite enforcement program and recommended that the agency set specific time frames for developing these goals and measures. Further, we have reported that many of DHS's border-related performance goals and measures are not fully defined or adequately aligned with one another, and some performance targets are not realistic. We have also recognized that DHS faces some inherent difficulties in developing performance goals and measures to address its unique mission and programs, such as in developing measures for the effectiveness of its efforts to prevent and deter terrorist attacks. Within its sphere of responsibility, DHS cannot afford to protect everything against all possible threats. As a result, DHS must make choices about how to allocate its resources to most effectively manage risk. In April 2007, DHS established the new Office of Risk Management and Analysis to serve as the DHS Executive Agent for national-level risk management analysis standards and metrics; develop a standardized approach to risk; develop an approach to risk management to help DHS leverage and integrate risk expertise across components and external stakeholders; assess DHS risk performance to ensure programs are measurably reducing risk; and communicate DHS risk management in a manner that reinforces the risk-based approach. It is too early to tell what effect this office will have on strengthening departmentwide risk management activities. Several DHS component agencies have taken steps toward integrating risk-based approaches into their decision-making processes. For example, the Coast Guard has developed security plans for seaports, facilities, and vessels based on risk assessments. Other components have not always utilized such an approach. In addition, DHS has not performed comprehensive risk assessments in transportation, critical infrastructure, and the immigration and customs systems to guide resource allocation decisions. For example, DHS has not fully utilized a risk-based strategy to allocate resources among transportation sectors. Although the Transportation Security Administration (TSA) has developed tools and processes to assess risk within and across transportation modes, it has not fully implemented these efforts to drive resource allocation decisions. In 2005, we designated information sharing for homeland security as high-risk and continued that designation in 2007.
We recently reported that the nation still lacked an implemented set of governmentwide policies and processes for sharing terrorism-related information but had issued a strategy on how it will put in place the overall framework, policies, and architecture for sharing information with all critical partners—actions that we and others have recommended. DHS has taken some steps to implement its information-sharing responsibilities. For example, DHS implemented a network to share homeland security information. States and localities are also creating their own information "fusion" centers, some with DHS support. However, DHS did not fully adhere to key practices in coordinating efforts on its homeland security information network with state and local information-sharing initiatives and faces other information-sharing challenges, including developing productive information-sharing relationships among the federal government, state and local governments, and the private sector. To secure the nation, DHS must form effective and sustained partnerships among legacy component agencies and also with a range of other entities, including other federal agencies, state and local governments, the private and nonprofit sectors, and international partners, but it has faced difficulties in doing so. Thirty-three of the 43 initiatives in the National Strategy for Homeland Security are to be implemented by three or more federal agencies. In addition, the private sector is a key homeland security partner. For example, DHS must partner with individual companies and organizations to protect vital national infrastructure, such as the nation's water supply, transportation systems, and chemical facilities. In October 2006 we reported that all 17 critical infrastructure sectors had established their respective government councils, and nearly all sectors had initiated their voluntary private sector councils in response to the National Infrastructure Protection Plan. In addition, through its Customs-Trade Partnership Against Terrorism Program, U.S. Customs and Border Protection (CBP) has worked in partnership with private companies to review their supply chain security plans. However, DHS has faced some challenges in developing other effective partnerships and in clarifying the roles and responsibilities of various homeland security stakeholders. For example, federal and private sector stakeholders stated that TSA has not provided them with the information they would need to support TSA's efforts for the Secure Flight program. Further, lack of clarity regarding roles and responsibilities caused DHS difficulties in coordinating with its emergency preparedness and response partners in responding to Hurricanes Katrina and Rita. Given the leading role that DHS plays in securing the homeland, it is critical that the department's mission programs and management systems and functions operate as efficiently and effectively as possible. In the more than 4 years since its establishment, the department has taken important actions to secure the border and the transportation sector and to defend against, prepare for, and respond to threats and disasters. DHS has had to undertake these critical missions while also working to transform itself into a fully functioning cabinet department—a difficult undertaking for any organization and one that can take, at a minimum, 5 to 7 years to complete even under less daunting circumstances.
At the same time, a variety of factors, including Hurricanes Katrina and Rita, threats to and attacks on transportation systems in other countries, and new responsibilities and authorities provided by Congress, have forced the department to reassess its priorities and reallocate resources to address key domestic and international events and to respond to emerging issues and threats. As it moves forward, DHS will continue to face the challenges that have affected its operations thus far, including transforming into a high-performing, results-oriented agency; developing results-oriented goals and measures to effectively assess performance; developing and implementing a risk-based approach to guide resource decisions; and establishing effective frameworks and mechanisms for sharing information and coordinating with homeland security partners. DHS has undertaken efforts to address these challenges but will need to give continued attention to these efforts in order to efficiently and effectively identify and prioritize mission and management needs, implement efforts to address those needs, and allocate resources accordingly. Efforts to address these challenges are especially important given the threat environment and the long-term fiscal imbalance facing the nation. While this testimony contains no new recommendations, in past products GAO has made approximately 700 recommendations to DHS. DHS has implemented some of these recommendations and taken actions to implement others. However, we have reported that the department still has much to do to ensure that it conducts its missions efficiently and effectively while it simultaneously prepares to address future challenges that face the department and the nation. A well-managed, high-performing Department of Homeland Security is essential to meeting the significant homeland security challenges facing the nation. As DHS continues to evolve, implement its programs, and integrate its functions, we will continue to review its progress and performance and provide information to Congress and the public on its efforts. This concludes my prepared statement. I would be pleased to answer any questions you and the Committee members may have. For further information about this testimony, please contact Norman J. Rabkin, Managing Director, Homeland Security and Justice, at 202-512-8777 or [email protected]. Other key contributors to this statement were Jason Barnosky, Rebecca Gambler, Kathryn Godfrey, Christopher Keisling, Thomas Lombardi, Octavia Parks, and Sue Ramanathan. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security's (DHS) recent 4-year anniversary provides an opportunity to reflect on the progress DHS has made. The creation of DHS was one of the largest federal reorganizations in the last several decades, and GAO has reported that it was an enormous management challenge and that the size, complexity, and importance of the effort made the challenge especially daunting and critical to the nation's security. Our prior work on mergers and acquisitions has found that successful transformations of large organizations, even those faced with less strenuous reorganizations than DHS, can take at least 5 to 7 years to achieve. This testimony is based on our August 2007 report evaluating DHS's progress since March 2003. Specifically, it addresses DHS's progress across 14 mission and management areas and key themes that have affected DHS's implementation efforts. Since its establishment in March 2003, DHS has made varying levels of progress in implementing its mission and management areas. In general, DHS has made more progress in its mission areas than in its management areas. Within its mission areas, DHS has made progress in developing plans and programs but has faced challenges in its implementation efforts. Key underlying themes have affected DHS's implementation efforts. These include strategies to achieve agency transformation, strategic planning and results management, risk management, information sharing, and partnerships and coordination. For example, we have designated DHS's implementation and transformation as high-risk. While DHS has made progress in transforming its component agencies into a fully functioning department, it has not yet addressed key elements of the transformation process, such as developing a comprehensive transformation strategy. DHS also has not yet fully adopted and applied a risk management approach in implementing its mission and management functions. Some DHS component agencies have taken steps to do so, but this approach is not yet used departmentwide. In addition, DHS has taken steps to share information and coordinate with homeland security partners but has faced difficulties in these partnership efforts. Given DHS's leading role in securing the homeland, it is critical that the department's mission and management programs operate as efficiently and effectively as possible. DHS has taken important actions to secure the border and transportation sectors and to prepare for and respond to disasters. DHS has had to undertake these missions while also working to transform itself into a fully functioning cabinet department--a difficult task for any organization. As DHS moves forward, it will be important for the department to continue to develop more measurable goals to guide implementation efforts and to enable better accountability. It will also be important for DHS to continually reassess its mission and management goals, measures, and milestones to evaluate progress made, identify past and emerging obstacles, and examine alternatives to effectively address those obstacles.
In August 2003, the Coalition Provisional Authority (CPA) dissolved the military organizations of the former regime, including the Ministry of Defense. In March 2004, the CPA established a new Ministry of Defense. The MOD was ultimately to be responsible for the overall management, direction, and control of the Iraqi armed forces, which now include the Iraqi Army, Air Force, and Navy. Responsible for an estimated 200,000 civil servants and military personnel, the MOD is expected to conduct all functions needed to sustain the armed forces, including developing plans, programs, and budgets, and procuring needed goods. The CPA did not dissolve the Ministry of Interior. MOI’s role is to manage more than 300,000 staff in the Iraqi police services, the National Police, border enforcement, and other services. Managerial functions include setting qualifications and training for the forces, vetting all police and other employees, and conducting the budgeting and financing for MOI forces. The MOI directly controls the national police forces. However, the MOI exercises only limited administrative control over regular Iraqi police forces in the provinces, controlling issues such as recruiting standards and yearly budget allocations. Operational control of provincial police rests with the governor and the provincial council. MNF-I leads U.S. and coalition military efforts in Iraq. Under the command of MNF-I, MNSTC-I is responsible for leading coalition efforts to train and equip Iraqi security forces and to build MOI and MOD capabilities. MNSTC-I helps develop MOI and MOD capabilities through Ministry Transition Teams and the Joint Staff Transition Team, which have a total of about 215 coalition advisors assigned to work with Iraqi officials at the ministries. The Iraqi government and the coalition transition teams confront a challenging national environment as they develop Iraq’s security ministries. Corruption is reportedly widespread and poses a major challenge to building an effective government. A March 2007 DOD report states that the Prime Minister has committed to reforming the government, beginning with his cabinet and the ministries. This commitment recognizes the government’s failure to counter corruption and reduce sectarianism, which hampers the government’s ability to perform. In addition, capacity-building efforts are taking place amid ongoing violence and sectarian tension, posing a threat to Iraqi government employees. The 2007 increase in Iraq’s security budget is attributable to increases in planned expenditures and an appreciation of the Iraqi currency against the U.S. dollar. MOD and MOI spent the largest percentage of budgeted amounts on salaries but were less successful in spending funds on goods and services (e.g., food, uniforms, and fuel) and capital goods (e.g., weapons, ammunition, and vehicles). Given Iraq’s continued difficulties in spending funds for these items, DOD has requested $5.8 billion in additional funds to help purchase these critical items and provide other assistance to Iraq’s security ministries. DOD’s March 2007 report to Congress stated that the 37-percent increase in Iraq’s 2007 security budget is evidence of Iraq’s growing self-sufficiency and commitment to security. However, our analysis of Iraq’s 2007 budget shows that this reported increase is attributable to both increases in planned expenditures and an appreciation of the Iraqi currency against the U.S. dollar (Iraq’s fiscal year begins on January 1 of each year). 
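The effect of the conversion rate can be illustrated with a short calculation. The sketch below is not GAO’s methodology; the exchange rates are hypothetical placeholders chosen only to approximate the roughly 14-percent appreciation and 15-percent dinar-denominated growth discussed next, so it reproduces the mechanism rather than the report’s exact figures.

```python
# A rough sketch of how the conversion rate drives the reported dollar growth
# of Iraq's security budget. Exchange rates are hypothetical placeholders
# (GAO's precise inputs are not reproduced here); the dinar-denominated
# budget grows 15 percent, as reported.

DINARS_PER_USD_2006 = 1475.0   # hypothetical pre-appreciation rate
DINARS_PER_USD_2007 = 1290.0   # hypothetical post-appreciation rate (~14% stronger dinar)

budget_2006_dinars = 5.4e9 * DINARS_PER_USD_2006   # 2006 budget of ~$5.4 billion
budget_2007_dinars = budget_2006_dinars * 1.15     # 15 percent more dinars planned for 2007

def growth_in_usd(old_dinars, new_dinars, old_rate, new_rate):
    """Percent growth when each year's budget is converted at the given rate."""
    old_usd = old_dinars / old_rate
    new_usd = new_dinars / new_rate
    return 100.0 * (new_usd - old_usd) / old_usd

# Constant exchange rate: isolates the real increase in planned spending.
print(growth_in_usd(budget_2006_dinars, budget_2007_dinars,
                    DINARS_PER_USD_2006, DINARS_PER_USD_2006))  # ~15.0

# Appreciated rate: the same dinar budget converts to more dollars, so the
# reported dollar growth is larger. (The report's 37 percent reflects GAO's
# actual rates and budget lines; these placeholder rates yield ~31.5.)
print(growth_in_usd(budget_2006_dinars, budget_2007_dinars,
                    DINARS_PER_USD_2006, DINARS_PER_USD_2007))  # ~31.5
```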
Iraq implemented a 14-percent exchange rate appreciation between November 1, 2006, and February 1, 2007, to reduce the rate of core (non-fuel) inflation. In 2006, inflation in Iraq averaged over 50 percent. Iraq’s official budget is presented and executed in Iraqi dinars, not U.S. dollars. The percentage changes we calculated using a constant 2006 exchange rate are the same as those in the official budget based on Iraqi dinars. For example, MOD’s 2007 budget shows a decline in the number of Iraqi dinars budgeted for goods and services compared with 2006. However, when converted to U.S. dollars at the new appreciated exchange rate, the budget shows an increase in planned expenditures. It is therefore important to know the source of changes in the budget. For imported products, the appreciated exchange rate (which means the Iraqi dinar exchanges for relatively more U.S. dollars than before) allows Iraq to buy relatively more imported products for the same number of dinars. However, for expenditures made in Iraq, especially salaries, the appreciated exchange rate may not best reflect changes in Iraq’s budget expenditures. Thus, we present both calculations. Table 1 shows how the projected growth rate of Iraq’s security budget varies with the foreign exchange rate used to convert Iraqi dinars into U.S. dollars. When using an appreciated exchange rate, Iraq’s security budget grows by 37 percent in 2007. The budget of MOD, which plays a key role in conducting counterinsurgency operations, grows by 20 percent. However, when using a constant exchange rate to facilitate a more direct comparison of the planned increases in budgeted dinars, Iraq’s security budget grows by 15 percent in 2007 to $6.2 billion, which represents 18 percent of Iraq’s total 2007 budget of $34.5 billion. Thus, the increase in Iraq’s budget in U.S. dollars is due to both actual increases in planned expenditures and the appreciation of the currency. Although MOD’s overall budget will grow in 2007, its budget for several critical items needed to wage counterinsurgency operations will decline. For example, the Ministry of Defense’s 2007 budget for capital goods—including weapons, ammunition, and vehicles—will decrease by 17 percent using a constant exchange rate or by 2 percent using the appreciated exchange rate. In contrast to MOD, MOI’s 2007 budget shows positive growth rates in all major categories. For example, the Ministry of Interior’s 2007 budget for capital goods—including weapons, ammunition, and vehicles—will increase regardless of which exchange rate is used, by 16 percent using a constant exchange rate or by 38 percent using the appreciated exchange rate. The MOI is receiving increased budget support for its law enforcement responsibilities. However, the additional budget support will be provided to a ministry prone to militia infiltration. For example, in November 2006, the Director of the Defense Intelligence Agency stated that the Ministry of Interior and the police were heavily infiltrated by militia members of the Badr Organization and the Mahdi Army. In addition, the MOI’s national police—a paramilitary force of about 24,000 personnel—had conducted counterinsurgency operations in the past, but the Iraqi government decided in late 2006 to transform it into a civil society force due to frequent allegations of abuse and other illegal activities. 
The total number of staff reportedly employed by the Ministries of Defense and Interior will grow from about 538,000 in 2006 to 608,000 employees in 2007 (see table 2). However, these numbers should be interpreted with some caution. As we reported in January 2007, ghost employees comprise about 20 to 30 percent of Ministry of Interior staff, according to U.S. officials. Also, as of February 2007, the Iraqi government had yet to complete a census of all government employees, as required by the International Monetary Fund. To help assess whether Iraq’s security ministries will be able to spend their 2007 budgets, we analyzed the security ministries’ 2006 budgets and spending. Figure 1 shows the total amounts budgeted and expended by funding category. The MOD had both a larger budget than the MOI ($3.4 billion compared with $1.9 billion) and a larger portion of its budget targeted at goods and services and capital goods. For the MOI, salaries dominated the budget in 2006. Figure 1 also shows that the ministries have had difficulty expending some categories of their budgets. For example, MOD and MOI spent about 76 and 82 percent, respectively, of the $912 million and $1,471 million budgeted for salaries as of November 2006. In contrast, MOD and MOI spent 1 and 15 percent, respectively, of the $864 million and $233 million budgeted for capital goods (e.g., weapons, ammunition, and vehicles). The inability or unwillingness of Iraq’s security ministries to spend budgeted funds on critical items raises questions about the priorities and capabilities of Iraq’s government to fund its security requirements. As the U.S. government transfers more of its security responsibilities to the Iraqi government, it is important that the Iraqi government demonstrate that it can execute its approved budgets more effectively. 
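The execution rates behind figure 1 are simple ratios of expended to budgeted funds. The following minimal sketch uses the salary and capital-goods figures cited above; the expended amounts are back-calculated from the reported percentages and are therefore approximate.

```python
# Minimal sketch of the budget-execution rates behind figure 1 (amounts in
# millions of U.S. dollars, as of November 2006). Expended amounts are
# derived from the percentages cited above, so they are approximate.

budgeted = {("MOD", "salaries"): 912, ("MOI", "salaries"): 1471,
            ("MOD", "capital goods"): 864, ("MOI", "capital goods"): 233}
expended = {("MOD", "salaries"): 693, ("MOI", "salaries"): 1206,      # ~76% and ~82%
            ("MOD", "capital goods"): 9, ("MOI", "capital goods"): 35}  # ~1% and ~15%

for ministry, category in budgeted:
    rate = 100.0 * expended[(ministry, category)] / budgeted[(ministry, category)]
    print(f"{ministry} {category}: {rate:.0f} percent of budget executed")
```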
While Iraq’s security ministries have encountered difficulties in spending budgets for weapons, equipment, vehicles, food, fuel, and other items needed to mount counterinsurgency campaigns, the U.S. government anticipates providing additional support to these two ministries at least through the end of fiscal year 2008. DOD has asked for an additional $5.8 billion to develop the Iraqi security forces in its fiscal year 2007 supplemental request and the fiscal year 2008 Global War on Terror budget request (see table 3). Of this amount, about $3.25 billion (about 56 percent) would purchase equipment and transportation for the Iraqi security forces. DOD is also requesting about $682 million for training and operations, including efforts to develop senior management capabilities within the Ministries of Defense and Interior and to provide increased training for MOD intelligence operations, communications operations, and resource management. Iraq’s security ministries face numerous challenges if they are to more effectively direct and sustain Iraq’s security forces. DOD reports and our February 2007 fieldwork in Iraq found that the security ministries face two key challenges: (1) managing a growing workforce while developing effective personnel systems, and (2) improving the limited ability of MOD and MOI to manage their logistics operations. Coalition advisors are working with the security ministries to improve their planning, budgeting, personnel, and logistical systems. In addition, a 2006 Foreign Military Sales (FMS) agreement with Iraq will enable the security ministries to bypass their ineffective procurement systems and purchase needed equipment and supplies directly from the United States, according to U.S. officials. Planned changes in the size and composition of the security forces will complicate MOD and MOI efforts to effectively manage their personnel. The security ministries plan to add 60,000 to 70,000 staff to their rolls in 2007. In addition, in December 2006, the Iraqi Prime Minister directed the MOI to assume responsibility for paying most of the Facilities Protection Service (FPS), a 150,000-strong ministry guard force currently working for 27 ministries and 8 independent directorates. According to DOD reporting, the FPS lacks a coherent force structure and standardized equipment, and its personnel are often untrained, unreliable, and sometimes responsible for violent crimes. According to a senior coalition advisor, FPS personnel will be paid by the MOI but remain under the day-to-day supervision of the ministries, agencies, or provincial governments to which they are assigned. Although the ministries are significantly expanding their workforces, DOD reports that MOD and MOI cannot accurately account for the personnel they currently have on their payrolls. DOD notes that about 65 percent of authorized personnel in fielded units are present for duty at any time, but this figure is based on unreliable data. Similarly, MOI has no reliable data to indicate how many personnel are still serving with the ministry, so it is unknown how many of the more than 300,000 employees on the MOI payroll are present for duty. MNSTC-I estimates that the share of employees present for duty is less than 70 percent. DOD reports that payments for pensions, medical care, and death benefits are currently included in security ministry payrolls. Thus, the security ministries’ personnel figures may include retired, wounded, or deceased personnel. DOD also found that corruption inflates both security ministries’ personnel figures, as corrupt leaders often collect pay and other compensation designated for nonexistent soldiers and policemen on the unit rolls. In addition, a February 2007 MNF-I assessment stated that development of MOD’s personnel management system was hindered in 2006 by poor leadership, low morale, and reliance on coalition counterparts. MNSTC-I commented that higher-level leadership within MOD did not allow knowledgeable managers to implement personnel reforms and overruled their decisions. U.S. government documents and coalition officials also cited problems at MOI with militia infiltration that complicated reform efforts. Our recent work in Iraq also found that the ministries’ lack of skilled or experienced staff presents a challenge. Some coalition officials stated that the lack of trained staff hindered efforts to improve MOD budget formulation; only two or three members of the 30-person budget office were capable of producing budget spreadsheets on a computer. These advisors also stated that most ministry staff lack basic computer and information technology skills, are unwilling to make decisions, and often refer problems to higher levels. DOD’s March 2007 report stated that the most significant shortcoming in the capabilities of MOD and MOI forces is in planning and executing logistics and sustainment requirements. 
The report noted that the factors underlying this deficiency include inadequate levels of sustainment stocks, such as vehicle fuel pumps and filters. Also identified as a challenge was the limited capacity of MOD and MOI to plan for, acquire, distribute, and maintain needed items. In addition, the security ministries have difficulties in accounting for their equipment. For example, MOI’s immature equipment accountability system cannot track which police weapons and vehicles remain in service or how much equipment MOI has purchased for staff that had been authorized by the provincial governors. Our fieldwork found that MOD and MOI units maintain equipment accountability through the use of hand receipts and manual ledgers. As GAO previously testified, both MOD and MOI have significant logistics management issues to overcome before they are capable of independently sustaining their security forces. Our recent fieldwork also found that developing the security ministries’ logistics capacity remains a major challenge, particularly at MOI. U.S. officials noted that MOI cannot sustain the wide variety of equipment donated by the coalition. For example, GAO previously testified that the MOD had difficulty maintaining 21 different types of light trucks. Similarly, MOI has been unable to maintain the 17 makes of vehicles it has received for use by its personnel. According to coalition officials, the cost and difficulty of obtaining spare parts for these diverse vehicle fleets result in some vehicles being used for spare parts and others not being repaired. Moreover, the MOI has not approved the draft logistics concept proposed by the coalition, in part because it has yet to gain the agreement of the provinces and is still negotiating with them on the national warehouse system. The coalition devotes significant resources to developing capacity at Iraq’s security ministries. As of March 2007, the U.S.-led coalition had assigned 215 military, civilian, and contracting personnel to advise Iraqi staff at the MOD and MOI on establishing plans and policies, budgeting, and managing personnel and logistics. In comparison, the Ministries of Oil and Electricity had 10 and 18 advisors, respectively. The 111 coalition advisors at the MOD are embedded with staff from a number of offices, including Plans and Policies and the Iraqi Joint Staff. According to the advisors, they work with their Iraqi counterparts to improve their planning processes and capabilities. For example, a senior advisor to the joint staff helped MOD develop its counterinsurgency strategy. He provided MOD staff with a planning template, reviewed their work, and suggested they add details such as the source of the threat, the risk level, and the forces required to counter the threats. He was uncertain whether his Iraqi counterparts had taken ownership of the process. Our recent fieldwork at the MOI found that 104 coalition advisors are working with Iraqi officials. Among other efforts, they are helping MOI develop processes for vetting Iraqi security forces, including collecting and storing biometric data; establishing an identification card system; and establishing a personnel management database that will house inventory, payroll, human resource, financial, and budget data. However, U.S. advisors stated that MOI staff have resisted efforts to computerize their manual processes because of the increased transparency it would provide. 
Finally, MNSTC-I personnel are also assisting the MOD and the MOI in purchasing needed equipment from the United States through the Foreign Military Sales (FMS) program. Under FMS, the U.S. government agrees to sell defense articles or services (including training) to eligible foreign countries or international organizations. The articles or services usually come from DOD stocks or through purchase under DOD-managed contracts. In December 2006, the government of Iraq transferred $1.9 billion into an Iraqi account for FMS purchases. According to a November 2006 DOD report, Iraq’s use of the FMS program is intended to provide a way for both MOD and MOI to spend their money on complete procurement packages without risking the loss of funds to the corruption and mismanagement that hamper Iraqi government contracting. In the latter part of 2006, DOD notified Congress of a number of possible foreign military sales to Iraq, including up to $900 million for intelligence, surveillance, and reconnaissance aircraft, as well as related support equipment, training, spare and repair parts, publications and technical data, and other elements of logistics support; up to $750 million for troop transport helicopters, small arms, ammunition, vehicles, and associated logistics support; and up to about $460 million for trucks, vehicles (including light armored vehicles), and trailers, as well as associated equipment and services. According to a March 2007 DOD report, MOD also plans to fund a $160 million maintenance contract through the FMS program from April 2007 through March 2008. U.S. and coalition officials stated that the FMS agreement would allow both MOD and MOI to bypass their ineffective procurement systems and procure equipment and supplies more quickly and efficiently. However, in the long term, it is unclear whether Iraq’s use of the FMS program will contribute to the ministries’ capacity to improve their inefficient procurement and contracting systems. DOD expects that the Iraqi government will be capable of sustaining its security forces by 2008. This expectation may not be met given the security ministries’ past problems in spending their capital budgets and current personnel and logistical weaknesses. In addition, as we previously reported, the United States and the Iraqi security ministries are supporting Iraqi forces that have divided loyalties, varying capabilities, high absenteeism, and questionable dependability. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members may have at this time. The Multinational Security Transition Command-Iraq provided comments on a draft of this statement. The head of the Command stated, “The GAO testimony fails to give the government of Iraq and the two security ministries any credit for recognizing their financial vulnerabilities and for progressing far beyond the opaque and irresponsible business practices of previous interim governments. The 2007 GOI budget was negotiated responsibly and openly. Though the security budget may represent ‘only’ a 15-percent increase from the previous year in purchasing power, it is nearly a 20-percent share of the national budget. The government of Iraq has clearly recognized its inability to responsibly make procurements on behalf of its military and police forces and so has entered into a $1.7 billion Foreign Military Sales Agreement with 2006 funding. 
We anticipate another $1.55 billion investment into United States FMS this calendar year.” We added information in this statement to reflect MNSTC-I’s comments. However, both DOD and GAO agree that it will take considerable time and resources to address the challenges the U.S. and Iraqi governments face in developing fully functioning security ministries and capable Iraqi forces. For questions regarding this testimony, please call Joseph A. Christoff at (202) 512-8979 or [email protected]. Other key contributors to this statement were Nanette Barton, Daniel C. Cain, Lynn Cothern, Mattias Fenton, Elisabeth Helmer, B. Patrick Hickey, Bruce Kutnick, Stephen M. Lord, Judy McCloskey, Tetsuo Miyabara, Mary Moutsos, and Timothy Wedding.
In November 2005, the President issued the National Strategy for Victory in Iraq. According to the strategy, victory will be achieved when Iraq is peaceful, united, stable, secure, well integrated into the international community, and a full partner in the global war on terror. To help Iraq achieve this, the United States is, among other efforts, helping strengthen the capabilities of the Iraqi Ministries of Defense and Interior (police forces) so they can assume greater responsibility for the country's security. The United States has provided about $15.4 billion to develop Iraqi security forces and institutions. In this testimony, GAO discusses preliminary observations on (1) U.S. and Iraqi funding to develop and sustain the Iraqi security forces, and (2) key challenges the United States and Iraq face in improving the security ministries' operations and management. This statement is based on prior GAO reports; recent fieldwork in Iraq; and Department of Defense, U.S. Treasury, and U.S. Embassy budget documents. GAO added information to this statement in response to comments from the Multinational Security Transition Command-Iraq. We completed the work in accordance with generally accepted government auditing standards. In March 2007, DOD reported that Iraq will increase its 2007 security budget from $5.4 billion to $7.3 billion (a 37-percent increase). DOD states this increase provides evidence of the country's growing self-sufficiency and commitment to security. However, our analysis shows that some of this increase is due to the appreciation of the Iraqi dinar against the dollar. Using a constant exchange rate, Iraq's 2007 security budget grows by 15 percent. Also, Iraq faced problems spending its 2006 security budget. As of November 2006, the Iraq Ministry of Defense had spent only about 1 percent of its capital goods budget for weapons, ammunition, and vehicles. DOD has requested $5.8 billion in additional U.S. funds to help purchase these items for Iraq and provide assistance to its security ministries. The United States and Iraq face personnel and logistical challenges in developing ministries that can sustain Iraq's growing security forces. For example, the ministries have inadequate systems to account for personnel, and their inexperienced staff have limited budgeting and technology skills. Also, both security ministries have difficulties acquiring, distributing, and maintaining weapons, vehicles, and equipment. The U.S.-led coalition has provided significant resources to develop Iraq's security forces and has 215 military and civilian advisors at the ministries. The United States signed a foreign military sales agreement with Iraq that, according to U.S. officials, allows Iraq to bypass its ineffective procurement systems to purchase equipment directly from the United States. Iraq has deposited $1.9 billion into its account for foreign military sales. However, it is unclear whether this program will help improve the ministries' procurement and contracting capacity.
The Homeland Security Act of 2002 established USCIS within DHS. USCIS is responsible for several functions transferred on March 1, 2003, from the former Immigration Services Division of the Immigration and Naturalization Service (INS) under the Department of Justice. These functions include providing services or benefits to facilitate entry, residence, employment, and naturalization of legal immigrants; processing applications for U.S. citizenship/naturalization; and rendering decisions about immigration-related matters. The USCIS Information & Customer Service Division is responsible for operating the National Customer Service Center (NCSC), which was established in 1997 to provide nationwide assistance by telephone to customers calling about immigration services and benefits. When a customer calls the NCSC toll-free number (1-800-375-5283), the call is received by the interactive voice response system. The system features automated, self-service options 24 hours a day, 7 days a week. If the system cannot address a customer’s concerns or needs, or if a customer requests live assistance, then the call is generally routed to one of the four NCSC contract call centers, known as Tier 1. These four centers are operated by the contractor, Pearson. If a question posed by a customer is particularly complex or otherwise cannot be answered at the Tier 1 level, the call is transferred to one of the two USCIS-operated call centers, known as Tier 2. Figure 1 shows the organization of NCSC, including the call centers. In fiscal year 2004, almost half of the 21.1 million calls made to NCSC were handled and completed by the interactive voice response system, and the rest were generally routed to Tier 1. Customer service representatives (CSR) at Tier 1 respond to inquiries in English or Spanish. The CSRs focus primarily on providing administrative information to customers by using a series of scripts provided by USCIS. For example, if a customer needs what USCIS considers basic information, such as USCIS local offices’ hours of operations, eligibility requirements, and procedures to follow, such questions are to be answered by CSRs at Tier 1 call centers using specific scripts. In addition, CSRs are to refer customers to USCIS service centers and local offices for such things as changes of address and appointment scheduling at USCIS application support centers. (Some of these tasks may alternatively be performed by customers through the USCIS Web site—www.uscis.gov.) As of April 2005, the four Tier 1 call centers employed over 450 CSRs. Figure 2 shows CSRs processing calls at a Tier 1 call center. At the two USCIS-operated Tier 2 call centers, calls are handled by immigration information officers (IIO)—immigration specialists with in-depth knowledge of immigration laws, non-immigrant visas, naturalization, asylum and refugee status, and other related policies and procedures. As of April 2005, the Tier 2 call centers operated by USCIS had 111 IIOs. About 5 percent, or about 590,000, of the calls going to Tier 1 CSRs were rerouted to Tier 2 IIOs in fiscal year 2004. Figure 3 shows the call volume handled by the interactive voice response system, Tier 1 call centers, and Tier 2 call centers during fiscal year 2004. 
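The tiered flow just described can be summarized in a few lines. The sketch below is illustrative only; the Call type and routing function are assumptions for exposition, not USCIS systems.

```python
# Minimal sketch of the NCSC call flow described above: the interactive
# voice response (IVR) system resolves what it can, scripted Tier 1 CSRs
# handle routine inquiries, and only particularly complex questions are
# transferred to Tier 2 IIOs. All names here are illustrative.

from dataclasses import dataclass

@dataclass
class Call:
    language: str             # CSRs respond in English or Spanish
    self_serviceable: bool    # the automated IVR can fully answer it
    needs_specialist: bool    # requires in-depth immigration expertise

def final_destination(call):
    if call.self_serviceable:
        return "IVR"           # ~half of the 21.1 million FY2004 calls ended here
    if call.needs_specialist:
        return "Tier 2 (IIO)"  # ~5 percent of Tier 1 calls were rerouted here in FY2004
    return "Tier 1 (CSR)"      # scripted administrative answers

print(final_destination(Call("Spanish", self_serviceable=False, needs_specialist=False)))
print(final_destination(Call("English", self_serviceable=False, needs_specialist=True)))
```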
In January 2002, USCIS awarded a performance-based service contract for the management of four Tier 1 call centers. In making this award, USCIS obtained acquisition services from the Department of Veterans Affairs (VA), and the contracting officer who signed and was responsible for administering the contract was a VA employee working on behalf of USCIS. The contracting officer’s technical representative (COTR), a USCIS employee, was also responsible for administering the contract. In commenting on a draft of this report, DHS noted that by agreement of both the VA and USCIS, on April 20, 2005, USCIS assumed responsibility for administering the contract. The contract was awarded for a base year, beginning on June 1, 2002, plus 4 option years (1-year renewable extensions of the contract, three of which have been exercised as of June 2005). Through calendar year 2004, USCIS paid $64.6 million to the contractor for the Tier 1 call center operations. According to the Office of Federal Procurement Policy (OFPP) in the Office of Management and Budget, performance-based service contracts are designed to focus on results. Their purpose is to ensure that contractors are given the freedom to determine how to meet the government’s performance objectives, that appropriate quality levels are achieved, and that payment is made only for services that meet these levels. This type of contract is to emphasize standards for customer service and measurement of performance and may offer financial incentives, both positive and negative, to encourage quality performance. According to OFPP, call centers are suited to this type of contract because, among other things, they emphasize achieving results by meeting customer service standards. According to OFPP, with performance-based service contracts, incentive payments made to an independent contractor are to be contingent on the contractor’s ability to meet the government’s performance standards; the contract does not specify how those standards are to be met. Thus, the contractor retains discretion in determining how to meet performance standards specified in the contract, for example, how many CSRs to hire to ensure calls are answered within a contractually specified time. Other elements suggested for using a performance-based service contract include (1) identifying the agency’s needs and addressing those needs with performance requirements that describe required service results; (2) establishing performance standards that describe the required performance level; and (3) establishing a quality assurance plan for assessing contractor performance in order to ensure that the contractor has performed in accordance with the standards. USCIS used a multifaceted approach to monitor and evaluate the quality of information and service provided by CSRs to customers calling contractor-operated Tier 1 call centers. This approach used seven performance measures. USCIS obtained performance data from the contractor’s monitoring of selected telephone calls; customer satisfaction surveys; and a telecommunications vendor (telephone company). In addition, USCIS used an independent consulting firm to monitor CSRs’ telephone calls and conduct a “mystery shopper” program assessing CSRs’ responses to customers. To monitor and evaluate the performance of the four contractor-operated Tier 1 call centers, USCIS planned to use seven performance measures. 
These measures were to evaluate the quality of customers’ telephone interactions with CSRs; the accuracy of information provided to callers over the telephone; the accuracy of callers’ information recorded by CSRs; callers’ levels of satisfaction; how quickly CSRs handled calls (two measures); and the number of calls abandoned by customers put on hold. According to USCIS officials, USCIS established the performance measures based on a review of industry standards for both government and private-sector call center operations. The measures were described in a section of the contract called the Performance Requirements Summary (PRS). Under the PRS, these performance measures comprised one of three components upon which the contractor’s performance score was based. The other two components were the standard, or goal, set for each measure, identifying the performance levels the contractor was expected to meet (e.g., callers will wait an average of 30 to 36 seconds before their calls are answered), and the performance calculation that USCIS would use to analyze performance data (e.g., total delay of all calls divided by the total number of calls). The PRS listing of the seven performance measures included a “sample calculation” for each of the measures and stated that “actual calculations to be determined during Contract negotiations.” USCIS officials said they intended to negotiate and finalize the calculations after a 4- to 6-month phase-in period, and the contract was awarded with this provisional language. Three of the performance measures assess the quality of CSRs’ responses: call quality monitoring, accuracy of information provided, and accuracy of capturing information. Data on these measures are to be collected by the contractor’s quality assurance staff, who are to randomly monitor two calls per day for each CSR. (CSRs are not to know when they are being monitored.) The data collected are to be reported to USCIS on a monthly basis. Details on these three measures follow: Call quality monitoring. Calls are to be monitored by the contractor’s quality assurance staff to assess the CSRs’ “soft skills,” that is, their ability to interact with customers, establish customer rapport, maintain composure during a call, speak with clarity and professionalism, and other factors. Call quality monitoring data are to be captured on a standardized form. CSR responses for each of nine different “soft skills” are scored as percentages, with scores for the most highly valued skills, such as “active listening”—that is, whether the CSR was deemed to be attentive when listening to the customers—given more weight than the scores for other skills. The nine scores (i.e., percentages) are combined for a total “soft skills” score, with 100 percent as the highest possible score. The performance standard stated in the PRS for this measure is that all calls monitored achieve an average score of 90 percent to 95 percent after the nine “soft skills” scores (i.e., percentages) for each call are combined. (See app. II for additional details on the criteria and methodology used to determine soft skills scores.) Accuracy of information provided. Calls are to be monitored by the contractor’s quality assurance staff to determine, among other things, whether CSRs provided accurate and complete responses. 
Using a standardized form, the staff score CSRs on five different efforts, such as whether the CSR used software tools appropriately and whether, when the callers were asked directly, they indicated that their needs had been satisfied. The five efforts are scored as percentages, with more weight given to the scores for certain efforts, such as “provides complete response.” The scores are then combined for a total “accuracy of information provided” score, with 100 percent as the highest possible score. The performance standard stated in the PRS for this measure is that all calls monitored achieve an average score of 95 percent to 97 percent after the five accuracy scores (i.e., percentages) for each call are combined. (See app. II for additional details on the criteria and methodology used to determine accuracy of information provided.) Accuracy of capturing information. Calls are to be monitored by the contractor’s quality assurance staff to determine, among other things, whether CSRs accurately record and verify the callers’ information. The staff assess this measure by scoring four efforts, including whether a referral to a USCIS service center or local office was completed appropriately and correctly. The four efforts are scored as percentages, with more weight given to the scores for certain efforts, such as “verifies caller’s information.” The scores are then combined for a total “accuracy of capturing information” score, with 100 percent as the highest possible score. The performance standard stated in the PRS for this measure is that all calls monitored achieve an average score of 95 percent to 97 percent after the four accuracy scores (i.e., percentages) for each call are combined. (See app. II for additional details on the criteria and methodology used to determine accuracy of capturing information.) A fourth performance measure of call quality—customer satisfaction—was assessed by an independent consulting firm. Customer satisfaction surveys were conducted on a monthly basis to determine if customers were satisfied with the service that CSRs provided. At least 375 callers are to be randomly selected to be interviewed each month from a population of 10,000 randomly identified callers who called within the 30 days prior to the survey. To measure satisfaction with CSRs, customer responses to four interview questions about CSRs are compiled, and the overall percentage of respondents indicating satisfaction is calculated. The performance standard stated in the PRS for this measure is 80 percent to 85 percent of the customers surveyed indicating overall satisfaction with the CSRs’ service. (See app. III for additional details on the criteria and methodology used to determine customer satisfaction.) 
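Each of these call quality measures reduces to a weighted composite of per-item percentages that is averaged and compared against a PRS band. The sketch below illustrates the mechanism using the soft skills measure; the skill names and weights are illustrative assumptions, since the report describes the weighting only qualitatively, and treating the low end of the 90-to-95-percent band as the minimum acceptable average is likewise an interpretation.

```python
# Minimal sketch of the weighted composite scoring used for the call
# quality measures. The nine skill names and the weights are illustrative;
# the report says only that skills such as "active listening" are weighted
# more heavily, not what the actual weights are.

SOFT_SKILL_WEIGHTS = {  # weights sum to 1.0
    "active listening": 0.20, "clarity": 0.15, "professionalism": 0.15,
    "rapport": 0.10, "composure": 0.10, "courtesy": 0.10,
    "call control": 0.08, "tone": 0.07, "closing": 0.05,
}

def composite_score(per_skill_pct):
    """Combine the nine per-skill percentages into one 0-100 score."""
    return sum(SOFT_SKILL_WEIGHTS[s] * per_skill_pct[s] for s in SOFT_SKILL_WEIGHTS)

def meets_standard(monthly_call_scores, minimum=90.0):
    """Assume the low end of the PRS 90-95 percent band is the minimum
    acceptable monthly average (an interpretation, not contract language)."""
    return sum(monthly_call_scores) / len(monthly_call_scores) >= minimum

one_call = {skill: 95.0 for skill in SOFT_SKILL_WEIGHTS}
one_call["active listening"] = 80.0            # weakest, most heavily weighted skill
print(composite_score(one_call))               # 92.0
print(meets_standard([92.0, 94.5, 88.0]))      # monthly average 91.5 -> True
```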
Three other performance measures involve the collection of statistical data by the telecommunications vendor for determining how quickly calls are answered. The performance measures and standards in the contract for assessing how quickly CSRs answered customers’ calls are as follows: Service level. The telecommunications vendor under contract with USCIS is to collect information on the number of calls answered by CSRs in 20 seconds or less, that is, the number of callers who spoke to a CSR within 20 seconds after getting through the interactive voice response system. The performance standard stated in the PRS for this measure involves two factors: half-hour increments and the length of time it took CSRs to answer calls. The standard is that for 80 percent to 85 percent of the half-hour increments measured, 80 percent of the calls are to be answered in 20 seconds or less. Average speed of answer. The telecommunications vendor under contract with USCIS is to collect information on the length of time it takes for CSRs to answer customers’ calls after they are routed to Tier 1 by the interactive voice response system; that is, how long callers are on hold before a CSR answers their call. The performance standard stated in the PRS for this measure is that, for all calls routed to Tier 1, callers will wait an average of 30 seconds to 36 seconds. Abandoned calls. The telecommunications vendor under contract with USCIS is to collect information on the number of calls abandoned by customers after getting through the interactive voice response system and waiting for a CSR to answer, that is, the number of times that customers hang up the telephone while waiting for a CSR. The performance standard stated in the PRS for this measure involves two factors: half-hour increments and how frequently callers abandon their calls. The standard is that for 85 percent to 95 percent of the half-hour increments measured, 1 percent to 2 percent of the calls are expected to be abandoned before a CSR answers. The contract stated that the contractor would be eligible to earn financial incentive awards if the average monthly performance met or exceeded the standards on a quarterly basis at each call center, and it allowed USCIS to make deductions from payments to the contractor if the average monthly performance fell below the standards. According to the contract, the contractor is not eligible for an incentive award for a particular quarter if one of the performance standards is not met by one call center, and USCIS may make a deduction from payments to the contractor in that case. In addition, USCIS may, at its sole option, elect to include or waive financial incentives as it deems appropriate. In addition to the performance data collected by the contractor’s own quality assurance staff, an independent consulting firm, and the telecommunications vendor, USCIS took two additional steps to measure call center performance for quality assurance purposes. First, to help ensure that the contractor’s scoring of call-quality performance measures was reliable, USCIS used another independent consulting firm to validate the results of the contractor’s efforts by monitoring two calls per month for each CSR. Data were gathered and provided to USCIS on a monthly basis. (See app. IV for additional details on the criteria and methodology used by the independent consulting firm to conduct call monitoring.) Second, in April 2003, USCIS engaged the same independent consulting firm to carry out a “mystery shopper” program to assess the completeness and accuracy of CSRs’ answers to callers. Under this program, an independent consultant places random calls—1,200 each month—to Tier 1 call centers using various scripts provided by USCIS. As of April 2005, the scripts used in these calls covered 32 different scenarios, or types of calls, and 100 new scenarios were being developed. The calls are conducted in English and Spanish. (See app. V for an example of a mystery shopper scenario.) 
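The three telephony measures and the quarterly incentive test reduce to simple statistics over call records. The sketch below follows the PRS bands quoted above; the sample values and the treatment of band boundaries (for example, whether an average answer speed of exactly 36 seconds passes) are illustrative assumptions.

```python
# Minimal sketch of the three telephony measures and the quarterly
# incentive test. PRS thresholds are those quoted above; boundary
# handling and sample values are illustrative assumptions.

def service_level(half_hour_waits, threshold_sec=20.0):
    """Percent of half-hour increments in which at least 80 percent of
    answered calls were picked up within threshold_sec seconds."""
    ok = sum(1 for waits in half_hour_waits
             if sum(w <= threshold_sec for w in waits) / len(waits) >= 0.80)
    return 100.0 * ok / len(half_hour_waits)

def average_speed_of_answer(all_waits):
    """Total delay of all calls divided by the total number of calls."""
    return sum(all_waits) / len(all_waits)

def abandonment(half_hour_abandon_rates, max_rate=0.02):
    """Percent of half-hour increments whose abandon rate was 2% or less."""
    ok = sum(1 for r in half_hour_abandon_rates if r <= max_rate)
    return 100.0 * ok / len(half_hour_abandon_rates)

def quarterly_result(sl_pct, asa_sec, ab_pct):
    """One call center's quarter: all standards met -> incentive-eligible;
    any miss -> subject to a payment deduction (USCIS may waive either)."""
    met = sl_pct >= 80.0 and asa_sec <= 36.0 and ab_pct >= 85.0
    return "incentive-eligible" if met else "subject to payment deduction"

print(average_speed_of_answer([10, 25, 40, 55]))                   # 32.5 seconds
print(quarterly_result(sl_pct=83.0, asa_sec=34.0, ab_pct=90.0))    # incentive-eligible
print(quarterly_result(sl_pct=78.0, asa_sec=41.0, ab_pct=90.0))    # subject to payment deduction
```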
USCIS did not reach agreement with the contractor on how to apply the performance measurement requirements described in the PRS before awarding the performance-based service contract. USCIS suspended all financial incentives, positive or negative, while the parties negotiated this issue over a period of about 16 months without reaching agreement. After negotiations were abandoned, USCIS determined that, for the fourth quarter of 2004, the contractor had failed to meet four of seven performance measures and merited a payment deduction. The contractor disagreed on the grounds that the performance measurements had not been finalized and that changes in call center workloads affected the basis for applying financial incentives. In a separate matter, USCIS failed to ensure that all contractual, regulatory, and GAO standards pertaining to the documentation of the contractor’s performance were fulfilled. The performance measurement requirements described in the PRS were not completely finalized before the contract was awarded. The language referring to “sample” calculations for determining how performance would be measured remained in the contract after it was in force. In commenting on a draft of this report, DHS said that at the time of the contract award, USCIS management believed it was appropriate to let the winning vendor have some input into the performance measurement methodology, since this contract represented a transition to performance-based contracting for call center operations. The negotiations between USCIS and the contractor on this issue began in January 2003 (after a phase-in period) and continued intermittently until April 2004, when they were abandoned. While negotiations were taking place and after they were abandoned, USCIS obtained monthly data relating to the contractor’s performance on the seven performance measures and compared those data to the standards. USCIS considered the measures and standards themselves to be nonnegotiable; the contractor, on the other hand, considered them part of the “sample calculations” and, thus, negotiable. For over 2 years, USCIS did not use any of the resulting performance scores for the purpose of calculating financial incentive awards or payment deductions under the contract because the terms of the PRS remained unresolved between the parties. The contractor maintained that the performance scores were “potential scores” and were to be used by the parties in reaching an agreement on how to structure the PRS. On September 1, 2004, the contracting officer, representing USCIS, sent a letter to the contractor advising that USCIS would begin evaluating the contractor’s performance and determining a financial incentive award or payment deduction for the fourth quarter of the calendar year (October 1 through December 31). USCIS officials told us they decided to take this action because they had concluded that negotiations with the contractor were unlikely to result in an agreement on the PRS. The contractor objected to USCIS’s decision to carry out this evaluation. By letter dated November 29, 2004, the contractor stated that, under the terms of the contract, USCIS could not unilaterally determine the performance measurement requirements because all aspects of the requirements were negotiable, including the performance standards. The contractor further stated that an evaluation of its performance must take into account certain changes that took place in the work required under the contract. For example, the contractor stated that the number of USCIS-provided scripts, containing information for CSRs to address callers’ inquiries, had grown to more than 2,300 pages from approximately 400 script pages in June 2002. 
According to the contractor, these changes significantly increased the average amount of time needed to handle a call and affected the contractor’s ability to meet the performance standards imposed by USCIS. In the contractor’s view, USCIS’s unilateral imposition of performance measurement requirements that did not account for the changed work requirements was inconsistent with Federal Acquisition Regulation (FAR) 16.402(g), which provides that “[i]t is essential that the Government and contractor agree explicitly on the effect that contract changes (e.g., pursuant to the Changes clause) will have on performance incentives.” Nevertheless, by letter dated February 11, 2005, USCIS’s contracting officer notified the contractor of the evaluation results for the period of October through December 2004. The results showed that the contractor met the standards for three of the seven performance measures and did not meet the standards for the other four measures. USCIS determined that, as a result of this performance, payments due to the contractor for services would be reduced. The letter noted that the contractor could submit its own data regarding performance during this period. Following the review of any data submitted, USCIS would take action to make the appropriate payment deduction, waive the payment deduction, or pay an appropriate incentive award. The contractor requested, by letter dated February 25, 2005, that USCIS waive implementation of the financial incentives, both positive and negative. The contractor reiterated its position that USCIS’s unilateral implementation of the performance measurement requirements as currently written in the contract, without sufficient regard for substantial changes to the contract and the changing nature of the program, was not appropriate. The contractor stated that it was ready to resume negotiations on this subject so that fair and equitable financial incentives would be established. The contractor further stated that it had determined the payment deduction was incorrectly calculated by USCIS. USCIS’s contracting officer responded, by letter dated April 15, 2005, that the government would not agree to waive implementation of the financial incentives and that a deduction would be made from the next payment to the contractor. The letter stated that USCIS did not unilaterally create and impose the performance measurement requirements, which were included in the negotiated contract that USCIS and the contractor agreed to. Regarding the contractor’s assertion that the average amount of time needed to handle calls had significantly increased, the letter noted that the performance measurement requirements would apply regardless of the average length of calls at any given time. According to FAR and OFPP guidance on performance-based service contracting, the precise method for measuring performance should have been agreed upon between USCIS and the contractor before the contract was signed and implemented. FAR § 16.401 states that performance-based service contracts should establish “reasonable and attainable targets that are clearly communicated to the contractor.” According to OFPP, performance measurement techniques (i.e., how performance will be assessed to determine whether standards have been met) are essential elements of performance-based service contracting and should be clearly stated. 
In addition, according to OFPP, performance-based service contracts emphasize that all aspects of an acquisition be structured around the purpose of the work to be performed, that appropriate performance quality levels are achieved, that payment is made only for services that meet these levels, and that financial incentives are awarded to encourage quality performance. Although the disagreement between the two parties had not been resolved, USCIS exercised its option to extend the current call center contract for another year, through May 31, 2006, to allow time to solicit and award new call center contracts. The exercise of this option has no effect on the contract’s performance measurement terms, which are the source of the parties’ dispute. USCIS officials said they plan to award new performance-based service contracts for Tier 1 operations to two vendors, with the two vendors fully operational by June 2006, to improve the handling of customers’ calls to Tier 1. USCIS officials told us they intend for the new contracts to include certain changes meant to improve Tier 1 call center operations and to incorporate OFPP guidance on performance-based contracting. USCIS officials told us that, unlike the current contract, the new PRS will clearly specify how contractor performance will be assessed and will not leave any terms open for post-award negotiation. In addition, USCIS officials said the new contracts will include independent call monitoring and the mystery shopper program as performance measurement tools to assess the quality of the Tier 1 CSRs’ responses to customers, including the accuracy and reliability of the information provided. At the time of our review, USCIS officials said that the solicitation was going through DHS’s contract review process, and DHS had not yet issued the solicitation for a new contract containing these changes. DHS said in its comments on a draft of this report that the solicitation was with the DHS Procurement Office for review and issuance. As part of its quality assurance responsibilities under the current contract, USCIS is to keep written records of observations about the contractor’s performance based on periodic evaluations comparing performance data to standards in the PRS. USCIS’s contracting officer’s technical representative (COTR), who is responsible for administering the contract, is to use these written observations to notify the contractor if there are deficiencies—specifically, if the contractor does not meet the performance standards. The contractor is required to sign and date such observations to acknowledge that the COTR apprised it of any deficiencies. USCIS and contractor officials said they met at least quarterly (monthly, since October 2004) to discuss performance, performance data, and other items. USCIS officials said they provided the contractor with documentation containing performance and other data to discuss at these meetings. USCIS officials said some of this documentation identified performance deficiencies. However, contractor officials said they viewed the performance data as “potential scores” to be considered during negotiations. To the extent that USCIS considered the performance data as notification of deficiencies, it did not follow contractual procedures requiring the COTR to obtain the contractor’s signature acknowledging notification of the deficiencies. In addition, neither USCIS nor the contractor kept minutes of these meetings. 
According to FAR § 46.104(c), the government should maintain, as part of the performance records of a contract, suitable records reflecting the nature of its contract quality assurance actions. With respect to any performance deficiencies, the government’s records should include, among other things, the number and type of defects observed and any actions to correct deficiencies. Further, according to GAO’s standards for internal control in the federal government, for an agency to run and control its operations, it must have relevant, reliable information relating to internal events. All transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. This information should be recorded and communicated to management and others within the agency who need it to carry out their responsibilities. GAO’s standards provide a framework for establishing and maintaining internal control and for identifying and addressing major performance challenges. Appropriate and effective internal control is a key factor in helping agencies better achieve program results. The contract also requires the contractor to provide a quality assurance plan. The plan that was developed by the contractor describes the contractor’s approach and strategy for ensuring the delivery of high-quality service. As part of the plan, the contractor is to conduct formal, biweekly internal performance review meetings to help with the identification and correction of performance deficiencies. These meetings are to be attended by contractor and USCIS officials, with contractor staff reporting on quality performance issues, and are to be in addition to the quarterly (now monthly) meetings discussed above. Minutes of the meetings are to identify action items, responsibilities, and solution time frames, and the minutes are to be published for USCIS review. However, a contractor official said that these meetings never took place. According to both contractor and USCIS officials, the quarterly meetings were used to discuss operations and performance and to focus senior management attention on any performance issues. Under FAR § 37.602-2, the government’s quality assurance surveillance plans should include actions to help ensure that the contractor carries out its quality control obligations. By failing to ensure that the contractor held and documented performance review meetings as required by the contractor’s quality assurance plan, USCIS did not meet its quality assurance obligations under FAR § 37.602-2 and GAO’s internal control standards. In addition, USCIS’s failure to obtain the contractor’s written acknowledgment of USCIS-identified performance deficiencies did not meet the notification procedures established by the contract, documentation requirements of FAR § 46.104(c), and GAO’s standards for internal control. In its comments on a draft of this report, DHS noted that the contract was administered by a component of the VA until April 20, 2005, and DHS said the VA was provided with documentation discussed at quarterly and monthly performance assessment meetings between USCIS and the contractor. According to DHS, the lack of a clear understanding between USCIS and the VA regarding their roles contributed to the fact that formal documentation and evaluations were not always properly maintained and formally transmitted to the contractor. 
DHS acknowledged that the agency procuring a service is ultimately responsible for the contract and, thus, USCIS should have clarified its and the VA's roles. USCIS used contractor performance data, including the results of surveys, call monitoring, and the mystery shopper program, to identify opportunities to improve customer service, including improving call-response times, helping CSRs and IIOs better respond to customer inquiries, and managing the flow of calls into call centers. Following are examples of initiatives that USCIS recently implemented, or was planning to implement, as of April 2005. It is too early to assess the impact of these initiatives. USCIS implemented "intelligent call routing" for Tier 1. With "intelligent call routing" in place since February 2004, telephone calls routed from the interactive voice response system to Tier 1 are now routed to the next available CSR, at any of the four Tier 1 contractor-operated call centers. Previously, telephone calls to Tier 1 were routed to the next available CSR in the call center that resided in the same geographic region of the country as the caller. USCIS officials said that by June 2005 they would also have implemented "intelligent call routing" for calls transferred from Tier 1 to the next available IIO at either of the two Tier 2 call centers. USCIS implemented "overflow routing." USCIS started its "overflow routing" initiative in October 2004, enabling certain general call types, identified as "English Other" and "Spanish Other," to be routed directly from the interactive voice response system to Tier 2 USCIS-operated call centers, bypassing the Tier 1 contractor-operated call centers. Previously, all calls handled by IIOs at Tier 2 were first routed from the interactive voice response system to Tier 1, where CSRs then transferred the calls to Tier 2. USCIS officials said they expect the change will result in 1 to 5 percent of all calls being routed directly to Tier 2, which should help when Tier 1 CSRs cannot handle the call volume. USCIS implemented interactive voice response system routing of certain telephone calls to USCIS service centers. USCIS changed its automated interactive voice response system in December 2004 so that certain types of customers' telephone calls—for example, certain issues concerning new permanent residents, cases already approved or denied, and pending cases—are now routed directly to USCIS service centers, bypassing CSRs at Tier 1. Previously, all customers' telephone calls that needed to be handled by USCIS service centers were routed by the interactive voice response system to Tier 1. Then, after talking with the customers, the CSRs referred the customers to the service centers (whose employees have access to case paperwork) via e-mail. CSRs were allowed to transfer customers' telephone calls to service center personnel only when customers requested emergency and expedited handling of applications. USCIS implemented a portfolio management system. Private attorneys, paralegals, and other representatives can use the USCIS Internet Web site to check the status of their clients' immigration cases using a USCIS receipt number. Under the system, USCIS also notifies the representatives via e-mail when a case status changes; for example, when actions are taken, such as the approval or denial of an application. As of April 2005, over 300,000 customers, attorneys, and other representatives had used this system. USCIS said it is planning to implement a referral management system.
Currently, Tier 1 CSRs send, via e-mail, service request referrals to USCIS service centers and local offices for customers who call wanting to change addresses, schedule and reschedule appointments at application support centers, order forms, and resolve problems. After a referral is made, the National Customer Service Center (NCSC) does not know whether the service center or local office responded to the customer in a timely manner or even responded at all. To better monitor this process, USCIS plans to implement a referral management system, with such service request referrals placed in a database and assigned a tracking number. The system is to (1) determine the proper service center or local office to process the referral, (2) assign the case to an adjudicator, (3) update the case on a daily basis, and (4) report once a month on case status. The referral management system is planned to be accessible to customers on USCIS's Internet Web site so they can make and track their own service request referrals. In addition, customers without Internet access are to be able to call on the telephone, and CSRs will access the USCIS Web site and create referrals for them. USCIS plans that the referral management system will be fully operational during the summer of 2005. USCIS is planning a customer service portal on USCIS's Web site. USCIS has a long-term goal of giving customers Internet access to information contained in the "scripts" used by Tier 1 CSRs to answer customers' questions. USCIS plans to establish a customer service portal on the USCIS Internet Web site, providing access to the information. The goal is to let customers with Internet access look up information themselves without having to call NCSC on the telephone, navigate the interactive voice response system, and wait for CSRs to answer. USCIS had not set a time frame for implementing this initiative. Immigration call centers are a vital information referral source used millions of times by immigrants and other interested parties seeking to obtain needed documents, regulatory information, up-to-date status information on immigration-related benefits and applications, and other information. To ensure that it serves its customers effectively and efficiently, USCIS appropriately used a performance-based contract, but its failure to finalize all aspects of the performance requirements before the contract was awarded hampered its ability to exercise performance incentives in the contract. As a result, USCIS lost the opportunity during the life of the contract to help ensure that it received the maximum level of service from the contractor. In addition, USCIS did not meet standards promulgated by federal acquisition regulations, GAO, and the contract itself pertaining to documenting the contractor's performance between 2002 and 2004 and to adequately documenting notification of the contractor when the government perceived deficiencies in its performance. Failure to generate adequate documentation could impair USCIS's ability to conduct future contract negotiations and to preserve a complete and reliable record of contract performance needed to ensure accountability.
To improve USCIS’s efforts for evaluating contractor performance and encourage quality services at call centers, we recommend that the Secretary of Homeland Security require the Director of USCIS take the following two actions: (1) finalize contract terms related to specific performance measurement requirements before awarding new performance-based call center contracts; and (2) maintain readily available written records of performance assessments and performance evaluation meetings with the contractor. DHS and the contractor provided formal comments and technical comments on a draft of this report, which we have incorporated, as appropriate. In its formal comments, DHS generally agreed with our recommendations. DHS said the draft solicitation for the new contracts specifically identifies performance requirements that are non-negotiable. DHS further stated that, as recommended by GAO, written records of performance assessments and performance evaluation meetings will be maintained and readily available for review by all interested parties. In its formal comments, the contractor provided additional language to further clarify this report. The contractor said the report accurately summarizes the complex nature of CIS’s call center program and several challenges created by significant post-award changes to that program. DHS’s and the contractor’s formal comments are shown in appendixes VI and VII, respectively. We are sending copies to the Director of USCIS and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at 202-512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. To determine what performance measures the U.S. Citizenship and Immigration Services (USCIS) established to monitor and evaluate the performance of contractor-operated call centers, we interviewed USCIS headquarters officials in Washington, D.C.; Tier 1 contractor officials in Arlington, Virginia, and at a contractor-operated call center; and an official representing an independent consulting firm under contract to USCIS and located in Fairfax, Virginia. We also collected and analyzed pertinent USCIS and contractor documentation. We collected and analyzed information on the various types of monitoring and evaluation programs used by USCIS, including internal call monitoring, independent call monitoring, customer satisfaction surveys, the mystery shopper program, and telephone call data provided by a telecommunications vendor. To find out how USCIS used the performance measures to evaluate the contractor’s performance, we interviewed USCIS headquarters officials in Washington, D.C., and Tier 1 contractor officials in Arlington, Virginia. We also collected and analyzed pertinent USCIS and contractor documentation. To determine what actions, if any, USCIS took or planned to take to strengthen call center operations, we interviewed USCIS headquarters officials in Washington, D.C., and Tier 1 contractor officials in Arlington, Virginia, and at a contractor-operated call center. We also collected and analyzed pertinent USCIS and contractor documentation. 
We assessed the reliability of telephone call volume data provided to USCIS by a telecommunications vendor, as well as USCIS and contractor staffing data. To carry out our data reliability assessments, we (1) reviewed information about the data, systems that produced the data, and data quality control procedures, and (2) interviewed USCIS and contractor officials knowledgeable about the data as necessary. We determined that the call volume and staffing data were sufficiently reliable for the purposes of this report. We conducted our work between May 2004 and May 2005 in accordance with generally accepted government auditing standards. USCIS and an independent consulting firm jointly developed a telephone survey to measure customer satisfaction with the three levels of NCSC call center service—interactive voice response system, Tier 1 CSRs, and Tier 2 IIOs. To carry out the survey each month, representatives of the independent consulting firm call 375 randomly selected customers. To assess the customers' satisfaction with the CSRs, the representatives read several statements and ask the customers to rate their experiences with CSRs. For the customer satisfaction performance measure required in the contract, USCIS collects and summarizes data on the customers' responses to the four statements below. The customers are asked to rate their agreement with each of the statements using a scale of 1 to 7 (1 is strongly agree and 7 is strongly disagree). 1. The representative seemed to fully understand my questions. 2. The representative was polite. 3. The representative did not rush me. 4. The representative answered my questions promptly. An independent consulting firm scored CSRs on 23 separate quality assurance factors, with each factor scored on a range of 0 to 3 and section scores reported as percentages. [Scoring tables omitted.] In addition to the above, Darryl W. Dutton, Ronald G. Viereck, Brian J. Lipman, Christine F. Davis, Amy L. Bernstein, and Michele C. Fejfar made key contributions to this report.
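To illustrate how the monthly ratings described above might be summarized, the sketch below averages hypothetical 1-to-7 agreement ratings for each of the four statements. The simple per-statement mean is our assumption for illustration; it is not the contract's documented scoring formula, and the response values are invented.

```python
# Illustrative sketch only: averaging hypothetical 1-7 agreement ratings
# per statement (lower is better). This is not the contract's documented
# scoring formula, and the response values below are invented.

STATEMENTS = [
    "The representative seemed to fully understand my questions.",
    "The representative was polite.",
    "The representative did not rush me.",
    "The representative answered my questions promptly.",
]

def mean_rating(ratings):
    """Average the valid 1-7 ratings for one statement."""
    valid = [r for r in ratings if 1 <= r <= 7]
    return sum(valid) / len(valid)

# Hypothetical responses from one month's sample of surveyed customers.
responses = {
    STATEMENTS[0]: [1, 2, 1, 3, 2],
    STATEMENTS[1]: [1, 1, 2, 1, 1],
    STATEMENTS[2]: [2, 3, 1, 2, 4],
    STATEMENTS[3]: [1, 2, 2, 3, 1],
}
for statement, ratings in responses.items():
    print(f"{mean_rating(ratings):.2f}  {statement}")
```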
The U.S. Citizenship and Immigration Services (USCIS) bureau within the Department of Homeland Security (DHS) provides toll-free telephone assistance through call centers to immigrants, their attorneys, and others seeking information about U.S. immigration services and benefits. As the volume of calls increased—from about 13 million calls in fiscal year 2002 to about 21 million calls in fiscal year 2004—questions were raised about USCIS's ability to ensure the reliability and accuracy of the information provided at call centers run by an independent contractor. This report analyzes (1) the performance measures established by USCIS to monitor and evaluate the performance of contractor-operated call centers; (2) how performance measures were used to evaluate the contractor's performance; and (3) any actions USCIS has taken, or plans to take, to strengthen call center operations. USCIS developed seven performance measures intended to assess the performance and overall quality of responses provided by customer service representatives at contractor-operated call centers. These measures include how quickly calls were answered and the accuracy of information provided. The contract between USCIS and its contractor stipulated that the contractor could earn financial incentive awards if the average monthly performance met or exceeded the standards on a quarterly basis at each of four call centers. Conversely, financial deductions could be made if the standards were not met. USCIS did not finalize the terms regarding how the contractor's actual performance would be calculated, or scored, before awarding the contract. This limited USCIS's ability to exercise performance incentives (positive or negative) because the parties could not reach agreement on performance terms. USCIS suspended the use of financial incentives while the parties negotiated the issue. Agreement was not reached after 16 months; nevertheless, USCIS determined that the contractor had failed to meet standards for 4 of the 7 performance measures in the fourth quarter of 2004 and took action to reduce its payments for services. The contractor objected, citing the lack of agreement on the performance measurements and the impact of workload increases, but USCIS disagreed and stated it would reduce payment. In a separate but related matter, USCIS failed to meet contractual, regulatory, and GAO standards pertaining to how the contractor's performance would be documented—especially with respect to any deficiencies. Finally, USCIS exercised its option to extend the call center contract through May 2006, to allow time to solicit and award new call center contracts. USCIS said it intends to finalize performance measurement terms in the new contracts. USCIS used contractor performance data it collected over the course of the contract to identify opportunities to improve customer service and call flow, among other things. Several initiatives were launched as a result.
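The award-or-deduction mechanism summarized above—financial incentives if average monthly performance meets or exceeds the standards on a quarterly basis, deductions if not—can be sketched as follows. This is our illustration only: the standard and scores are hypothetical, and the actual scoring terms were the subject of the dispute described in this report.

```python
# Illustrative sketch only: how a performance-based contract can turn
# monthly measurements into a quarterly award-or-deduction decision.
# The standard and scores below are hypothetical, not terms from the
# USCIS contract or its PRS.

def quarterly_average(monthly_scores):
    """Average the three monthly scores for one quarter."""
    assert len(monthly_scores) == 3
    return sum(monthly_scores) / 3

def incentive_decision(standard, monthly_scores, higher_is_better=True):
    """Return ('award' or 'deduction', quarterly average) depending on
    whether the quarterly average meets the contract standard."""
    avg = quarterly_average(monthly_scores)
    met = avg >= standard if higher_is_better else avg <= standard
    return ("award" if met else "deduction"), avg

# Hypothetical standard: answer at least 90 percent of calls within the
# required time, evaluated quarterly at each call center.
decision, avg = incentive_decision(90.0, [92.1, 88.4, 91.0])
print(f"quarterly average = {avg:.1f} percent -> {decision}")
```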
Medicare Part B generally covers synthetic drugs and biologicals that are not usually self-administered and that are administered under a physician's direct supervision, including in physician offices and hospital outpatient departments. These include injectable drugs (such as influenza, pneumococcal, and hepatitis B vaccines); drugs inhaled through durable medical equipment (DME), such as certain asthma medications; and oral cancer drugs if the same drug is available in injectable form. As with all drugs, Part B drugs can be either single-source or multi-source. Single-source drugs have only one manufacturer. Multi-source drugs have at least two, and often several, versions produced by different manufacturers. While each of these versions will have its own National Drug Code (NDC), Medicare pays a single rate for any NDC associated with a given Healthcare Common Procedure Coding System (HCPCS) code. Part B drugs administered to Medicare beneficiaries are generally purchased by physicians or hospitals. In 2014, Medicare spent approximately $24 billion on these drugs. The majority of these expenditures—approximately $21 billion, or 87 percent—were for drugs paid based on ASP. The remaining 13 percent of expenditures were for drugs paid based on different methodologies. For example, several Part B drugs, including certain vaccines and drugs provided through DME, are paid for on the basis of average wholesale prices (AWP) or reasonable cost and not on the basis of ASPs. Part B ASP drugs accounted for a somewhat smaller share of administrations (63 percent) than of expenditures for all Part B drugs in 2014, because drugs paid based on AWP or reasonable cost, primarily flu, pneumonia, and hepatitis B vaccines, accounted for 26 percent of all administrations of Part B drugs. (See fig. 1.) Over 9 million Medicare beneficiaries received at least one Part B ASP drug during 2014, which accounted for approximately 43 percent of all beneficiaries who received a Part B drug that year. These 9 million beneficiaries were responsible for 20 percent of Medicare's payment for these drugs via cost-sharing requirements, or about $4 billion in 2014. According to statute, drug manufacturers that participate in the Medicaid Drug Rebate Program are required to submit data to CMS on sales of Part B drugs to most U.S. purchasers, including physicians, hospitals, and wholesale distributors, within 30 days of the end of every calendar quarter. Sales must be reported net of rebates, discounts, and other price concessions. CMS officials have stated that most manufacturers participate in the Medicaid Drug Rebate Program. Other manufacturers may voluntarily submit sales price data to CMS. CMS reviews these data, which are typically reported at the NDC level, and calculates payment rates at the HCPCS level. According to CMS officials, the agency then publicly releases the revised quarterly payment rates so that stakeholders can comment on the new rates before they take effect. These officials noted that due to the time it takes for manufacturers to submit the data, for CMS to review the data and update the payment rates, and for the public to review and comment on the revised rates, there is a two-quarter (6-month) lag between the sale and when the payment rate takes effect. CMS produces a web page titled "Medicare Part B Drug Average Sales Price" that provides guidance for drug manufacturers on submitting ASP data.
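As a simplified illustration of the rate calculation described above, the sketch below combines hypothetical NDC-level sales into one volume-weighted ASP for a HCPCS billing code and applies the statutory add-on of 6 percent. CMS's actual calculation converts each NDC's units into HCPCS billing units and applies other adjustments not shown here; the sales figures are invented.

```python
# Simplified sketch of the rate calculation described above: NDC-level
# sales are combined into one volume-weighted ASP for a HCPCS billing
# code, and the statutory add-on (6 percent) is applied. CMS's actual
# calculation converts each NDC's units into HCPCS billing units and
# applies other adjustments not shown; the figures here are hypothetical.

def weighted_asp(ndc_sales):
    """ndc_sales: (units_sold, price_per_unit) per NDC, with sales
    reported net of rebates, discounts, and other price concessions."""
    total_units = sum(units for units, _ in ndc_sales)
    total_dollars = sum(units * price for units, price in ndc_sales)
    return total_dollars / total_units

ASP_ADD_ON = 0.06  # Part B generally pays ASP plus 6 percent.

sales = [(120_000, 14.75), (80_000, 13.90), (45_000, 15.20)]  # 3 NDCs
asp = weighted_asp(sales)
print(f"HCPCS-level ASP = ${asp:.4f}; payment rate = ${asp * (1 + ASP_ADD_ON):.4f}")
```

Because the rate is a single volume-weighted figure per HCPCS code, one manufacturer's missing or inaccurate NDC-level data shifts the rate for every version of the drug billed under that code.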
Manufacturers submit two forms to CMS: the ASP Data Collection Form—an Excel document in which manufacturers insert all relevant sales data—and the ASP Certification Form signed by the manufacturer's CEO or CFO to affirm the accuracy of the submitted data. Where there is no specific guidance in federal statute or regulations regarding how to calculate ASP, CMS has indicated that it allows manufacturers to make reasonable assumptions in their calculations of ASP and to submit these assumptions with the required data. CMS's web page also includes a common e-mail address for manufacturers to send ASP-related questions to the agency. The OIG has conducted two studies related to manufacturer reporting and CMS oversight of ASP data. The first report, published in 2010, found that CMS lacks complete ASP data for certain drugs because not all manufacturers of Part B drugs are required to report ASPs. OIG recommended that CMS consider seeking a legislative change to require all manufacturers of Part B drugs to submit ASPs. CMS did not concur with this recommendation, stating that the President's budget for the upcoming fiscal year did not include any proposals to require manufacturers of Part B drugs to submit ASPs. The second report, published in 2014, further explored this policy and found that at least one-third of the more than 200 manufacturers of Part B drugs included in the study did not submit ASPs for some of their products in the third quarter of 2012, despite being required to do so. An additional 45 manufacturers of Part B drugs were not required to report ASPs that quarter. OIG again recommended that CMS seek a legislative change to directly require all manufacturers of Part B drugs to submit ASPs. CMS again did not concur with this recommendation, stating that the President's budget for the upcoming fiscal year did not include any proposals to require manufacturers of Part B drugs to submit ASPs. However, the agency said it would take the recommendation into consideration in the future. These reports also recommended that CMS develop or implement an automated system for the submission of ASP data to potentially limit the possibility of data entry errors, reduce the amount of time it takes to calculate ASP-based payment amounts and adjust ASP payment limits, and enable CMS to track ASPs with greater ease. CMS concurred with these recommendations. Medicare expenditures were concentrated in a small number of the 551 Part B drugs that were paid based on ASP in 2014. (See fig. 2.) In particular, 6 drugs each had expenditures of over $1 billion and collectively accounted for 36 percent of all expenditures on Part B ASP drugs that year. (See table 1 and, for a list of characteristics associated with the highest expenditure drugs in 2014, see table 6 in app. I.) Beyond the 6 highest expenditure drugs, an additional 43 drugs each had between $100 million and $1 billion in expenditures and collectively accounted for an additional 48 percent of expenditures on Part B ASP drugs. In contrast, 306 drugs (56 percent of all Part B ASP drugs) each had less than $1 million in expenditures and collectively accounted for less than 1 percent of all expenditures on Part B ASP drugs. (For a list of the 50 Part B ASP drugs with the highest expenditures in 2014, see table 7 in app. I.) Administrations of Part B ASP drugs were also concentrated in a small number of drugs in 2014. (See fig. 3.)
In particular, 10 drugs were each administered over 1 million times and collectively accounted for 37 percent of all administrations of Part B ASP drugs that year. (See table 2 and, for a list of characteristics associated with the highest administration drugs in 2014, see table 8 in app. I.) Beyond the 10 drugs with the highest number of administrations, an additional 75 drugs were each administered between 100,000 and 1 million times, and collectively accounted for an additional 51 percent of all administrations of Part B drugs paid based on ASP. In contrast, 187 drugs (34 percent of all Part B ASP drugs) were each administered fewer than 1,000 times and collectively accounted for less than 1 percent of all administrations of Part B ASP drugs. (For a list of the 50 Part B ASP drugs with the highest number of administrations in 2014, see table 9 in app. I.) Few Part B ASP drugs were among both the highest expenditure and the highest administration drugs in 2014. For example, no Part B ASP drug had over $1 billion in expenditures and over 1 million administrations that year. Additionally, of the 102 Part B ASP drugs with either $100 million or more in expenditures or 100,000 or more administrations, only 32 were in both categories. These 32 drugs included all 6 drugs with expenditures over $1 billion, but none of the 10 drugs with over 1 million administrations. The characteristics of drugs associated with the majority of expenditures on Part B ASP drugs tended to differ from the characteristics of drugs associated with the majority of administrations. For example, the majority of Medicare expenditures for Part B ASP drugs in 2014 were for biologics, brand name drugs, drugs made by a single manufacturer, and drugs that came onto the market since 2000. In contrast, the majority of administrations of Part B ASP drugs were for synthetics, generics, drugs made by multiple manufacturers, and drugs that came onto the market prior to 2000. Additionally, the therapeutic categories associated with the largest percentage of expenditures tended to differ from the categories associated with the largest percentage of administrations. However, injections accounted for the majority of both expenditures and administrations. (See table 3.) The majority of expenditures were for drugs with average expenditures per beneficiary over $10,000 and for drugs received by fewer than 100,000 beneficiaries. The majority of administrations were for drugs with average expenditures per beneficiary under $100 and for drugs received by over 100,000 beneficiaries. (See table 4.) Expenditures on and administrations of Part B ASP drugs were generally associated with the same provider characteristics in 2014. In particular, the majority of expenditures and administrations occurred in physicians' offices (rather than hospital outpatient departments or other settings) and in urban areas (rather than suburban or rural areas). Additionally, the highest percentage of both expenditures and administrations generally were for Part B ASP drugs that were prescribed by the same provider specialty: hematology oncology. (See table 5.) CMS takes three main steps to validate that the sales price data reported by drug manufacturers are complete and accurate. First, CMS requires that, before a manufacturer submits a report containing data to CMS, the CEO, CFO, or authorized official of each drug manufacturer attests to the accuracy of the information provided in that report by signing the ASP Certification Form.
Second, according to CMS officials, once CMS receives the sales data from the manufacturer, it performs a series of electronic data checks to assess the completeness of the submitted data. CMS's data checks include checking for missing data or duplicate entries, checking for incorrect product information, and comparing submissions to those of previous quarters. In cases where CMS identifies discrepancies through its data checks, agency officials stated that they attempt to resolve the issue directly with the manufacturer. If CMS is unable to resolve the issue directly with the manufacturer, the agency refers the case to OIG, which determines appropriate enforcement action, if needed. Third, officials from CMS stated the agency holds a 7- to 10-day public comment period where manufacturers and providers have an opportunity to comment on the payment amounts before they are published. CMS officials believe that the steps the agency takes to validate manufacturer-reported sales price data are sufficient, but CMS does not verify that the reported data reflect actual sales prices. Federal standards for internal control call for management to use quality information to achieve its objectives. According to GAO's guidance for assessing the reliability of computer-processed data, completeness and accuracy are the two key components of quality data. CMS officials noted that, since 2009, only one drug manufacturer has incurred civil monetary penalties as a result of OIG's review of manufacturer reporting discrepancies. CMS officials told us that there have been few other issues with drug manufacturers' ASP submissions over the past couple of years and that any issues that did arise were minor. These officials also noted that during the public comment period, they receive few comments from stakeholders. Additionally, CMS's electronic data checks described earlier are consistent with recommendations in GAO's guidance related to verifying the completeness of data. Specifically, examples of GAO's guidance include testing electronic data for missing or duplicate data, looking for values outside of a desired range, and testing relationships between data elements. However, CMS does not take sufficient steps to verify the accuracy of the data. Officials from CMS told us that they do not routinely verify the underlying data from manufacturers either by tracing the data to and from source documents, such as sales invoices, or through CMS's referrals to OIG. The Social Security Act authorizes CMS to survey manufacturers that have Medicaid drug rebate agreements when necessary to verify ASP. However, CMS officials told us that this authority does not allow them to conduct blanket surveys to routinely collect information regarding manufacturers' ASP data beyond what is on the ASP data collection form. CMS officials indicated they may also request that OIG use its authority to audit ASP data submitted by manufacturers. However, CMS has limited such referrals to situations where the agency has identified potential consistent or repeated problems with calculating and reporting ASP data. In situations where CMS requires additional information about the data submission, agency officials stated that the requests are typically for information that could be considered public. Officials from CMS indicated the agency is developing an automated ASP submission system to use with drug manufacturers; however, the new system will not help to ensure the accuracy of the underlying sales price data.
According to OIG, this automated system could limit the possibility of data entry errors, reduce the amount of time it takes to calculate and adjust ASP payments, and enable CMS to track ASPs with greater ease and efficiency. CMS began working on an automated system following a 2010 OIG recommendation. CMS officials told us that the agency is still testing the system and hopes to begin implementation at the end of 2016. Four of the six drug manufacturers we spoke with stated that implementation of an automated submission system would improve the ASP submission process. The two manufacturers that did not believe an automated submission system would improve the ASP submission process already submit their data exclusively via e-mail instead of by mail. CMS officials stated that due to the time it takes for manufacturers to calculate and submit ASP data and for CMS to review the data and update the payment rates, the automated system may not reduce the two-quarter lag between when drugs are sold and when CMS receives all data and updates the payment rates. CMS is unable to assess the accuracy of all drug manufacturers' sales price data because not all drug manufacturers submit these data to CMS. As stated previously, only drug manufacturers with Medicaid drug rebate agreements are required to submit ASP data on a quarterly basis. However, not all manufacturers of Medicare Part B drugs have these agreements; therefore, not all manufacturers are required to submit ASP data to CMS. Further, CMS officials said that the agency lacks the authority to require manufacturers not participating in the Medicaid Drug Rebate Program to submit ASP data. CMS officials also said that most manufacturers of Part B drugs do submit sales price data because they have Medicaid drug rebate agreements or submit the data voluntarily, but not all do. Without complete data from manufacturers that have been assessed for accuracy by CMS, the agency risks setting payment rates based on inaccurate information. This is inconsistent with federal standards for internal control, which call for management to use quality information to achieve objectives. Drugs manufactured by multiple sources are more likely to have inaccurate payment rates than are drugs manufactured by a single source because, according to CMS officials, the payment system provides an incentive for single-source manufacturers to report their data. CMS officials told us that single-source drug manufacturers have an incentive to report ASP data so that health care providers will know Medicare's payment rate for their drug. These officials stated that providers prefer to use drugs with published Medicare payment rates because they know what they will be paid. If the manufacturer of a single-source drug did not submit sales price data, ASP data for that billing code would be unavailable, and CMS would substitute ASP with another metric that might be less accurate. Other metrics include rates published in national pricing compendia such as Truven Health Analytics' RED BOOK or First Databank's National Drug Data File, which publish product information for drugs such as strength, package size, and package quantity. OIG has found that prices published in national pricing compendia do not accurately reflect actual market prices. In contrast, CMS officials told us if a manufacturer did not submit ASP data for a drug that is manufactured by multiple sources, the sales price would still be based on ASP data submitted by the other manufacturers of the drug.
This gives multi-source drug manufacturers less incentive to report ASP data, particularly if the inclusion of their data would result in a lower Medicare payment rate for the drug. To assess the potential impact of manufacturers without rebate agreements that do not voluntarily report ASP data, in its 2014 report, OIG looked at 50 high-expenditure multi-source Part B drugs in the third quarter of 2012. These drugs included those with payment rates that used sales price data from both manufacturers that were required to report their data and manufacturers that voluntarily reported their data. If manufacturers had not voluntarily reported their data, 12 of the 50 drug payment rates would have changed. Payment rates would have increased for 5 drugs (between 3 and 40 percent) and decreased for 7 drugs (between 1 and 49 percent). Payment rates for the remaining 38 drugs would have stayed the same. In 2014, Medicare spent approximately $21 billion on Part B drugs paid based on ASP. The substantial expenditures for Part B ASP drugs underscore how important it is that CMS ensure that the data on which the agency bases Medicare’s payment rates for these drugs are accurate. Federal standards for internal control call for management to use quality information to achieve its objectives. According to GAO’s guidance for assessing the reliability of computer-processed data, completeness and accuracy are the two key components of quality data. CMS conducts certain data checks to assess the completeness of the ASP data submitted by drug manufacturers. However, CMS does not verify the accuracy of the underlying data by tracing the data to and from source documents, such as sales invoices. Because CMS does not verify the accuracy of the underlying data used to determine Medicare payment rates, the resulting payment rates may be inaccurate if drug manufacturers do not report accurate data. CMS is unable to assess the accuracy of all sales price data because the agency does not receive data from all drug manufacturers. Currently, only drug manufacturers with Medicaid drug rebate agreements are required to submit ASP data to CMS. Although agency officials told us that most drug manufacturers have rebate agreements or choose to voluntarily submit ASP data, some manufacturers do not. Federal standards for internal control call for management to use quality information to achieve its objectives. Without complete data from all manufacturers that have been assessed for accuracy by CMS, the agency risks setting payment rates based on inaccurate information. To help the Department of Health and Human Services ensure accuracy in Part B drug payment rates, Congress should consider requiring all manufacturers of Part B drugs paid at ASP, not only those with Medicaid drug rebate agreements, to submit sales price data to CMS, and ensure that CMS has authority to request source documentation to periodically validate all such data. CMS should periodically verify the sales price data submitted by a sample of drug manufacturers by requesting source documentation from manufacturers to corroborate the reported data, either directly or by working with OIG as necessary. We provided a draft of this report for review to HHS and received written comments that are summarized below and reprinted in appendix II. In its comments, HHS agreed with our recommendation. HHS stated that CMS will work with OIG as appropriate regarding collecting source documentation from drug manufacturers and that CMS will take action as it is warranted. 
To fulfill this recommendation, CMS will have to take additional actions relative to what it has done in the past. As we noted in the report, CMS has previously requested that OIG use its authority to audit ASP data submitted by manufacturers when it has identified potential consistent or repeated problems with calculating and reporting ASP data. HHS also noted in its comments that the OIG reviews average manufacturer price (AMP) data for Part B drugs and that CMS has the authority to adjust the ASP-based payment amount in situations where the OIG finds that ASP exceeds AMP by a certain threshold percentage. However, AMP data are also reported by manufacturers and would be inaccurate if the data do not represent actual manufacturer prices. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. Appendix I: Characteristics Associated with the Part B Drugs Paid Based on ASP with the Highest Expenditures and Highest Number of Administrations (2014) [Appendix I tables omitted.] In addition to the contact named above, individuals who made key contributions to this report included Gregory Giusto, Assistant Director; Alison Binkowski; George Bogart; Alexander Cattran; Daniel Lee; Lauren Metayer; Elizabeth T. Morrison; and Aubrey Naffis. Related GAO Products: Medicare Part B: Expenditures for New Drugs Concentrated among a Few Drugs and Most Were Costly for Beneficiaries. GAO-16-12. Washington, D.C.: October 23, 2015. Medicare: Information on Highest-Expenditure Part B Drugs. GAO-13-739T. Washington, D.C.: June 28, 2013. Medicare: High-Expenditure Part B Drugs. GAO-13-46R. Washington, D.C.: October 12, 2012. Medicare Part B Drugs: CMS Data Source for Setting Payments Is Practical but Concerns Remain. GAO-06-971T. Washington, D.C.: July 13, 2006. Medicare Hospital Pharmaceuticals: Survey Shows Price Variation and Highlights Data Collection Lessons and Outpatient Rate-Setting Challenges for CMS. GAO-06-372. Washington, D.C.: April 28, 2006. Medicare: Comments on CMS Proposed 2006 Rates for Specified Covered Outpatient Drugs and Radiopharmaceuticals Used in Hospitals. GAO-06-17R. Washington, D.C.: October 31, 2005. Medicare: Radiopharmaceutical Purchase Prices for CMS Consideration in Hospital Outpatient Rate-Setting. GAO-05-733R. Washington, D.C.: July 14, 2005. Medicare: Drug Purchase Prices for CMS Consideration in Hospital Outpatient Rate-Setting. GAO-05-581R. Washington, D.C.: June 30, 2005.
Medicare Part B covers drugs typically administered by a physician. Medicare pays physicians and other providers for these drugs at an amount generally equal to the ASP of the drug plus a fixed percentage. These payment rates are calculated quarterly by CMS based on price and volume data reported by drug manufacturers. Members of Congress and others have questioned the amount that both Medicare and its beneficiaries spend on Part B drugs. GAO was asked to examine Medicare spending for and utilization of Part B drugs and the accuracy of the sales price data reported by drug manufacturers. This report (1) describes Medicare spending and utilization for Part B drugs that are paid based on ASP, including variations in spending and utilization by provider and drug characteristics, and (2) examines the steps CMS takes to ensure the accuracy of the sales price data reported by drug manufacturers. To describe Medicare spending and utilization for Part B ASP drugs, GAO analyzed 2014 Medicare claims data. To examine the accuracy of ASP data, GAO interviewed CMS, the HHS Office of Inspector General, and drug manufacturers and reviewed related documentation. In 2014, the most recent year for which data were available, the Medicare program and its beneficiaries spent about $21 billion on approximately 46 million administrations of 551 Part B drugs paid based on average sales price (ASP). Six drugs—each exceeding $1 billion in expenditures—accounted for 36 percent of all expenditures on Part B ASP drugs, while a different 10 drugs—each administered over 1 million times—accounted for 37 percent of all administrations. Biologics (drugs made from living entities), drugs without generic versions available, and drugs made by a single manufacturer were associated with the vast majority of expenditures on Part B ASP drugs. In contrast, synthetics (drugs produced from chemical ingredients), drugs with generic versions available, and drugs with multiple manufacturers were associated with the vast majority of administrations. Compared with other types of providers, hematology oncologists were associated with the highest percentage of drug expenditures and administrations. [Highlights table omitted. Source: GAO analysis of Centers for Medicare & Medicaid Services, Food and Drug Administration, and RED BOOK data. GAO-16-594] The Centers for Medicare & Medicaid Services (CMS), an agency within the Department of Health and Human Services (HHS), performs several electronic data checks on the sales price data reported by drug manufacturers each quarter, including checking for missing data or incorrect product information. However, CMS does not routinely verify the underlying data, which is inconsistent with federal internal control standards that call for management to use quality information to achieve its objectives. Without additional verification of the ASP data received from manufacturers, it is possible for the data to be inaccurate, which could result in inaccurate Medicare payment rates. In addition, CMS is unable to use or assess the accuracy of all sales price data because, as directed by statute, only manufacturers with Medicaid drug rebate agreements are required to submit sales price data to CMS. Unless all manufacturers without rebate agreements choose to voluntarily submit sales price data, the payment rates for some drugs will be based on incomplete ASP data or will not be set based on ASP. Congress should consider requiring all manufacturers of drugs paid at ASP to submit sales price data to CMS.
Further, CMS should periodically verify the data submitted by a sample of drug manufacturers by requesting source documentation. HHS agreed with GAO's recommendation and stated that CMS would take action as warranted.
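The kinds of electronic completeness checks this report describes—missing data, duplicate entries, incorrect product information, and quarter-over-quarter comparisons—can be illustrated in simplified form as follows. The field names, the 11-digit NDC test, and the 50 percent change threshold are our assumptions for this sketch; CMS's actual edit checks are not published in this form.

```python
# Simplified illustrations of the kinds of electronic completeness
# checks described in this report. The field names, the 11-digit NDC
# test, and the 50 percent change threshold are assumptions made for
# this sketch; CMS's actual edit checks are not published in this form.

def check_submission(rows, prior_quarter):
    """rows: dicts with 'ndc', 'units', and 'asp' for the current
    quarter; prior_quarter: {ndc: asp} from the previous submission."""
    problems = []
    seen = set()
    for row in rows:
        ndc = row.get("ndc")
        if not ndc or row.get("units") is None or row.get("asp") is None:
            problems.append(("missing data", row))
            continue
        if ndc in seen:                          # duplicate entries
            problems.append(("duplicate NDC", ndc))
        seen.add(ndc)
        if len(ndc.replace("-", "")) != 11:      # malformed product code
            problems.append(("incorrect product information", ndc))
        prior = prior_quarter.get(ndc)           # quarter-over-quarter check
        if prior and abs(row["asp"] - prior) / prior > 0.5:
            problems.append(("large change from prior quarter", ndc))
    return problems

rows = [{"ndc": "12345-6789-01", "units": 1000, "asp": 14.75},
        {"ndc": "12345-6789-01", "units": 500, "asp": 14.75},   # duplicate
        {"ndc": "99999-0000-02", "units": 200, "asp": None}]    # missing ASP
print(check_submission(rows, {"12345-6789-01": 14.00}))
```

As the report notes, checks of this kind can confirm that a submission is complete and internally consistent, but they cannot show whether the reported figures match the manufacturer's actual sales records; that requires tracing the data to source documents such as sales invoices.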
Since World War II, many employers have voluntarily sponsored health insurance as a benefit to employees for purposes of recruitment and retention, and some have also offered these benefits to their retirees. The federal tax code provides incentives for employers to subsidize health benefits because their contributions can be deducted as a business expense from their taxes, and these contributions are also not considered taxable income for employees. Employer-sponsored health benefits are regulated under the Employee Retirement Income Security Act of 1974 (ERISA), which gives employers considerable flexibility to manage the cost, design, and extent of health care benefits they provide. However, ERISA established certain requirements for employers, including that they provide health plan participants and beneficiaries with a summary plan description (SPD) specifying the retirees' rights and the circumstances under which the health plan can be modified or terminated. Concern over the costs associated with retiree health benefits was compounded in 1993 when the Financial Accounting Standards Board adopted FAS 106, requiring employers to report annually on the liability represented by the promise to provide retiree health benefits to current and future retirees. While FAS 106 did not affect an employer's cash flow, there has been concern that listing this future liability could affect companies' stock prices because the reporting of projected retiree health care costs affects the overall statement of financial profitability. Some companies have said that FAS 106 requirements lead to reductions in reported income and shareholder equity and are a reason for reducing retiree health benefits. As a means of reducing their reported liability as well as controlling rising costs associated with retiree health benefits, some employers have passed a share of cost increases to their retirees in the form of higher premiums, deductibles, or copayments. Some other employers have reduced benefits or simply ceased to sponsor coverage. In the absence of employer-sponsored retiree health benefits, retirees have certain coverage alternatives, but may find them to be expensive or even unaffordable. Individuals under 65 may rely on the individual insurance market or may, in limited instances, be eligible for continuation coverage from a former employer. For example, individuals whose jobs provided health benefits that ended at retirement may continue temporary coverage for up to 18 months under provisions of the Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA). For eligible individuals who exhaust available COBRA coverage, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) guarantees access to the individual market, regardless of health status and without coverage exclusions, but does not restrict the premiums that may be charged to older or less healthy individuals. For retirees 65 years or older, Medicare is typically the primary source of health insurance coverage. Under traditional Medicare, eligible individuals may apply for Part A, which helps pay for care in hospitals and some limited skilled nursing facility, hospice, and home health care, and may purchase Part B, which helps pay for doctors, outpatient hospital care, and other similar services.
Medicare beneficiaries may rely on private retiree health coverage from a former employer or union or individually purchased Medicare supplemental insurance (known as Medigap) to cover some or all of the costs not covered by Medicare, such as copayments, coinsurance, deductibles, and most outpatient prescription drug costs. Depending on where they live, individuals may have the option of obtaining Medicare coverage on a fee-for-service basis or from a managed care or other private plan offered since 1998 through the Medicare+Choice program. Many beneficiaries have been attracted to these plans because they typically have lower out-of-pocket costs than fee-for-service plans and offer services not covered by traditional Medicare, such as routine physical exams and prescription drugs. Nearly 6 million people, or approximately 15 percent of Medicare's 39 million beneficiaries, were enrolled in a Medicare+Choice plan as of January 1, 2001, with recent plan withdrawals causing some beneficiaries to return to the traditional Medicare program. Despite a strong economy and relatively small premium increases during the latter part of the 1990s, available evidence from employer benefit surveys and employer benefit consultants we interviewed suggests the decline in employer-sponsored retiree health insurance has not reversed since 1997—the last year covered by our previous report. Two widely cited employer benefit surveys estimate that just over one-third of large employers, and a smaller portion of small employers, offered health coverage to some of their retirees in 2000; however, one of these surveys shows the proportion of large employers offering coverage is the same as in 1997, whereas the other indicates a further small decline in coverage since 1997. Other data indicate that the percentage of retirees with employer-sponsored health insurance remained relatively stable during this time period. Still, many employers continuing to offer coverage have reduced the terms of coverage by tightening eligibility requirements, increasing the share of premiums retirees pay for health benefits, or increasing copayments and deductibles—thus contributing to a gradual erosion of benefits. Employer sponsorship of retiree health benefits in 2000 was, at best, the same as in 1997 or, at worst, continued to erode gradually, according to two surveys. Surveys conducted by William M. Mercer, Incorporated, indicate that the portion of firms sponsoring health insurance for early retirees fell slightly from 41 percent in 1997 to 36 percent in 2000. Similarly, employer sponsorship of health benefits for Medicare-eligible retirees fell from 35 to 29 percent during this period. As shown in figure 1, this continues a gradual decline that began in the early 1990s. A second survey—conducted by the Kaiser Family Foundation and Health Research and Educational Trust (Kaiser/HRET)—estimates that about 37 percent of large employers sponsored retiree health benefits in 2000—the same percentage as in 1997, although with some year-to-year fluctuation. Like the Mercer survey, the Kaiser/HRET survey reflects a significant decline in coverage since 1991. Year-to-year fluctuations or gradual changes in these surveys' results need to be interpreted with caution.
These surveys are widely used and based on random samples designed to be representative of a broader employer population, but neither may have the precision needed to distinguish small changes in coverage from year to year because of the response rates and the number of firms surveyed. For example, only about 45 percent of the 1,887 firms in the Kaiser/HRET sample responded to the survey in 2000. Similarly, about 50 percent of the sampled firms responded to the Mercer survey, which included 2,797 respondents. Thus, year-to-year differences may have resulted from differences in those employers that chose to respond to the surveys. Also, while neither Mercer nor Kaiser/HRET reported the size of these sampling errors, Kaiser/HRET's 1999 and 2000 reports indicated that 1-year differences in the percentage of large employers offering retiree health coverage since 1998 were not statistically significant. Large firms are more likely to sponsor health insurance for retirees than are smaller firms. For example, Kaiser/HRET reported that just over one-half of firms with 5,000 or more employees sponsored retiree health insurance in 2000, compared to only about 9 percent of firms with fewer than 200 employees. According to the Mercer data, the percentage of firms with 500 to 999 employees that sponsored retiree health insurance in 2000 was about 40 points lower than for those with 20,000 or more employees—about 30 percent or less compared to about 70 percent. The percentage of retirees obtaining health benefits through a former employer has remained relatively stable since 1997. According to our analysis of the Census Bureau's Current Population Survey (CPS), in 1999, about 37 percent of retirees aged 55 to 64 had employer-sponsored coverage in their own names from former employers, as did about 26 percent of elderly retirees (see figure 2). Since 1994, these figures have varied by only 1 or 2 percentage points for early retirees and even less for elderly retirees. Year-to-year differences are too small to be statistically significant. This stability in coverage may exist in part because employers tend to reduce coverage for future rather than current retirees. Other than terminating benefits, employers have adopted several strategies to limit their liability for retiree health costs, and these mechanisms contribute to an erosion in health benefits available to retirees. Some employers have restricted eligibility for retiree health insurance to certain employees, such as those hired before a certain date, thus reducing their future liability for these benefits without causing a large disruption in health coverage for those who are retiring now or will retire soon. According to Mercer's data, about 5 percent of large employers sponsored retiree health insurance in 2000 for only selected employees, typically excluding employees hired more recently. Employers have also attempted to better manage or control their health care expenditures by increasing the share of health care costs for which the retiree is responsible. This approach encompasses a range of activities and includes employer efforts to increase the retirees' deductibles, copayments, and premium share; cap the employer's overall expenditures; or pay a fixed amount per retiree for health care. For example, more than 10 percent of employers reported having recently increased retirees' potential out-of-pocket costs for deductibles, coinsurance, and copayments.
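Returning to the survey-precision caveat discussed above, the rough scale of the sampling uncertainty can be checked with a standard margin-of-error calculation for an estimated proportion, sketched below. The calculation assumes simple random sampling, which these surveys only approximate, and it understates the uncertainty for subgroup estimates such as large employers; the respondent counts come from the survey descriptions above.

```python
# Back-of-the-envelope 95 percent margin of error for a survey-estimated
# proportion. Assumes simple random sampling, so it ignores design
# effects, weighting, and nonresponse bias and understates uncertainty
# for subgroup estimates; respondent counts are from the text above.

import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95 percent margin of error for proportion p from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

n_respondents = round(0.45 * 1887)          # Kaiser/HRET: ~45% of 1,887 firms
moe = margin_of_error(0.37, n_respondents)  # 37 percent offering coverage
print(f"about +/- {100 * moe:.1f} percentage points on {n_respondents} responses")
```

A margin on this order (roughly plus or minus 3 percentage points) is consistent with the reports' statement that 1-year differences of a few points were not statistically significant.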
Kaiser/HRET and Mercer report that 16 percent and 25 percent of employers, respectively, increased retirees' share of premium contributions during the last 2 years. According to Mercer data, about 40 percent of large firms that offer early retiree health benefits now require these retirees to pay the entire premium—an increase of 8 or more percentage points since 1997. Likewise, the percentage of firms requiring Medicare-eligible retirees to pay the entire premium has increased 7 or more points during this time period. In other cases, employers have established caps on their overall expenditures for future retiree health benefits. The 1999 Kaiser/HRET survey estimated that about 35 percent of all large firms had recently capped their total projected contribution for retiree health benefits. It is not clear how employers will ensure that spending does not exceed the caps or how coverage will be affected. Benefit consultants we interviewed stated that employers typically set caps prospectively at a level higher than current spending. In some cases, employers that find they are approaching the cap for retiree health spending will raise it. Some employers are considering—but not yet widely implementing—a more fundamental change by shifting to a defined contribution plan, under which an employer directly provides each retiree with a fixed amount of money to purchase coverage, either in the individual market or through a choice of plans offered by the employer. The individual is then responsible for the difference between the employer's contribution and the selected plan's total premium. In addition to the potential cost savings, employers report that a defined contribution plan (1) could be administratively simpler (if the employer simply provided a payment retirees could use to purchase individual coverage) or (2) could allow them to offer retirees a wider choice (if the employer provided multiple plan offerings and retirees could purchase individual coverage as well). Thus far, few employers have adopted a defined contribution approach. Benefit consultants we interviewed said that many employers would prefer to move toward a defined contribution approach, but noted several issues that would need to be addressed before making such a fundamental change. For example, a recent study by PricewaterhouseCoopers stated that employers are uncertain about (1) the availability of insurance products that would meet their objectives for employee choice with a defined contribution approach, (2) retirees' readiness to assume the responsibility for managing their health benefits, and (3) the potential loss of the existing tax exclusion for the employee if the employer shifts to a defined contribution. Contractual bargaining agreements with union plans and concerns among employees and retirees about major changes in their health benefits have also limited employers' ability to shift to such an approach. Employer consultants also indicated that a defined contribution approach would highlight differences in health benefit costs among employees. Differences in how much an employer pays for an employee's health benefits are not readily apparent with defined benefit plans because each employee is offered the same set of benefits at the same premiums. Such differences, however, could become apparent and potentially contentious under a defined contribution approach.
For example, if each employee were given the same fixed amount for health insurance, those who were older, less healthy, in need of family coverage, or living in a more expensive area could pay significantly more than other employees to purchase comparable coverage. Alternatively, if employees were given a risk-adjusted fixed amount, those who were older or otherwise more costly would receive a larger payment than would others. Various factors suggest that an erosion in employer-sponsored retiree health insurance may continue. Most immediately, employers are experiencing the resurgence of inflation in their premium costs and thus could look for ways to further control costs to remain competitive, especially if the slowing of the economy continues. Moreover, if the Medicare program establishes an outpatient prescription drug benefit, some employers may reexamine their need to offer retiree health coverage. In addition, a recent court case validating a claim of age discrimination under federal law could have significant implications for employer-sponsored retiree health coverage. In the longer term, as the number of retirees relative to active workers increases with the aging of the baby boom generation, concerns over employers' retiree health costs are likely to grow. The resumption of large health insurance premium increases and a general economic slowing could exacerbate the decline in employer-sponsored health insurance for retirees. Survey data suggest that health insurance premiums for employer-sponsored coverage are beginning to rise at an increasing rate, and these increases will likely be reflected in larger future reported liabilities. As shown in figure 3, premium increases exceeded the general inflation rate from 1990 through 1994 but fell below it from 1995 through 1997. Because the actual level of premium inflation was lower than what had been anticipated for this latter period, some firms reduced their projected FAS 106 liabilities, with some even showing increasing profits as a result of their adjusted liabilities for retiree health benefits. Beginning in 1998, however, premiums again began to rise faster than general inflation, with premium increases about 5 percentage points above general inflation in 2000. Premium increases have occurred among all major insurance types, including health maintenance organizations (HMO), preferred provider organizations (PPO), and traditional indemnity plans. The strength of the overall economy may also affect whether employers provide retiree health benefits. Employment remains at near-historic high levels, which could make employers hesitant to reduce employee benefits, since doing so potentially could harm their recruitment and retention in a tight labor market. However, if economic growth and employment levels decline, as economic indicators are starting to show, employers may be more willing to reevaluate salary and benefits to determine the combination that is most effective in recruiting and retaining employees. The strong stock market during the 1990s also provided some employers with high rates of return on pension and other assets that could be used to cover some retiree health benefit costs. ERISA requires employers to prefund their future pension benefit liabilities for retirees, but not their retiree health benefits. Thus, employers are unlikely to have significant investment income to fund retiree health benefits directly.
However, some employers have transferred some of the excess pension assets generated by investment earnings to finance their retiree health benefits. This option to finance retiree health benefits could be curtailed as the rising stock market seen in the 1990s levels off. Further, recently proposed Internal Revenue Service regulations that clarify employers' ability to transfer surplus assets from a defined benefit pension plan to a retiree health benefit plan would prevent an employer that does so from subsequently making significant reductions in the number of retirees covered or the cost of such coverage. Recent and proposed changes to Medicare are also leading employers to reexamine their design of retiree benefits that supplement Medicare. Notable developments include withdrawals of health plans participating in the Medicare+Choice program and proposals to add prescription drug coverage to Medicare. A Medicare prescription drug benefit could significantly lower the cost of providing retiree health coverage but may affect employers' interest in doing so. Prescription drugs are typically the largest component of costs for employer-sponsored retiree health benefits for Medicare-eligible enrollees. The recent withdrawals of some health plans participating in Medicare+Choice could affect some employers that had anticipated savings in their retiree health benefit costs and had encouraged employees to join these plans. Medicare+Choice plans typically offer health benefits that are not available through traditional Medicare but are generally included in employer-sponsored Medicare supplemental coverage, such as prescription drugs and reduced cost sharing. Furthermore, many Medicare+Choice plans have historically charged enrollees small or no premiums. The 2000 Mercer survey indicates that 43 percent of large employers that provide retiree health coverage offer a Medicare+Choice HMO, and that 11 percent of Medicare-eligible retirees are enrolled in one of these plans. Some employers encouraged employees to enroll in Medicare+Choice plans by lowering their premium contributions or enhancing benefits. However, benefit consultants we interviewed report that some employers are concerned about recent Medicare+Choice plan premium increases and withdrawals. Mathematica Policy Research, Inc., reports that Medicare+Choice premiums more than doubled from an average of $6 per enrollee per month in 1999 to $14 in 2000 and are expected to increase further in 2001. Since 1999, more than 200 plans have fully terminated their Medicare+Choice contracts, reduced their service areas, or announced plans to reduce their participation in 2001. As Medicare+Choice plans drop out of the market, some employers are left to find alternative coverage for retirees to whom they had promised benefits. The effects of a Medicare prescription drug benefit, if enacted, are less certain but potentially significant. More than 40 percent of Medicare beneficiaries had prescription drug coverage from a private supplemental plan in 1996, and three-quarters of them received this prescription drug coverage from employer-sponsored plans. According to benefit consultants' reports and some employers we interviewed, prescription drugs typically represent 40 to 60 percent of employers' retiree health costs for Medicare-eligible enrollees and have been the fastest-growing element of health costs, increasing by 17 percent or more during the last year.
Thus, adding a prescription drug benefit to Medicare could lower or make more predictable employers' costs, encouraging some employers to retain retiree health benefits. Conversely, the enhanced Medicare benefit could reduce the value employees place on employer-sponsored retiree health benefits, making it easier for employers to reduce or eliminate coverage. Benefit consultants and recent studies indicate that employers' responses to Medicare coverage of prescription drugs could vary depending on the prescription drug benefit design implemented, for example, the coverage limits included and the beneficiary cost sharing required. One study evaluating two general proposals estimated that employers would have significant cost savings and likely would retain supplemental prescription drug coverage for retirees to complement an outpatient prescription drug benefit. However, any savings that might actually be realized depend on the design features that Congress ultimately enacts and on employers' and beneficiaries' responses. According to employer benefit consultants, an August 2000 court ruling raises concern among some employers and could potentially accelerate the decline of retiree health benefits, although its actual effect is uncertain at present. The Third Circuit Court of Appeals, which has jurisdiction for Pennsylvania, New Jersey, Delaware, and the Virgin Islands, held that Medicare-eligible retirees have a valid claim of age discrimination under the Age Discrimination in Employment Act (ADEA) when their employers provide them with health insurance coverage inferior to that provided to retirees not yet eligible for Medicare. In this case, Erie County, Pennsylvania, had offered Medicare-eligible retirees an HMO under contract with Medicare that had several features that were more restrictive than the point-of-service plan available to those retirees not yet Medicare-eligible, including a more limited choice of physicians and required primary care physician authorization for medical services. The Third Circuit decided that Medicare-eligible retirees were treated differently because of age, but that Erie County might not be in violation of the ADEA if the health plans provided to Medicare-eligible retirees are equal in either benefits or costs to the plans offered to retirees under age 65. The Third Circuit has sent the case back to the District Court for it to determine whether the county's treatment of pre- and post-age-65 retirees, under their respective plans, meets either the equal cost or equal benefit requirement under ADEA. The implications of the Erie County decision for other employers remain uncertain. While only about 12 percent of employers offering retiree health coverage enroll Medicare-eligible enrollees in an HMO—the issue raised in the Erie decision—many other employers make further distinctions between the health benefits provided to their retirees based on their eligibility for Medicare. Also, some employers provide retiree health benefits only for early retirees and not for Medicare-eligible retirees. Some benefit consultants have said that this decision, if adopted by other federal courts, could lead some employers to make changes to their retiree health benefits so that benefits for Medicare-eligible retirees are no more restrictive than those offered other retirees, in some cases further eroding the level of employer-sponsored retiree health benefits.
These changes could include eliminating retiree health benefits; reducing benefits to the lowest common level for all retirees; offering a Medicare supplemental plan that, combined with the traditional Medicare program, is at least as generous as benefits provided to pre-Medicare-eligible retirees; or paying retirees the same defined contribution to purchase retiree health coverage whether or not they are Medicare-eligible. In the past, retiree benefit litigation has focused not on age discrimination but on employers' ability to modify or terminate retiree health benefits. Since ERISA provides employers considerable flexibility to manage the cost, design, and extent of health care benefits they provide, federal courts have generally ruled in favor of the employer when challenged over termination of the plan or changes in retiree health benefits if the employer had included the right to change benefits in plan documents or collective bargaining agreements. Nearly all companies reserve the right in plan documents to modify health benefits for current and future retirees. See appendix II for an overview of the case law history regarding retiree health benefits. Over the next 30 years, both the number and proportion of Americans potentially affected by a decline in employer-sponsored retiree health insurance will increase, whether or not additional employers drop this coverage. Elderly and near-elderly individuals together will represent more than one-fourth of the population of the United States in the year 2011—the year when the first of the baby boomers will turn 65 years old—compared to one-fifth of the current population. As shown in figure 4, the number of near-elderly individuals will increase by 75 percent by 2020, and the number of elderly will double by 2030. Thus, employers will not only have a larger number of retirees for whom to potentially provide health coverage, but comparatively fewer active workers to subsidize these benefits. This declining base of active workers to support more retirees could make it more difficult for many employers to maintain retiree health benefits. Federal laws guarantee access to coverage to certain individuals who lose group coverage. However, the coverage options available to retirees whose former employers reduce or eliminate health coverage, or never offered it, may be limited. Affected retirees may seek to purchase coverage on their own as individuals—either an individual insurance market product for those under 65 or a Medicare supplemental plan for those 65 or older. However, depending on their demographic characteristics and health status, retirees may encounter difficulty obtaining or affording comprehensive plans. Although federal laws, such as COBRA and HIPAA, guarantee some individuals leaving employer-sponsored group health plans access to continued coverage or to a product in the individual market, these laws may offer only limited protections to many retirees who lack access to employer-sponsored health benefits. Individuals whose jobs provided health benefits that ended at retirement may continue temporary coverage for up to 18 months under COBRA, but COBRA may be an expensive alternative because the employer is not required to pay any portion of the premium. Also, COBRA coverage is generally not available to individuals whose employers terminate health insurance after they retire.
Likewise, HIPAA's group-to-individual portability provision guarantees access to at least two individual insurance policies, regardless of health status and without exclusions, to eligible individuals leaving group coverage. States comply with this provision by using either the federal rules—which require carriers to guarantee access to certain insurance policies to eligible individuals—or an alternative mechanism. Under an alternative mechanism, states may, within broad federal parameters, design other approaches, such as a state high-risk pool, to provide eligible individuals with a choice of coverage. Depending on the approach a state takes to comply with HIPAA and the extent to which it restricts premium rate variation in the individual market, the premiums these individuals face may be substantially higher than prices charged to healthy or younger individuals and may be cost-prohibitive to many retirees. Although these laws are limited in the protections they afford individuals without access to employer-sponsored health benefits, they may facilitate the transition of some retirees from employer-based coverage to coverage in the individual market. Although federal law provides some retirees with guaranteed access to certain coverage, others may encounter difficulty obtaining or affording coverage, especially since health insurance carriers often consider a retiree's health status in making coverage decisions, and many retirees report poorer health. Near-elderly and elderly individuals are the most likely of any age group to report fair or poor health. The CPS indicates that more than one-fifth of near-elderly and one-third of elderly individuals reported fair or poor health in 1999, compared to about 14 percent of 45- to 54-year-olds. Moreover, as shown in table 1, the retired among these populations were more likely to report poorer health status than those who were employed. For retirees under 65, the individual insurance market, which about 7 percent of the near-elderly population relied on as their primary source of coverage in 1999, may be an option for some individuals until they reach Medicare eligibility. However, in most states, access to the individual market is not guaranteed, and individuals may encounter difficulty obtaining comprehensive plans at affordable prices, or any plans at all. The problems in purchasing plans may be exacerbated because retirees who lose employer-sponsored coverage and individually purchase private health insurance become responsible for the entire premium rather than the share they paid for employer-sponsored coverage. Further, except for some self-employed persons and certain individuals with medical expenses exceeding 7.5 percent of adjusted gross income, the federal tax code offers no subsidies for the individual purchase of private health insurance. Unlike the employer-sponsored market, where the price for coverage is based on the risk characteristics of the entire group, premium prices in the individual markets of most states are based on characteristics of each applicant, such as age, gender, geographic area, tobacco use, and health status. Even for persons with similar health, premium prices can vary significantly. For example, carriers anticipate that the likelihood of requiring medical care increases with age. Consequently, individuals between 55 and 64 in the individual market of most states pay considerably more than a 30-year-old for the same coverage.
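To illustrate how such applicant-based rating can compound, the following sketch applies a simple multiplicative rating model of the kind carriers use in states without rating restrictions. All factor values and the base rate are hypothetical; they are not drawn from the carriers or states we contacted.

# A simplified multiplicative rating model; every number here is hypothetical.
BASE_MONTHLY_PREMIUM = 150.00  # assumed base rate for a 30-year-old nonsmoker

AGE_FACTORS = {30: 1.0, 45: 1.6, 55: 2.2, 60: 2.8}  # premiums rise steeply with age
AREA_FACTORS = {"low_cost": 0.9, "high_cost": 1.2}  # geographic adjustment
TOBACCO_FACTOR = 1.25                               # surcharge for tobacco use

def monthly_premium(age: int, area: str, tobacco: bool) -> float:
    factor = AGE_FACTORS[age] * AREA_FACTORS[area] * (TOBACCO_FACTOR if tobacco else 1.0)
    return BASE_MONTHLY_PREMIUM * factor

print(monthly_premium(30, "low_cost", tobacco=False))   # 135.00
print(monthly_premium(60, "high_cost", tobacco=False))  # 504.00, nearly 4 times as much

A state such as New Jersey, which bars rate variation based on individual characteristics, would in effect require the same rate for both applicants; a state such as New Hampshire would permit only the age factor.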
For group policies, older individuals usually pay the same amount as younger members of the group. Table 2 demonstrates the difference in premiums charged by carriers we contacted to applicants based solely on age for the same comprehensive health plan. About 20 states have passed legislation that limits the amount by which individual market carriers can vary premium rates or the characteristics they may use to vary these rates, but substantial variation exists among these states. For example, Minnesota allows individual market carriers to vary premiums for differences in individual characteristics such as occupation, age, and geographic area; New Hampshire allows carriers to modify premium rates only for differences in age; and New Jersey does not allow carriers to vary rates on the basis of any individual characteristics. In states where no restrictions apply, a carrier may also engage in medical underwriting, whereby it evaluates the health status of applicants to determine whether it will charge a higher premium rate, exclude an existing health condition from coverage, or deny coverage altogether. For example, individuals with serious health conditions such as heart disease are almost always denied coverage. Other, non-life-threatening conditions, such as chronic back pain, may also be excluded from coverage. In contrast, under a group plan, individuals with these conditions could not be denied coverage nor be required to pay a higher premium than others in the plan, and specific conditions could only temporarily be excluded from coverage. Table 3 provides examples of how several large individual market carriers treat non-HIPAA-eligible individuals with certain health conditions in states that do not prohibit medical underwriting. Federal law provides certain guarantees to ensure that retirees over 65 have access to Medicare supplemental policies in the event that an employer eliminates or reduces coverage; however, the coverage alternatives available to these individuals may be limited, less comprehensive, or more expensive. For example, a retiree over 65 receiving supplemental coverage through a typical private, employer-sponsored plan may receive coverage for a number of benefits, including prescription drugs. If the employer eliminated this coverage, the affected retiree could seek to purchase alternative coverage on his or her own through the Medigap market. However, under federal law, these individuals would be guaranteed access without medical underwriting to only 4 of the 10 standardized Medigap policies available in most states. None of these four plans includes prescription drug coverage. Access to other Medigap plans, including those with limited prescription drug coverage, could depend on the retiree's health and the carrier's willingness to offer coverage. Thus, retirees could end up with less comprehensive coverage than they received from their former employers. Further, in cases where the employer had contributed the majority or all of the cost of the Medicare-eligible retiree's health plan, the retiree will be responsible for the full premium price. Retirees who had obtained employer-sponsored coverage through a Medicare+Choice plan could potentially face similar challenges in terms of limited choice and coverage and higher costs in the event that health plans were no longer available, such as when a Medicare+Choice plan withdraws from the market.
Regardless of how they lose their employer-sponsored coverage, purchasing Medigap coverage may be a costly alternative for many retirees. Table 4 shows examples of premiums for several popular Medigap plans in selected states. Premium increases and forecasts for a potential economic slowdown could pose concerns for many employers and may make employer-sponsored benefits vulnerable to further erosion. In the longer term, these factors, coupled with the potential for Medicare reforms and an increasing number of aging baby boomers, may produce even more uncertainty and cost pressures for employers. Consequently, as the number of retirees without employer-based coverage increases, retirees, particularly those in poorer health, may encounter difficulty finding affordable alternative health coverage. We provided a draft of this report to the Department of Labor and several expert reviewers for comments. The reviewers provided technical comments that we incorporated as appropriate. As agreed with your office, unless you announce the report's contents earlier, we plan no further distribution of it until 30 days after its issue date. We will then send copies to the Honorable Elaine Chao, Secretary of Labor; the Honorable Michael McMullan, Acting Administrator of the Health Care Financing Administration; and other interested congressional committees and members and agency officials. We will also make copies available to others on request. Please call me at (202) 512-7118 if you have any questions. Another contact and major contributors are listed in appendix IV. In conducting our study, we reviewed available employer survey data, analyzed the March supplements of the Census Bureau's 1995 to 2000 Current Population Survey, reviewed applicable laws and court decisions pertaining to changes in employer-sponsored coverage, obtained individual insurance market premiums from carriers, and interviewed employee benefit consulting firms and several large employers. We conducted our work from June 2000 through February 2001 in accordance with generally accepted government auditing standards. For information on the extent to which employers offer health coverage to retirees as well as the conditions under which coverage is made available, we relied on private employer benefit surveys, specifically those of (1) the Health Research and Educational Trust (HRET) sponsored by the Kaiser Family Foundation (and formerly produced by KPMG Peat Marwick) and (2) William M. Mercer, Incorporated (which were formerly produced by Foster Higgins). These surveys have more current or comprehensive information on retiree health benefits than do existing surveys conducted by the federal government. Also, these surveys are distinguished from a number of other private ones not only by their content but also by their large random samples, which allow their results to be generalized to a larger population of employers. Neither survey, however, reports sufficient information about its sampling errors to determine the precision of its estimates, although the Kaiser/HRET survey notes that year-to-year changes in the percentage of employers offering retiree health benefits have not been significant since 1998. The Kaiser/HRET surveys are based on samples of employers with three or more employees selected from a Dun and Bradstreet list of private and public employers. For some retiree health benefit questions, the Kaiser/HRET survey limits its reported data to employers with 200 or more employees.
The Kaiser/HRET surveys' sample size was about 1,800 in 1993 and 1,887 in 2000, with response rates of 55 percent and 45 percent, respectively (see table 5 for additional information on the Kaiser/HRET sample by firm size). The Mercer/Foster Higgins surveys are based on samples of employers with 10 or more employees selected from the Dun and Bradstreet database for private firms and the Census of Governments for government agencies. For some retiree health benefit questions, the Mercer survey limits its reported data to employers with 500 or more employees. The Mercer survey's sample size was about 3,676 in 1993, with a response rate of 78 percent. In 2000, Mercer's database contained 2,797 responses from its random sample—a response rate of about 50 percent. We relied on the Census Bureau's March supplement of the Current Population Survey (CPS) for information on the demographic characteristics of retirees and their access to insurance. The survey is based on a sample designed to represent a cross-section of the nation's civilian noninstitutional population. In March 2000, about 60,000 households were sampled for the survey, and about 47,000 of them, containing approximately 94,000 persons 15 years of age or older, were interviewed. The total response rate for the 2000 CPS March supplement was about 86 percent. Because the CPS is based on a sample, any estimates derived from the survey are subject to sampling errors. A sampling error indicates how closely the results from a particular sample would be reproduced if a complete count of the population were taken with the same measurement methods. To minimize the chances of citing differences that could be attributable to sampling errors, we highlight only those differences that are statistically significant at the 95 percent confidence level. The following provides more detail on how some of the CPS questions are phrased and how the responses are categorized, including some clarifications and limitations. The CPS asks whether a respondent was covered by employer/union-sponsored, Medicare, Medicaid, private individual, or certain other types of health insurance in the last year. Thus, the 2000 CPS asked what coverage an individual might have had in 1999. Until recently, individuals were not asked directly whether they were uninsured, but were deemed to be so if they denied having any of the above sources of coverage. As a result, the CPS is believed to have slightly overestimated the number of people who are uninsured. Beginning in 2000, the CPS insurance questions are being revised so that individuals who report no health insurance are specifically asked if they are uninsured; however, the Census Bureau has not yet reported the responses to this question. Another limitation of the CPS insurance questions is that they do not ask how long an individual had each source of insurance or whether the individual was covered through any source(s) at the time of the interview. Thus, the CPS considers a person to be insured even if he or she was covered for only 1 day in the past year, and regardless of whether the person was insured on the day of the interview. However, some individuals may respond with their current insurance status rather than their coverage for the past year. Because some people may receive coverage from several sources, we prioritized the source of insurance individuals reported to avoid double counting. That is, if individuals reported having coverage from two or more kinds of insurance, we assigned them to one type based on a hierarchy. Specifically, employer-sponsored coverage was considered primary to other sources of coverage for individuals less than 65 years of age, and respondents were classified as having employer-sponsored coverage even if they also had other types of coverage. The other types of health insurance were prioritized in the following order: Medicare, Medicaid, military/veterans, and individual insurance. For people 65 years of age or older, we first determined whether an individual had Medicare and then prioritized any remaining coverage in the following order: employer-sponsored, Medicaid, military/veterans, and individual insurance.
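This assignment rule reduces to a simple priority lookup. A minimal sketch follows; the field names and category labels are ours, not the Census Bureau's.

# Priority orders described above: employer-sponsored coverage is primary for
# persons under 65; Medicare is determined first for persons 65 or older.
UNDER_65 = ["employer", "medicare", "medicaid", "military_veterans", "individual"]
AGE_65_PLUS = ["medicare", "employer", "medicaid", "military_veterans", "individual"]

def assign_coverage(age: int, reported: set) -> str:
    # A respondent reporting several kinds of insurance is counted once,
    # under the highest-priority source, to avoid double counting.
    for source in (UNDER_65 if age < 65 else AGE_65_PLUS):
        if source in reported:
            return source
    # Respondents reporting no source were deemed uninsured.
    return "uninsured"

print(assign_coverage(62, {"employer", "individual"}))  # employer
print(assign_coverage(67, {"employer", "medicare"}))    # medicare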
The CPS also asks whether employer-sponsored insurance is provided "in their own name" or as a dependent of another policyholder. We primarily focused on whether retired individuals had employer-sponsored health insurance coverage in their own names because this coverage can most directly be considered retiree health coverage from a former employer. The CPS questions that we used for employment status are similar to those on insurance status. Respondents are considered employed if they worked at all in the past year and not employed only if they did not work at all during the past 12 months. We reviewed applicable laws and court decisions pertaining to changes in employer-sponsored coverage. Appendix II presents additional information on the results of this review. We contacted health insurance carriers in certain states with limited rating restrictions to obtain premiums for individual market policies available to applicants who were 30 and 60 years old. Similarly, we contacted several state insurance departments to obtain premium prices of Medigap policies available to eligible individuals. From carriers, we also obtained information on the kinds of health conditions that may be excluded from coverage or for which an applicant may be denied coverage altogether. For additional information on current and prospective changes to employer-sponsored retiree health benefits, we interviewed and obtained documents from several global employee benefits consulting firms. In addition, we contacted selected large employers for information on the kinds of changes they had made to their retiree health benefits as well as the factors that had led to these changes. Although employers often provide health benefits to retirees, they are not required to do so. However, employers that provide retiree health benefits are responsible for acting consistently with certain administrative and fiduciary requirements established by the Employee Retirement Income Security Act of 1974 (ERISA). In most retiree health benefit litigation, retirees have sought to restore health benefits that have been reduced or eliminated by alleging that the employer breached representations made about the quality, extent, and duration of retiree health benefits. Courts generally have ruled that an employer can modify or terminate health care benefits provided to retirees if the employer specifically had reserved that right in health benefit documents or collective bargaining agreements.
A recent Third Circuit Court decision, which focused on whether differences in health benefits provided to Medicare-eligible retirees and retirees not yet eligible for Medicare violated the Age Discrimination in Employment Act (ADEA), could influence employer decisions on whether to continue retiree health benefits. Employer-sponsored retiree health benefits are considered welfare benefits under Title I of ERISA. To ensure a uniform federal law governing employee benefit plans, ERISA generally preempts all state law as it may pertain to employee benefit plans covered under its jurisdiction. Under ERISA, private employers who choose to provide retiree health benefit plans must give plan participants and beneficiaries a summary plan description (SPD) describing their rights and obligations, and are responsible for acting consistently with certain administrative and fiduciary requirements. The SPD, which must be written in a manner intended to be understood by the average plan participant, specifies retirees' rights and the circumstances under which the health plan can be modified or terminated. In addition, ERISA establishes fiduciary standards to protect employee benefit plan participants and beneficiaries from plan mismanagement. Generally, these standards require fiduciaries to act with the care, skill, and diligence of a prudent person in protecting plan participants and beneficiaries. Federal courts generally have ruled that an employer can modify or terminate retiree health care benefits based on the fact that the employer specifically had reserved that right in health benefit documents or collective bargaining agreements. Challenges to maintain or restore these benefits largely have been unsuccessful. Generally, retirees cannot rely on oral communications or representations that benefits would be maintained for life or without reduction. ERISA requires that every plan be established and maintained under a written instrument. Thus, courts look to plan documents, including the terms of the SPD, to determine if the plan precludes an employer from modifying or terminating benefits. Courts, however, are divided on whether the reservation clause must be contained in the SPD. Several courts have held that, inasmuch as the SPD is an employee's primary source of information regarding employment benefits, employees are entitled to rely on the descriptions in the summary. However, at least one appellate court has ruled that an employer reserved the right to amend or terminate health benefits if the reservation clause is in other plan documents, even if it is not mentioned in the SPD. Retirees receiving health benefits under collective bargaining agreements have fared only slightly better than salaried retirees in litigation. Absent a finding that the parties intended that the health benefits were to be maintained for the retiree's life or some period beyond the expiration of the agreements, courts generally view these benefits as ending at the expiration of the agreements. In one of the earliest collectively bargained contract cases, UAW v. Yard-Man, Inc., the court noted that any right to lifetime benefits must be based on the contract. The contract contained the promise that the company will provide insurance to retired employees, which reasonably could be construed either as a reference to the nature of retiree benefits or as creating a benefit continuing beyond the life of the agreement.
The court resolved the ambiguity by looking to other provisions of the collective bargaining agreement for evidence of intent and an interpretation in accord with the entire document. From that examination, the court concluded that the parties had intended to create insurance benefits that continued beyond the life of the collective bargaining agreement. The court noted that retiree benefits were permissive, not mandatory, subjects of collective bargaining, and that "it is unlikely that such benefits, which are typically understood as a form of delayed compensation or reward for past services, would be left to the contingencies of future negotiations." The court characterized retiree health benefits as "status" benefits carrying with them "an inference that they continue so long as the prerequisite status is maintained." The Yard-Man case served to spur some, but not all, courts into concluding that collective bargaining agreement language that appeared to require the continuation of retiree health benefits should require employers to provide those benefits. The First, Fourth, Sixth, and Eleventh Circuits have followed the "inference" standard first articulated in Yard-Man. The Fifth Circuit has questioned the inference. The Eighth Circuit has rejected the inference that employees engaged in collective bargaining are forgoing wages in consideration for retiree health benefits. The Seventh Circuit has also rejected the inference altogether, observing that the courts in this circuit do not distinguish between collective bargaining agreements and ERISA plans for this purpose. Claims of some retirees that modification or termination of their retiree health benefits constitutes a breach of fiduciary duty have, by and large, been denied. However, the Supreme Court articulated a standard for fiduciary liability in certain limited instances, finding that an employer acted as a fiduciary when it intentionally misled employees about the future and security of benefits. The Third Circuit has detailed four elements retirees must demonstrate to succeed in a breach of fiduciary duty claim: proof of fiduciary status, misrepresentations by the company, company knowledge of the confusion created, and resulting harm to the employees. The decision in Erie County Retirees Association v. County of Erie raises a new issue in evaluating retiree health benefits and could affect an employer's continued provision of these benefits. Erie County selected a health plan for Medicare-eligible retirees that limited choice of a primary care physician and reimbursed for services, except emergencies, only if authorized by the primary care physician. However, unlike a traditional indemnity plan, there were no deductibles and few or no copayments. For former employees not yet Medicare-eligible, the county selected a hybrid point-of-service plan under which a retiree could choose an HMO option (and accept its benefits and limitations) or a traditional indemnity option. The Medicare-eligible retirees filed suit against Erie County, contending that the health coverage offered to them was inferior to that offered to retirees under 65 and that they therefore were discriminated against under section 4(a) of the ADEA. The Third Circuit ruled that Erie County treated its Medicare-eligible retirees differently from other retirees with respect to their compensation, terms, conditions, or privileges of employment because of age, establishing a claim under the ADEA.
The court also ruled that, under the act, the employer could provide different benefits to Medicare-eligible retirees only if (1) it provided benefits equal to those provided to retirees not yet eligible for Medicare or (2) its costs for Medicare-eligible retirees and retirees not yet eligible for Medicare were equal. The case was sent back to the trial court for a determination on the county's compliance with this "equal benefit or equal cost" rule. The 10 standardized Medigap policies, called plans A through J, differ by the benefits they provide. However, all 10 plans include the same "basic benefits," including Part A hospitalization coinsurance (days 61 to 90), lifetime reserve coinsurance (days 91 to 150), 365 extra days of hospital care, the first 3 pints of blood or equivalent quantities of packed red blood cells per calendar year that Medicare Parts A and B do not cover, and Part B coinsurance (20 percent). Individuals can purchase a Medigap plan with additional benefits, although the extent to which the 10 plans offer these various benefits differs. (Table 6 illustrates benefit differences among the three plans for which we obtained premium rates.) Plan F is the most popular Medigap plan. According to a HCFA official, plans C and F together represent over one-half of all Medigap sales. Plan H is one of the three standardized plans that include a limited prescription drug benefit. Under Medigap's special enrollment rules, eligible individuals have guaranteed access to four plans, including plans C and F. In contrast, access to plan H may be subject to medical underwriting. In addition to the staff member named above, Susan Anthony, Carmen Rivera-Lowitt, and Mark Vinkenes made key contributions to this report. Paula Bonin provided computer programming for the analysis of the CPS, and Dayna Shah and Roger Thomas provided a legal review of relevant statutes and court decisions. Medicare+Choice: Plan Withdrawals Indicate Difficulty of Providing Choice While Achieving Savings (GAO/HEHS-00-183, Sept. 7, 2000). Medigap: Premiums for Standardized Plans That Cover Prescription Drugs (GAO/HEHS-00-70R, Mar. 1, 2000). Prescription Drugs: Increasing Medicare Beneficiary Access and Related Implications (GAO/T-HEHS/AIMD-00-100, Feb. 16, 2000). Private Health Insurance: Progress and Challenges in Implementing 1996 Federal Standards (GAO/HEHS-99-100, May 12, 1999). Private Health Insurance: Declining Employer Coverage May Affect Access for 55- to 64-Year-Olds (GAO/HEHS-98-133, June 1, 1998). Implementation of HIPAA: State-Designed Mechanisms for Group-to-Individual Portability (GAO/HEHS-98-161R, May 20, 1998). Retiree Health Insurance: Erosion in Retiree Health Benefits Offered by Large Employers (GAO/T-HEHS-98-110, Mar. 10, 1998). Retiree Health Insurance: Erosion in Employer-Based Health Benefits for Early Retirees (GAO/HEHS-97-150, July 11, 1997). Private Health Insurance: Millions Relying on Individual Market Face Cost and Coverage Trade-Offs (GAO/HEHS-97-8, Nov. 25, 1996). Employer-Based Health Plans: Issues, Trends, and Challenges Posed by ERISA (GAO/HEHS-96-167, July 25, 1995).
In 1999, nearly 10 million retired people aged 55 or older relied on employer-sponsored health insurance as either their primary source of coverage or as a supplement to their Medicare coverage. Some of these persons are concerned about the continued availability of employer-sponsored coverage. Premium increases and forecasts for a potential economic slowdown could further erode employer-sponsored benefits. In the long term, these factors, coupled with the potential for Medicare reforms and the rising number of aging baby boomers, may produce even more uncertainty and cost pressures for employers. Consequently, as an increasing number of retirees lack employer-based coverage, those in poorer health may have difficulty finding affordable alternative health coverage.
Because of FEMA's failure to establish basic upfront validation controls over registrants' identity and address information, we estimate that FEMA made approximately $1 billion of improper and potentially fraudulent payments based on invalid registrations. This represents 16 percent of all individual assistance payments for hurricanes Katrina and Rita. The improper and potentially fraudulent payments included cases where individuals and households used invalid SSNs, used addresses that were fictitious or not their primary residence, and received duplicate payments on previously submitted registrations. These improper payments based on phony or duplicate registration data were not restricted to the initial expedited assistance payments that we previously reported on, but also included payments for rental assistance, housing repair, and housing replacement. For example, rental assistance payments were made to registrants who claimed a post office box and a cemetery as damaged property addresses. In fact, as part of our ongoing forensic audit, FEMA continues to provide rental assistance to GAO based on registrations that contained fictitious identities and bogus damaged addresses. In one case, FEMA even sent GAO a check for expedited assistance after an inspector could not confirm that the property existed and FEMA had decided not to provide housing assistance to this registration. Our projection likely understates the total amount of improper and potentially fraudulent payments since our examination of sample payments focused only on invalid registrations and did not include other criteria, such as insurance policies, that may make registrants ineligible for IHP payments. Based on our statistical sample, we estimate that 16 percent of all payments were based on invalid registrations. We considered a registration invalid if it contained an invalid identity or invalid address information, or if it was paid on the basis of duplicate registration information. Some registrations failed more than one attribute. We drew our statistical sample from a population of 2.6 million payments made in the wake of hurricanes Katrina and Rita, totaling over $6 billion through mid-February 2006. Based on these results, we project that FEMA made about $1 billion in assistance payments based on improper or potentially fraudulent registrations. The 95 percent confidence interval associated with our estimate ranges from a low of $600 million to a high of $1.4 billion in improper and potentially fraudulent payments. Table 1 shows the attributes we tested, the estimated failure rate for each attribute, and the overall projected failure amount. As shown in table 1, some registrations failed more than one attribute; therefore, the total number of registrations that failed our attribute tests is less than the sum of the failures of each attribute. For example, all payments made to registrations containing bogus damaged property addresses also failed the primary residence test because the registrants could not have lived there at the time of the disaster. Additional details on the 39 registrants in our sample where we found a problem are as follows: Payments to Registrants Whose Damaged Property Address Was Not Their Primary Residence – Twenty-six payments failed the primary residence test. These include individuals who had never lived at the damaged property, did not live at the damaged property at the time of the disasters, or used bogus property addresses on their registrations.
We made these determinations after reviewing publicly available records, conducting site visits, and interviewing current residents and/or neighboring residents. We provide additional details related to failures in this attribute in table 2. One registrant received $2,000 in expedited assistance, $2,358 in rental assistance, and more than $15,000 in personal property replacement. This registrant originally claimed damage at a street address several houses away from the damaged property address currently in FEMA's database; at some point in the disaster assistance process, the registrant changed the damaged property address. No physical inspection occurred at the damaged property, and the personal property payment was based on geospatial data due to the level of devastation in the area. GAO reviews of publicly available information and credit report data showed that the registrant had never lived at the damaged property address for which she was paid. Another registrant used a valid physical property as the damaged address to receive three payments for expedited assistance, rental assistance, and personal property replacement. GAO audit and investigative work found no evidence that the individual ever lived at the property. After receiving the payments, the registrant withdrew the application without ever having a physical inspection performed or returning the disaster payments to FEMA. A third registrant used a damaged property in Kenner, Louisiana, as a primary residence to qualify for one expedited assistance payment and two rental assistance payments; the registrant did not live at the property at the time of the disaster, and the owner of the property told us that the registrant had moved out of the damaged property a month prior to hurricane Katrina. A fourth registrant used a damaged property as a primary residence to receive one expedited assistance and two rental assistance payments, but residents at the property had never heard of the registrant. A fifth registrant used a post office box in McIntosh, Alabama, as the damaged property address to receive expedited assistance and rental assistance; the local postal inspector stated that the post office box was linked to other individuals associated with known fraudulent activity. Payments to Duplicate Registrations—12 other payments in our sample failed because they were made to registrants whose damaged property addresses and current addresses had previously been submitted under other registrations on which payments had already been made. For example, one sample registrant submitted a registration containing the same damaged and current property addresses as those used previously by another registrant. Both registrations received rental assistance payments of $2,358 in September 2005. Payments to Registrations with Bogus Property Addresses – Three payments in our sample were made to registrations containing bogus property addresses. For example, we found that one individual used several pieces of bogus information to receive expedited assistance. Specifically, the registrant used an SSN that was valid, but the name did not match the name in records maintained by the Social Security Administration. The registrant also used a damaged property address in the 3000 block that our on-site inspection determined to be invalid, as street numbers on that street only went up to the 1000s. After the initial payment, the registration was withdrawn voluntarily by the registrant.
In effect, this registrant was able to use completely bogus information to receive $2,000 from FEMA and then withdraw the registration to avoid further scrutiny. Payments to Registrations Containing Invalid Social Security Numbers — Two of the payments in the sample were made to individuals who used invalid SSNs (e.g., SSNs that have never been issued or SSNs that did not match the name provided on the registration). For example, one individual used an SSN that had never been issued to receive FEMA payments for expedited and rental assistance. Overall, we observed that 17 of our sample failures (44 percent) were related specifically to expedited assistance payments. The high level of expedited assistance-related failure was expected because these payments needed to be made quickly and, typically, prior to a physical inspection of the damaged property. However, we found that the other 22 failures (56 percent) were related to rental assistance and personal and real property repair and replacement payments. In its response to a draft GAO report, FEMA represented to us that all nonexpedited assistance payments, including the $2,358 housing assistance payments, were subject to much more stringent requirements. Specifically, FEMA represented that the registrants had to demonstrate that they occupied the damaged property at the time of the disaster. However, the 22 failures we found indicate that these requirements were not effective in preventing improper and potentially fraudulent registrations from receiving nonexpedited assistance payments. Our estimate likely understates the total amount of improper and potentially fraudulent payments because we did not test our samples for all potential reasons why a disaster assistance payment could be fraudulent or improper. For example, our testing criteria did not include reviewing whether registrants had insurance policies that covered hurricane damages, which may have made them ineligible for IHP payments. We also did not test whether FEMA inspectors accurately assessed the damage to each sampled damaged property, or whether the registrants were displaced from their homes, an eligibility factor for rental assistance. During the course of our work, we found that these problems affected some of our sampled payments and, therefore, these payments may be improper or potentially fraudulent. However, because the problems did not relate to identity and address information, they passed our testing criteria. For example, an individual in our statistical sample provided a valid SSN and lived in a declared disaster area. However, the individual informed GAO that he did not incur any hurricane-related damage. Despite this fact, the individual received $2,000 in expedited assistance. We did not test whether registrants received duplicate benefits from other FEMA programs, such as free hotel lodging and trailers, which would have resulted in FEMA paying duplicate housing benefits to the same registrant. Later in this testimony, we provide examples where registrants received free FEMA-paid hotel rooms in addition to rental assistance. Finally, our estimate includes payments that FEMA has already identified for potential recoupment. Given the considerable amount of potentially fraudulent and improper payments identified in our statistical sample, it is not surprising that FEMA continued to provide rental assistance payments to GAO investigators based on bogus registrations.
In one instance, rental assistance was paid even after a FEMA inspector was unable to find the damaged property. Similarly, our sample testing and data mining work also identified additional examples of payments made on the basis of bogus information. In our previous testimony, we reported that we were able to obtain $2,000 expedited assistance checks from FEMA using falsified identities, bogus property addresses, and fabricated disaster stories. FEMA has continued to provide us with additional disaster-related assistance payments even after FEMA received indications from various sources that our registrations may be bogus. GAO has not cashed these checks and plans to return them to the Department of the Treasury upon the conclusion of our work. The following provides details of two of our undercover operations: Case #1 relates to a registration submitted by GAO for hurricane Rita that cited a bogus address in Louisiana as the damaged property. In October 2005, GAO received notice that the inspector assigned to inspect the property was not able to find the house despite numerous attempts to verify the address with the phone book, the post office, and a physical inspection. The registration was subsequently returned to FEMA by the inspector and coded as withdrawn because no contact was made with the registrant. Even though GAO never met with the inspector to prove that the damaged property existed, FEMA sent GAO a check for $2,000 in early 2006. Case #2 relates to a GAO disaster registration for an empty lot in Louisiana for hurricane Katrina. Although the damaged property address was bogus, FEMA notified GAO that an inspection was performed and confirmed that the property was damaged. However, FEMA stated that the registration could not be processed because FEMA was unable to corroborate that the registrant lived at the damaged property. GAO subsequently submitted a fictitious driver's license that included the bogus address, which FEMA readily accepted. Based on the fictitious driver's license, FEMA issued GAO a $2,358 rental assistance check, as shown in figure 1. Subsequent to FEMA issuing the $2,358 check, a Small Business Administration (SBA) inspector who was responsible for inspecting the damaged property in evaluation of a potential SBA loan reported that the property did not exist. Although SBA discovered that the property was bogus, FEMA issued another rental assistance check to GAO, bringing the total rental assistance on this bogus registration to about $6,000. We found that the discrepancy between FEMA's result (which confirmed that the property existed) and SBA's result (which showed that the property did not exist) occurred because FEMA did not conduct a physical inspection of the property but instead used geospatial mapping to determine losses. We have previously testified regarding potentially fraudulent case studies we uncovered through data mining and investigative techniques. The potential fraud in those cases was hundreds of thousands of dollars. We have continued our data mining work to find additional examples where FEMA made payments, sometimes totaling over $100,000, to improper or potentially fraudulent registrations, including payments made to registrants who claimed cemeteries and post office boxes as damaged property addresses. Table 3 provides several additional examples of improper and potentially fraudulent payments. The following provides illustrative information for three of the cases.
Case number 1 involves 8 individuals who claimed several different damaged property addresses but the same current address, a single apartment. Public record searches also determined that only 2 of the 8 individuals actually lived at the current address. Four individuals were members of the same household who shared the same damaged property address. However, the 4 individuals each received one expedited and one rental assistance payment. FEMA criteria specified that members of the same household who were displaced to the same location should be entitled to only one IHP payment. According to public records, the other 4 individuals were not living at the address claimed as damaged at the time of the hurricane. Case number 2 involves an individual who used 13 different SSNs—including one of the individual's own—to receive payments on 13 registrations. The individual claimed 13 different damaged property addresses and used one single current address to receive FEMA payments. According to publicly available records, this individual had no established history at any of the 13 properties in Louisiana, Mississippi, and Alabama that the individual claimed as damaged. The individual received approximately $139,000, consisting of 8 expedited assistance payments, 4 rental assistance payments, and 14 other payments, including 3 payments of $10,500 each and 3 payments ranging from over $12,000 to over $17,000 for personal property replacement. Further audit and investigative work indicates that 8 of the 13 addresses did not exist or had no public ownership records. Case number 4 involves a registrant who used the address of a cemetery to make an IHP claim. Specifically, the registrant used a damaged property address located within the grounds of Greenwood Cemetery, in New Orleans, Louisiana, to request disaster assistance from FEMA. Public records show no record of the registrant ever living in New Orleans. Instead, public records indicate that for the past five years, the registrant has resided in West Virginia at the address provided to FEMA as the registrant's current address. As discussed previously, one statistical sample item we tested related to an improper and potentially fraudulent payment FEMA made to an individual who received expedited and rental assistance as a result of using a post office box as a damaged property address. According to the Postal Inspector, this post office box was also linked to individuals who are associated with fraudulent activity. In total, we found that FEMA made over 2,000 payments totaling about $5.3 million to registrants who provided a post office box as their damaged residence. While not all payments made to post office boxes are improper or potentially fraudulent, the number of potentially fraudulent payments could be substantially reduced if FEMA put in place procedures instructing registrants to provide the actual street address of the damaged property when claiming disaster assistance. FEMA paid millions of dollars to over 1,000 registrants who used names and SSNs belonging to state and federal prisoners for expedited and housing assistance. FEMA guidelines specify that eligibility for disaster assistance is predicated on the registrant being displaced from their primary residence due to the disaster, thus having need for shelter. These eligibility criteria should have generally excluded prisoners incarcerated throughout the disaster period.
Given the weaknesses we identified earlier related to the number of individuals who claimed damages based on invalid property addresses, we cannot ascertain whether FEMA properly verified that these registrations were valid and therefore deserving of IHP payments. The following are three cases where prisoner identities were used to improperly receive IHP payments. Case 1 involves a convicted felon, housed in a Louisiana prison from April 2001 to the present, who registered for IHP assistance by telephone. The registrant made a FEMA claim using a post office box address in Louisiana as his damaged property address to qualify for IHP payments for expedited assistance, rental assistance, and personal property replacement. Two of these payments were made via checks sent to the address he falsely claimed as his current residence, and the final payment was sent via electronic funds transfer (EFT) to someone who also listed the same current address on the checking account. FEMA paid over $20,000 to the registrant even though the damaged property address on the registration was a post office box address and the registrant was incarcerated throughout the disaster period. Case 2 involves a registrant who has been incarcerated in a Louisiana state penitentiary since February 2005. Several weeks after the disaster, the registrant applied by telephone for individual disaster relief assistance claiming a Louisiana address. Based on his registration information, FEMA paid the inmate over $14,000 in checks mailed to an address in Texas that he listed as his current address, and an EFT was sent to his checking account. Payments included expedited assistance, rental assistance, and personal property replacement funds. Case 3 involves a registrant who has been incarcerated in a Mississippi correctional facility since 2004. The registrant used his name and SSN over the telephone to apply for and receive $2,000 in expedited assistance and $2,358 in rental assistance. The individual listed his correct current address, at the prison, to receive these payments. Following hurricane Katrina, FEMA undertook massive efforts to house individuals and households who were displaced by the hurricane. Among other efforts, FEMA provided hotel accommodations to individuals who were at that time displaced across the United States. We found that although FEMA was responsible for paying hotel costs, FEMA did not require hotels to collect registration information (such as FEMA registration identification numbers or SSNs) on individuals to whom it provided hotel accommodations. Without this information, FEMA was not able to identify individuals who were housed in hotels and, thus, was unable to determine whether rental assistance should be provided to individuals to whom the federal government was providing free lodging. As a result, FEMA made rental assistance payments that covered the same period of time that the registrant was staying at a FEMA-paid hotel. Table 4 provides examples of some of these cases. Because the hotels were not required to collect identification numbers, we were unable to determine the number of individuals who received these duplicate benefits. However, as illustrated in table 4, our data mining identified a number of individuals housed in FEMA-paid hotels who received more than one rental assistance payment.
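Had FEMA required hotels to collect registration numbers at check-in, reconciling hotel stays against rental assistance payments would have been a straightforward matching exercise. The following is a minimal sketch of such a check; the record layouts and identifiers are hypothetical, not FEMA's actual data structures.

from dataclasses import dataclass
from datetime import date

@dataclass
class HotelStay:
    registration_id: str    # FEMA registration number, had it been collected
    check_in: date
    check_out: date

@dataclass
class RentalPayment:
    registration_id: str
    period_start: date
    period_end: date
    amount: float

def overlaps(stay: HotelStay, payment: RentalPayment) -> bool:
    # Two date ranges overlap if each begins before the other ends.
    return stay.check_in <= payment.period_end and payment.period_start <= stay.check_out

def flag_duplicate_housing(stays, payments):
    # Flag rental assistance paid for a period the registrant spent in a FEMA-paid hotel.
    return [(s, p) for s in stays for p in payments
            if s.registration_id == p.registration_id and overlaps(s, p)]

# Illustrative data: a registrant paid $2,358 in rental assistance for a period
# spent entirely in a FEMA-paid hotel room.
stays = [HotelStay("912345678", date(2005, 10, 1), date(2005, 12, 15))]
payments = [RentalPayment("912345678", date(2005, 10, 1), date(2005, 11, 30), 2358.00)]
for stay, payment in flag_duplicate_housing(stays, payments):
    print(f"Registration {payment.registration_id}: ${payment.amount:,.2f} rental "
          f"assistance overlaps hotel stay {stay.check_in} to {stay.check_out}")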
Without an effective means of reconciling individuals in FEMA hotels with those receiving rental assistance payments, FEMA may have wasted taxpayer dollars by paying twice for housing assistance to hurricane victims.

FEMA did not establish proper accountability for debit cards. As a result, FEMA disbursed about $1.5 million of taxpayer money for over 750 debit cards that FEMA cannot establish went to disaster victims. In addition, as reported previously, we continued to find cases where recipients purchased goods and services that did not meet serious disaster-related needs as defined by federal regulations. FEMA lacked controls for accounting for debit cards issued, resulting in the loss of accountability for over 750 debit cards valued at about $1.5 million. The lack of controls over debit cards is particularly troubling given that debit cards are, in essence, cash that can be used to purchase goods and services.

In September 2005, JPMorgan Chase was initially paid approximately $22.7 million for the 11,374 cards that the bank believed were issued to FEMA registrants. However, we found that, prior to our inquiries beginning in November 2005, neither FEMA nor the bank had reconciled the actual number of cards distributed with the number of cards for which payment was made. Following our inquiries, both JPMorgan Chase and FEMA began to reconcile their records to the debit cards issued. JPMorgan Chase performed a physical count of the cards remaining to identify the number of cards distributed, determining that it had distributed 10,989 cards, not 11,374. Upon identification of the 385 undistributed debit cards, JPMorgan Chase refunded $770,000 to FEMA for these cards. FEMA attempted to reconcile the distributed cards to the cards recorded in its disaster recipient database. As of May 26, 2006, FEMA could account for only 10,608 of the 10,989 cards that JPMorgan Chase claimed it had distributed. As a result, FEMA cannot properly account for 381 debit cards, worth about $760,000.

Since initially paying JPMorgan Chase $22.7 million, FEMA has expanded the use of debit cards as a payment mechanism for subsequent IHP payments to some registrants. Through this process, FEMA made about $59 million in additional payments of rental assistance and other benefits. As of March 2006, over 90 percent of the money loaded onto the debit cards had been used by recipients to obtain cash and purchase a variety of goods and services. Our analysis of data provided by JPMorgan Chase found that the debit cards were used predominantly to obtain cash, which did not allow us to determine how the money was actually spent. The majority of the remaining transactions were associated with purchases of food, clothing, and personal necessities. Similar to findings in our February 13, 2006, testimony, we continue to find some cases where cardholders purchased goods and services that did not appear to meet legitimate disaster needs. In this regard, FEMA regulations provide that IHP assistance be used for items or services that are essential to a registrant’s ability to overcome disaster-related hardship. Table 5 details some of the debit card purchases we found that were not necessary to satisfy legitimate disaster needs.
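The card counts above imply a straightforward three-way reconciliation among cards paid for, cards physically distributed, and cards matched to FEMA's recipient database. The short sketch below reproduces the reported gaps; the roughly $2,000-per-card value is inferred from the $770,000 refund for 385 cards and is an assumption, not a figure stated by the bank.

```python
# Three-way reconciliation of debit cards, using the figures reported above.
CARD_VALUE = 770_000 / 385          # implies $2,000 per card (inferred, not stated by the bank)

cards_paid_for    = 11_374          # cards JPMorgan Chase was initially paid for
cards_distributed = 10_989          # bank's physical count of cards distributed
cards_in_database = 10_608          # cards FEMA matched to its disaster recipient database

undistributed = cards_paid_for - cards_distributed     # 385 cards, refunded to FEMA
unaccounted   = cards_distributed - cards_in_database  # 381 cards FEMA cannot account for

print(f"Refund for undistributed cards: ${undistributed * CARD_VALUE:,.0f}")  # $770,000
print(f"Value of unaccounted-for cards: ${unaccounted * CARD_VALUE:,.0f}")    # $762,000, about $760,000
```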
FEMA faces a significant challenge in ensuring that IHP relief payments are sent only to valid registrants while also distributing those relief payments as fast as possible. To ensure the success of the program, FEMA must build the American taxpayers’ confidence that federal disaster assistance goes only to those in need, and that adequate safeguards exist to prevent assistance from going to those who submit improper and potentially fraudulent registrations. To that end, FEMA must develop and strengthen controls to validate information provided at the registration stage. As we have stated in prior audit work, and as FEMA has learned from prior experience, pursuing collection activities after disaster relief payments have been made is costly, time-consuming, and ineffective. Up-front controls are all the more crucial given the estimated $1 billion that went to improper and potentially fraudulent registrations related to hurricanes Katrina and Rita. It is key that FEMA address weaknesses in its registration process so that it can substantially reduce the risk of fraudulent and improper payments before the next hurricane season arrives. In addition, to help deter future fraudulent registrations, FEMA must ensure there are consequences for those who commit fraud. We plan to refer potentially improper payments to FEMA for further review and hope that FEMA will take the necessary recoupment actions. Further, we have referred, and plan to refer, additional cases of potential fraud to the Hurricane Katrina Fraud Task Force for further investigation and, if warranted, indictments. Finally, we plan to issue a report in the future with recommendations for addressing the problems identified in this testimony.

Mr. Chairman and Members of the Committee, this concludes our statement. We would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory Kutz at (202) 512-7455 or [email protected], or John Kelly at (202) 512-6926 or [email protected]. Major contributors to this testimony include Kord Basnight, James Berry Jr., Gary Bianchi, Valerie Blyther, Matthew Brown, Norman Burrell, Jennifer Costello, Paul Desaulniers, Steve Donahue, Dennis Fauber, Christopher Forys, Adam Hatton, Aaron Holling, Jason Kelly, Sun Kim, Crystal Lazcano, Tram Le, John Ledford, Jennifer Leone, Barbara Lewis, Jonathan Meyer, Gertrude Moreland, Richard Newbold, Kristen Plungas, John Ryan, Sidney Schwartz, Robert Sharpe, Gail Spear, Tuyet-Quan Thai, Patrick Tobo, Matthew Valenta, Tamika Weerasingha, and Scott Wrightson.

Our objectives were to (1) provide an estimate of improper and potentially fraudulent payments related to certain aspects of the disaster registrations, (2) identify whether FEMA made improper or potentially fraudulent IHP payments to registrants who were incarcerated at the time of the disaster, (3) identify whether FEMA provided registrants with rental assistance payments at the same time it was paying for their hotel rooms, and (4) review FEMA’s accountability over debit cards and controls over proper debit card usage.

To provide an estimate of improper and potentially fraudulent payments related to certain aspects of the disaster registrations, we drew a statistical sample of 250 payments from the Federal Emergency Management Agency’s (FEMA) Individuals and Households Program (IHP) payments. Three of the 250 were considered out of scope for our study because the payments had been returned to the U.S. government by the time of our review. Therefore, our review examined 247 payments for which the government was subject to financial loss. Potentially fraudulent and invalid payments are claims that contained (1) bogus identities, (2) addresses that did not exist, (3) addresses where there was no evidence that the address was the primary residence of the registrant at the time of the disaster, and (4) addresses that had previously been registered using duplicate information (such as the same SSNs, same damaged address, and/or same current address). We conducted searches of public records and available FEMA data and/or made physical inspections of addresses to determine whether registrations were improper and/or potentially fraudulent. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population.
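As a simplified illustration of this interval estimation, the sketch below computes a normal-approximation 95 percent confidence interval for an improper-payment rate from a simple random sample and projects it onto the payment universe. The flagged count and the treatment of each payment as a simple pass/fail outcome are assumptions for illustration only; the actual estimator, which reflects the sample design and payment dollar amounts, is more involved.

```python
import math

n_sample = 247                     # in-scope sample payments
n_improper = 40                    # assumed number flagged in the sample (illustrative only)
universe_dollars = 6_000_000_000   # IHP payments through February 2006 (over $6 billion)

p_hat = n_improper / n_sample                     # estimated improper-payment rate
se = math.sqrt(p_hat * (1 - p_hat) / n_sample)    # standard error of the rate
z = 1.96                                          # 95 percent normal critical value
low, high = p_hat - z * se, p_hat + z * se

print(f"Estimated rate: {p_hat:.1%} (95% CI: {low:.1%} to {high:.1%})")
print(f"Projected improper payments: ${p_hat * universe_dollars / 1e9:.2f} billion "
      f"(95% CI: ${low * universe_dollars / 1e9:.2f} to ${high * universe_dollars / 1e9:.2f} billion)")
```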
To identify whether FEMA made improper or potentially fraudulent IHP payments to registrants who were incarcerated at the time of the disaster, we obtained the FEMA IHP database as of February 2006. We obtained databases containing state prisoner data since August 2005, including releases and new incarcerations, from the states of Louisiana, Texas, Mississippi, Alabama, Georgia, and Florida. We also obtained federal prisoner data since August 2005, including releases and new incarcerations, from the Department of Justice. We validated that the databases were complete by comparing totals against available public information on prisoner populations. We compared these databases against the population of IHP payments to identify prisoner SSN/name combinations that received payments from FEMA. We restricted this comparison to prisoners who were in state or federal prisons at the time of the disasters. We also interviewed prisoners who registered for disaster relief, as well as prison officials, to confirm that the prisoners were incarcerated at the time of the disaster.

To identify whether FEMA improperly provided registrants with rental assistance payments at the same time it was paying for their hotel rooms, we reviewed FEMA policies and procedures to determine how FEMA administered its hotel program, and we obtained FEMA data on its hotel registrants. We also used data mining and forensic audit techniques to identify registrants who stayed in hotels paid for by FEMA and also received rental assistance payments through the IHP. To determine whether registrations identified by our data mining resulted in a duplication of housing benefits, we selected 10 case studies for further investigation. We obtained documentation from hotel officials to substantiate that case study registrants stayed at hotels paid for by FEMA. We also gathered available FEMA data on case study registrations that received multiple rental assistance payments to determine what information the registrants had provided FEMA in order to receive additional rental assistance.
To review FEMA’s accountability over debit cards and controls over proper debit card usage, we reviewed databases of transactions and accounts provided by JPMorgan Chase, the administering bank for the debit cards, as well as FEMA’s database of debit card accounts. We interviewed bank, FEMA, and Treasury officials regarding the reconciliation of debit card accounts against IHP registrants and reviewed documentation related to the payment flow of debit cards. We also performed data mining on debit card transactions to identify purchases that did not appear to be indicative of necessary expenses as defined by the Stafford Act’s implementing regulations.

During the course of our audit work, we identified multiple cases of potential fraud. For cases in which we investigated and found significant evidence of fraudulent activity, we plan to make referrals directly to the Hurricane Katrina Fraud Task Force. We performed our work from February 2006 through June 8, 2006, in accordance with generally accepted government auditing standards and the quality standards for investigations set forth by the President’s Council on Integrity and Efficiency. To validate that the National Emergency Management Information System database was complete and reliable, we compared the total disbursements against reports FEMA provided to the Senate Appropriations Committee on Katrina/Rita disbursements. We also interviewed FEMA officials and performed electronic testing of key data elements in the database.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Hurricanes Katrina and Rita destroyed homes and displaced millions of individuals. In the wake of these natural disasters, the Federal Emergency Management Agency (FEMA) responded to the need to provide aid quickly through the Individuals and Households Program (IHP), which provides housing assistance, real and personal property assistance, and assistance for other immediate, emergency needs. As of February 2006, FEMA had made 2.6 million payments totaling over $6 billion. Our testimony today will (1) provide an estimate of improper and potentially fraudulent payments through February 2006 related to certain aspects of the disaster registrations, (2) identify whether improper and potentially fraudulent payments were made to registrants who were incarcerated at the time of the disaster, (3) identify whether FEMA improperly provided registrants with rental assistance payments at the same time it was paying for their lodging at hotels, and (4) review FEMA's accountability over debit cards and controls over proper debit card usage. To estimate the magnitude of IHP payments made on the basis of invalid registrations, we selected a random statistical sample of 250 payments made to hurricanes Katrina and Rita registrants as of February 2006. We also conducted data mining and investigations to further illustrate the effects of control breakdowns. We estimate that through February 2006, FEMA made about 16 percent, or $1 billion, in improper and potentially fraudulent payments to registrants who used invalid information to apply for disaster assistance. Based on our statistical sample, we are 95 percent confident that the range of improper and potentially fraudulent payments is from $600 million to $1.4 billion. In our assessment of whether a payment was improper and potentially fraudulent, we did not test for other evidence of impropriety or potential fraud, such as insurance fraud and bogus damage claims. Thus, our review potentially understates the magnitude of improper payments made. Examples of fraud and abuse include payments to registrants who used post office boxes, United Parcel Service stores, and cemeteries as their damaged property addresses. Absent proper verification, it is not surprising that FEMA continued to pay fictitious disaster registrations set up by GAO as part of our ongoing forensic audit. In one case, FEMA paid nearly $6,000 to our registrant who submitted a vacant lot as a damaged address. Below is a copy of a rental assistance check sent to GAO after FEMA received feedback from its inspector that the GAO undercover registrant did not live at the damaged address, and after a Small Business Administration inspector reported that the damaged property could not be found. We also found that FEMA provided expedited and housing assistance to individuals who were not displaced. For example, millions of dollars in expedited and housing assistance payments went to registrations containing the names and Social Security numbers of individuals incarcerated in federal and state prisons during the hurricanes. In addition, FEMA improperly paid individuals twice for their lodging--paying for their hotel rooms and providing rental assistance at the same time. For example, at the same time that FEMA paid $8,000 for an individual to stay in California hotels, this individual also received three rental assistance payments for both hurricane disasters. Finally, we found that FEMA could not establish that over 750 debit cards worth about $1.5 million went to hurricane Katrina victims.
We also found debit cards that were used for a Caribbean vacation, professional football tickets, and adult entertainment.
Bridge safety first emerged as a high-priority issue in the United States in the 1960s, following the collapse of the Silver Bridge between Ohio and West Virginia, which killed 46 people. That collapse prompted national concerns about bridge conditions and safety and highlighted the need to repair and replace bridges before they collapse. Congress responded by establishing two major federal bridge programs: (1) the National Bridge Inspection Program (NBIP) to ensure periodic safety inspection of bridges and (2) what is now known as the Highway Bridge Program (HBP) to provide a funding mechanism to assist states in replacing and rehabilitating bridges. Both of these programs generally define applicable bridges as publicly owned, over 20 feet in length, and located on public roads. Although the NBIP and HBP are separate programs, they are linked by the data collected through bridge inspections. For example, bridge information gathered through NBIP inspections is one factor used to determine the amount of HBP funding apportioned to states.

The NBIP establishes federal standards, known as the National Bridge Inspection Standards, and program requirements for the proper safety inspection and evaluation of bridges. These standards establish by whom, with what frequency, and how bridge inspections are to be completed. For example, state departments of transportation (DOTs) carry out the federal-level policies, procedures, and requirements for inventory, inspection, bridge load ratings, quality assurance, and reports. Routine bridge inspections are generally conducted every 2 years, but with Federal Highway Administration (FHWA) approval, the inspection interval may be extended to 4 years on certain bridges. Bridges may be inspected more often than every 2 years when past inspection findings justify an increased inspection frequency. Bridge inspectors must record bridge data, including bridge conditions, during the inspection and report that information to the National Bridge Inventory (NBI), maintained by FHWA headquarters.

Based on information gathered during bridge inspections and reported to the NBI, the HBP classifies bridge conditions as deficient or not; assigns each bridge a sufficiency rating reflecting its structural adequacy, safety, serviceability, and relative importance; and uses that information to provide funding for states to improve bridges. Deficient bridges include those that are structurally deficient, with one or more components in poor condition, and those that are functionally obsolete, with a poor configuration or design that may no longer be adequate for the traffic they serve. FHWA uses information in the NBI to annually apportion HBP funds to the states. While each state’s HBP apportionment amount is largely determined by bridge conditions and bridges generally must be below a certain condition threshold to qualify for HBP funding, other bridges are also eligible for HBP funds because states may use the funds for a broad array of other purposes, such as bridge preventive maintenance projects.

All bridges are grouped into one of two general categories: Federal-aid highway bridges and bridges not on Federal-aid highways. The NBIP and the HBP generally apply to both categories of bridges located on public roads. Federal-aid highway bridges are generally located on the National Highway System, a 160,000-mile network that carries over 40 percent of the nation’s highway traffic. Non-Federal-aid highway bridges are generally located on local or rural roads that carry lower volumes of traffic than state-owned bridges.
The HBP affords state DOTs discretion in determining how to use their HBP funds, and as a result, states use HBP funds and select bridge projects in a variety of ways. The HBP gives states three key flexibilities in determining how to use their HBP resources. First, the HBP has evolved to allow states to use program funds not only for bridge replacement and rehabilitation, but also for a broad array of purposes—including painting, seismic retrofitting, systematic preventive maintenance, installation of scour countermeasures (to address the effects of sediment erosion around bridge piers and abutments), and anti-icing or deicing applications—regardless of the bridge’s condition. In addition, FHWA has determined that the costs for personnel and equipment used in bridge inspections and for bridge management systems are consistent with the purpose of the HBP and therefore are also eligible uses of HBP funds. Thus, states have the flexibility to use HBP funds on bridge projects that may not immediately reduce their inventory of deficient bridges. Second, states have flexibility in determining how to split HBP resources between state and locally owned bridges. Aside from a requirement to distribute funds equitably, the only HBP requirement applicable to states’ allocation of program funds is that states must spend a minimum (15 percent) on non-Federal-aid highway bridges. Third, states may spend program funds on other, nonbridge transportation priorities by transferring up to 50 percent of their annual HBP funding to other core Federal-aid highway programs, though a penalty is invoked by reducing the state’s HBP funds in the succeeding year by the amount transferred. Many states have taken advantage of this provision over the years and transferred some of their HBP funding to other programs, although FHWA officials pointed out that some of the transferred HBP funds may still be spent on bridges and that funds from other Federal-aid highway programs may also be spent on bridges. FHWA data show that significant funds have flowed toward bridges from other programs and that, from a national perspective, these inflows exceed outflows from the HBP. Finally, planning for how HBP funds are spent is generally under the control of state DOTs; once states select bridge projects, they may apply to FHWA for the federal share of the costs, which is generally 80 percent of the project cost. In part due to these flexibilities, state DOTs we visited have established a range of priorities for their HBP funds—from reducing the number of their deficient bridges to seismically retrofitting their bridges—and some opted to transfer their HBP funds to fund other transportation priorities.
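To make these funding mechanics concrete, the sketch below encodes the three rules just described (the 15 percent non-Federal-aid minimum, the 50 percent transfer cap with its succeeding-year penalty, and the generally 80 percent federal share) for a hypothetical state apportionment. The dollar figures are illustrative assumptions, not any state's actual numbers.

```python
# Illustrative application of the HBP funding rules described above (hypothetical figures).
apportionment = 100_000_000  # a state's annual HBP apportionment

# Rule 1: at least 15 percent must be spent on non-Federal-aid highway bridges.
min_non_federal_aid = 0.15 * apportionment

# Rule 2: up to 50 percent may be transferred to other core Federal-aid highway
# programs; the succeeding year's HBP funds are reduced by the amount transferred.
transfer = 30_000_000
assert transfer <= 0.50 * apportionment, "transfer exceeds the 50 percent cap"
next_year_reduction = transfer

# Rule 3: for a selected bridge project, the federal share is generally 80 percent.
project_cost = 5_000_000
federal_share = 0.80 * project_cost
state_share = project_cost - federal_share

print(f"Minimum for non-Federal-aid bridges: ${min_non_federal_aid:,.0f}")
print(f"Transferred out: ${transfer:,.0f}; next year's HBP funds reduced by ${next_year_reduction:,.0f}")
print(f"Project federal share: ${federal_share:,.0f}; state share: ${state_share:,.0f}")
```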
Although the key purpose of the HBP is to enable states to improve the condition of their deficient bridges, some state transportation officials we interviewed explained that they do not focus on reducing their inventories of deficient bridges, for several reasons. First, deficient bridges are not necessarily unsafe: many state transportation officials we interviewed told us that some of the deficient bridges in their states are in at least reasonably good condition and are safe. In addition, FHWA reported in 2007 that classifying a bridge as deficient does not immediately imply that it is likely to collapse or that it is unsafe. According to the FHWA report, if proper vehicle weight restrictions are posted and enforced, deficient bridges can continue to serve most traffic conditions, and FHWA requires that bridge owners close to traffic any bridges that they determine to be unsafe. Second, the HBP apportionment formula may create a disincentive to improve deficient bridges: many federal and state officials we met with noted that this potential disincentive occurs because reducing the number and deck area of deficient bridges reduces a state’s HBP funding eligibility. Third, some deficient bridge projects can be cost-prohibitive: some state officials explained that certain large-scale bridge projects—often the most traveled, urban bridges on interstate corridors—are too expensive to be implemented with HBP funds alone, especially costly “mega” projects that have an estimated total cost greater than $500 million.

State DOTs use a variety of criteria, tools, and methods to select among potential bridge projects. Officials in the six states we visited use criteria such as bridge condition ratings, average daily traffic over bridges, local transportation priorities, or funding availability when prioritizing and selecting among potential bridge projects. Some states have also developed tools and approaches beyond those required by the HBP—such as bridge management systems, element-level inspections, state-specific condition ratings, and various prioritization approaches—to help them gauge bridge conditions and further inform their selection of bridge projects for funding. For example, all of the states we visited have adopted, or are considering, some form of bridge management system for gathering and analyzing bridge data to help manage their bridge assets and more efficiently allocate limited HBP resources among competing bridge priorities. States use these systems to predict future bridge conditions, estimate bridge maintenance and improvement needs, determine optimal policies for rehabilitation and replacement, and recommend projects and schedules within budget and policy constraints. FHWA has actively encouraged, but has not required, states to use bridge management systems, in part by providing state transportation officials with relevant training and technical support. In addition, all of the states we visited required bridge inspectors to gather more detailed, “element-level” bridge condition data, thereby exceeding the federal requirements, which mandate inspection of only the three major bridge components (superstructure, substructure, and deck). Furthermore, some state DOTs use their own bridge rating systems to better gauge bridge conditions and to inform their selection of bridge projects for funding. For example, the New York State DOT uses its own condition rating scale, which is based on an assessment of 47 individual bridge elements, to prioritize bridge projects. Finally, state DOTs use different methods to prioritize and select bridge projects for funding. Whereas some states we visited had highly centralized prioritization processes, others allowed the process to vary across the state.

Bridge conditions, as measured by the number of deficient bridges and the average sufficiency rating, improved from 1998 through 2007. According to NBI data, the total number of deficient bridges—including both structurally deficient and functionally obsolete bridges—has decreased over the last 10 years, even as the total number of bridges has increased. From 1998 through 2007, the number of deficient bridges declined by nearly 12 percent, from 172,683 to 152,317, even with the addition of more than 16,000 new bridges to the NBI (see fig. 1).
The decline in the overall number of deficient bridges over the past decade reflects a reduction in the number of structurally deficient bridges. From 1998 through 2007, the number of structurally deficient bridges decreased by 22 percent, from 93,118 to 72,519 (see fig. 2). During that same period, the number of functionally obsolete bridges increased slightly, from 79,565 to 79,798, an increase of 233 bridges. The reduction in the number of structurally deficient bridges, rather than functionally obsolete bridges, over this time period may reflect bridge owners’ efforts to address the deterioration or damage that is characteristic of structurally deficient bridges. Although reducing or eliminating structurally deficient bridges may not always be a state’s highest priority, structurally deficient bridges often require maintenance and repair to remain in service. By contrast, functionally obsolete bridges do not necessarily require repair to remain in service and, therefore, are unlikely to be transportation officials’ top priority for rehabilitation or replacement.

The average sufficiency rating of all bridges—including both deficient and nondeficient bridges—also improved slightly between 1998 and 2007, from 75 to 79 on the sufficiency rating’s 100-point scale. Additionally, while structurally deficient bridges generally have lower sufficiency ratings (an average rating of 42 in 2007) than functionally obsolete bridges (an average rating of 69 in 2007), the average sufficiency ratings of both types of deficient bridges improved slightly over the last decade. Improvements were most notable in bridges owned by local agencies and on rural routes, which may be attributable, in part, to the federal bridge program requirement—under the HBP and some of its predecessor programs—that states spend a minimum amount of their apportionment on non-Federal-aid highway bridges. For example, from 1998 through 2007, the average sufficiency rating for bridges owned by local agencies improved from 71 to 77, and the number of deficient bridges decreased by over 17 percent, from 99,492 to 82,101. During that same period, for bridges owned by state agencies, the average sufficiency rating improved from 79 to 82, and the number of deficient bridges decreased by 4 percent, from 70,066 to 67,232 (see fig. 3).

With respect to urban and rural bridges, the number of deficient rural bridges declined from 1998 through 2007 while the number of deficient urban bridges increased. From 1998 through 2007, the number of deficient rural bridges decreased by about 19 percent, from 130,910 to 106,209. During that same period, the number of deficient urban bridges increased by about 11 percent, from 41,659 to 46,086 (see fig. 4). The average sufficiency rating for both rural and urban bridges improved slightly from 1998 through 2007; for rural bridges, the average rating increased from 74 to 78, and for urban bridges, the average rating increased from 79 to 82. (A bridge is classified as rural in the NBI database if it is not located inside a designated urban area.)
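The percentage changes cited in this discussion follow directly from the NBI counts; a minimal sketch of the computation, using the figures quoted above:

```python
# Percentage changes in deficient-bridge counts, 1998-2007, from the NBI figures cited above.
counts_1998_2007 = {
    "all deficient bridges":    (172_683, 152_317),
    "structurally deficient":   (93_118, 72_519),
    "functionally obsolete":    (79_565, 79_798),
    "deficient, locally owned": (99_492, 82_101),
    "deficient, state owned":   (70_066, 67_232),
    "deficient, rural":         (130_910, 106_209),
    "deficient, urban":         (41_659, 46_086),
}

for category, (count_1998, count_2007) in counts_1998_2007.items():
    change = (count_2007 - count_1998) / count_1998
    print(f"{category}: {count_1998:,} -> {count_2007:,} ({change:+.1%})")
```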
However, the impact of the HBP on these improvements is difficult to determine for several reasons, including the lack of comprehensive data on state and local bridge spending, the expansion of bridge project eligibility, and limitations in the NBI data. First, the impact of the federal investment in the HBP is difficult to measure in part because there are no comprehensive data for state and local spending on bridges. FHWA does track a portion of each state’s capital spending on bridges, and the agency has generated a single, national-level estimate for total bridge expenditures by all government levels; however, there are significant gaps in this information, and neither source is comprehensive or detailed enough to be used to determine the impact of the HBP. The state transportation officials we spoke with during our site visits estimated that state and local spending on bridges ranged from the minimum match amount (generally 20 percent of the HBP apportionment amount) to more than four times the state’s apportioned HBP funds. Our previous work has shown that although federal investment in the HBP and other Federal-aid highway programs has increased over time, this investment has not resulted in commensurate increases in the nation’s total government spending (federal, state, and local) on its highway system. In particular, as the level of federal funding has increased since the mid-1990s, states have not maintained their level of effort in highway spending, and federal funds have increasingly been substituted for state funds. This suggests that increased federal highway funding influences states and localities to substitute federal funds for state and local funds they otherwise would have spent on highways and bridges.

Second, the impact of the HBP is also difficult to measure because HBP funds can, in some cases, be used for a variety of bridge projects without regard to a bridge’s deficiency status or sufficiency rating. Therefore, simply measuring changes in the number of structurally deficient or functionally obsolete bridges does not reflect the full impact of the program, since these measures do not capture the impact of the HBP investment in other eligible activities that do not necessarily result in an immediate reduction in the number of deficient bridges. Without quantifiable performance measures to track the full range of desired outcomes for the HBP, it is difficult to measure the program’s impact and determine the extent to which the program is serving its stated purpose.

Finally, determining the impact of HBP funding is difficult because the NBI does not readily permit tracking changes in the condition of a group of bridges over time. Each bridge in the NBI is assigned an identifying number by the relevant state DOT. However, the identifying number for a bridge at a specific location may change over the life of that bridge. Such a change may occur when a state renumbers, replaces, or closes and subsequently reopens a bridge. As a result, it is difficult to track changes in the condition of any specific bridge or group of bridges to determine, for example, whether the same bridges that were deficient in 1998 are still deficient today, how many bridges have been replaced, or what impact new bridges added to the inventory (which may not be funded by the HBP) have had on the overall condition of the nation’s bridges.

Evaluating the impact of the HBP is important not only to understand the outcomes of past spending but also to determine how to sensibly invest future federal resources. The number of HBP-eligible bridges is expected to increase as a large share of the nation’s bridges built in the 1960s and early 1970s age and become eligible for rehabilitation and replacement as a group; as a result, states and local agencies may see a spike in their need for bridge rehabilitation and replacement funding.
In this environment of increasing demand for limited resources, it is especially important for FHWA and Congress to be able to evaluate the impact of the HBP in order to ensure that the program is providing an acceptable return on investment and addressing national transportation priorities. The HBP, while generally helping to improve bridge conditions, does not fully align with our principles for re-examining surface transportation programs in that the bridge program lacks focus, performance measures, and sustainability. Our principles, which are based on our prior work and federal laws and regulations, include (1) ensuring program goals are well defined and focused on the federal or national interest, (2) incorporating performance and accountability into funding decisions, (3) employing the best tools and approaches to emphasize return on targeted federal investment, and (4) ensuring fiscal sustainability.

First, the HBP’s goals are not focused on a clearly identified federal interest. Over the years, the program’s statutory goals have expanded from improving deficient bridges to supporting seismic retrofitting, preventive maintenance, and many other activities, thus expanding the federal interest to potentially include almost any bridge in the country. Our previous work has emphasized the importance of identifying clear areas of federal interest as a first step in determining program goals. For example, if mobility is determined to be a key federal interest and a primary goal, the HBP could be targeted toward bridges whose conditions have the most impact on congestion and economic competitiveness and that carry higher levels of traffic or freight than those bridges in remote areas that may serve only a few people each day. If rehabilitating and reducing deficient bridges is determined to be a key federal interest, then the program could be further targeted toward that goal. The federal interest may also be greater in bridge projects that are too expensive for states to undertake without additional federal assistance or in projects that extend beyond the borders of a single state. Once the federal interest has been determined, our principles call for basing the federal share of the cost of bridge projects on the level of federal interest.

Second, there is no clear tie between HBP funding and performance. HBP funds are apportioned to states without regard to program performance because the HBP formula is based on a calculation of needed repairs to deficient bridges but does not consider a state’s efforts or effectiveness in reducing its inventory of deficient bridges or controlling costs. Because the formula does not factor in other eligible program activities, such as systematic preventive maintenance, there is no link between the apportionment formula and the states’ performance of these activities. Without performance measures to link funding to performance, states lack an incentive to improve the return on the federal investment and are not held accountable for the results of their investments. Our work has shown that an increased focus on performance and accountability for results can help the federal government better target limited federal resources. Third, the HBP generally lacks sufficient tools to determine the effects of the federal investment in bridges.
In this regard, bridge management systems, which are currently used by many states but not required by the program’s authorizing legislation, may be useful for prioritizing projects and making funding decisions to improve results and emphasize return on investment. Finally, the HBP’s fiscal sustainability remains a challenge in light of aging bridge infrastructure, the declining purchasing power of funding currently available for bridge maintenance, rehabilitation, and replacement, and the recent growth in construction costs. Based on our prior work, two tools that could possibly improve the sustainability of the HBP are a maintenance-of-effort requirement and tolling. A maintenance-of-effort requirement, whereby state or local grantees would be required to maintain their own level of funding in order to receive HBP funds, could reduce the potential substitution of federal funds for state and local funds under the program. In addition, our prior work has shown that removing barriers to, or even promoting, tolling can lead to more efficient management of existing infrastructure and capacity. Addressing the HBP’s future fiscal sustainability is critical, given the overall fiscal imbalance facing the nation and the lack of assurance that HBP funding is allocated to projects that are in the federal interest and provide the best return on investment.

Our work on the HBP can provide some perspective on several provisions in the proposed legislation under review by this committee, the National Highway Bridge Reconstruction and Inspection Act of 2008 (S. 3338). The legislation proposes, among other things, to authorize an additional $1 billion for fiscal year 2009 from the U.S. Treasury’s general fund to address bridge infrastructure. The legislation would also require DOT to strengthen bridge inspection standards, adopt a risk-based process for prioritizing certain bridge rehabilitation and replacement projects, and require that states develop 5-year performance plans for bridge inspections and for the rehabilitation or replacement of deficient bridges. As summarized below, our work on the HBP relates to several provisions in the proposal. For example, the legislation calls for DOT to apply a risk-based prioritization process to every structurally deficient or functionally obsolete bridge in the nation. While such a process could potentially help target scarce federal resources to bridges that are most critical to safety and mobility, many state transportation officials we interviewed during our work raised questions about the appropriateness of focusing on all deficient bridges, noting that not all deficient bridges are unsafe and that some large-scale deficient bridge projects can be too expensive to be implemented with HBP funds alone. Also, the legislation is unclear about how, if at all, the new risk-based prioritization process would differ from or relate to DOT’s established sufficiency rating process. FHWA uses sufficiency ratings primarily to determine HBP eligibility and apportion funds. We found that states may consider sufficiency ratings in their prioritization processes but generally do not rely on them to prioritize bridge projects. In addition, the legislation calls for DOT to require states to develop 5-year performance plans covering the inspection and rehabilitation or replacement of all structurally deficient or functionally obsolete bridges.
We support the use of performance plans to articulate program goals that are in the federal interest, encourage accountability for results, and help ensure that the federal government targets resources to programs that best achieve intended outcomes and national priorities. Our work has shown that the current HBP funding formula is not linked to a state’s performance in reducing its inventory of deficient bridges, and we are recommending in our report being issued today that DOT work with Congress to define specific national goals and performance measures for the HBP. This legislative provision might be strengthened by requiring states to report on their progress in achieving their goals as part of each annual update to their performance plan. Also, the legislation requires that the performance plans be focused on all deficient bridges, and the same issue that I raised earlier about the appropriateness of this focus applies here as well. The legislation also calls for DOT to require the states to develop and implement a bridge management system. In our work on the HBP, all six states we visited had adopted, or were considering, some form of bridge management system to help manage their bridge assets and more efficiently allocate limited HBP resources among competing bridge priorities. In the report we are releasing today, we are recommending that DOT evaluate and incorporate into the HBP the best tools and practices, such as bridge management systems.

Although many aspects of the HBP are carried out at the state level—with ultimate responsibility for bridge inspection and project selection residing with the states—the federal government bears responsibility for ensuring that the program achieves results that are in the federal interest and that the program’s resources are allocated efficiently. The purpose of the HBP has greatly expanded over the years, making nearly any bridge potentially eligible for federal funding, and as a result, the federal interest in bridges lacks focus. Additionally, many state officials told us that the measures the HBP uses to apportion federal funds—bridge deficiency status and sufficiency ratings—are not necessarily good proxies for the safety or risk associated with specific bridges. Even though data indicate that the number of structurally deficient bridges has declined over the last 10 years, most of this improvement has been in locally owned and rural bridges. Oftentimes, the largest and most critical bridges, those carrying the most interstate commerce, are too expensive to be funded by the HBP and require other funding sources to be replaced or rehabilitated. Moreover, without comprehensive data on state and local spending on bridges, it is impossible either to distinguish the impact of HBP funding from the impact of state and local bridge funding or to determine the extent to which states are substituting HBP funding for state and local funds that would otherwise have been spent on bridges. Absent clear goals and related performance measures for the HBP, it is difficult to determine the overall effectiveness of the program’s investment in bridges.

Our principles suggest several ways to improve the HBP to ensure that it is more focused and performance-based in the future. For example, tools such as bridge management systems provide bridge managers with a more systematic approach to prioritizing projects and making funding decisions. Our work has shown that some states are using bridge management systems and other tools that generally exceed federal standards.
Additionally, linking program goals to performance measures to determine whether goals are met, and using that information to select projects and make funding decisions, can create incentives for state and local governments to improve the performance of their bridge programs, as well as the overall transportation system. As the projected revenue shortfall in the Highway Trust Fund rapidly approaches and as bridge costs rise and infrastructure continues to age, incorporating strategies to better ensure the fiscal sustainability of the HBP is also critical. To improve the focus, performance, and sustainability of the HBP, the report we are releasing at this hearing recommends that the Secretary of Transportation work with Congress to take the following actions: identify and define specific national goals for the HBP; determine the performance of the program by developing and implementing performance measures related to the goals for the HBP; identify and evaluate best tools and practices that can potentially be incorporated into the HBP, such as bridge management systems; and review and evaluate HBP funding mechanisms to align funding with performance and support a targeted and sustainable federal bridge program.

In reviewing a draft of the report, DOT officials said that they generally agreed with our findings and recommendations, and they provided technical comments, which we incorporated in the report and this testimony, as appropriate. DOT officials also commented that they thought our re-examination principles had broader applicability than just the HBP, noting that DOT had incorporated our principles into the Department’s recent proposal for reforming surface transportation programs. DOT’s reform proposal, released in July 2008, recommends consolidating the existing network of over 100 surface transportation programs into eight broad, intermodal programs. The officials noted that DOT’s reform proposal articulates a narrower federal interest and a framework for performance management tied to clearer goals for surface transportation programs. We have not commented on DOT’s reform proposal, and the outcome of that proposal in the surface transportation reauthorization debate that will occur during 2009 is uncertain. However, we agree with DOT that our re-examination principles are applicable at a broader level than a specific program like the HBP; in fact, we developed our principles because of (1) our concerns, raised in prior work, that many federal surface transportation programs are not effective at addressing key transportation challenges such as growing congestion and freight demand and (2) our conclusion that our principles could help drive the re-examination of those programs and help assess options for restructuring the entire federal surface transportation program.

Chairman Boxer, this concludes my prepared statement. I would be happy to respond to any questions that you or members of the committee may have. For further information on this statement, please contact Katherine Siggerud at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony were Rita Grieco, Assistant Director; Claudia Becker; Stephanie Fain; Carol Henn; Bert Japikse; Delwen Jones; Leslie Locke; and Sara Ann Moessbauer.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The August 1, 2007, collapse of a Minnesota bridge raised nationwide questions about bridge safety and the Department of Transportation's (DOT) prioritization of bridge resources. The Highway Bridge Program (HBP), the primary source of federal funding for bridges, provided over $4 billion to states in fiscal year 2007. This testimony, based on a report GAO is releasing today, addresses (1) how states use HBP funds and select bridge projects for funding, (2) what data indicate about bridge conditions and the HBP's impact, and (3) the extent to which the HBP aligns with principles we developed, based on our prior work and federal laws and regulations, for reexamining surface transportation programs. The testimony also discusses the implications of our work for related sections of proposed legislation under review by this committee, the National Highway Bridge Reconstruction and Inspection Act of 2008 (S. 3338). As context for understanding GAO's findings on the HBP, based on information gathered during bridge inspections that are generally conducted every 2 years, the HBP classifies bridge conditions as deficient or not; assigns each bridge a sufficiency rating reflecting its structural adequacy, safety, serviceability, and relative importance for public use; and uses that information to distribute funding to states to improve bridges. Deficient bridges include those that are structurally deficient, with one or more components in poor condition, and those that are functionally obsolete, with a poor configuration or design that may no longer be adequate for the traffic they serve.

Use of HBP funds and project selection: The HBP affords states discretion to use HBP funds and select bridge projects in a variety of ways. Some states are focused on reducing their number of deficient bridges, while other states are pursuing different bridge priorities. For example, California has focused on seismically retrofitting bridges, a safety concern for that state. Furthermore, some states have developed tools and approaches for selecting bridge projects that go beyond those required by the HBP--such as bridge management systems and state-specific bridge condition rating systems.

Bridge conditions and impact of HBP: Bridge conditions, as measured by the number of deficient bridges and average sufficiency rating of all bridges, improved from 1998 through 2007. However, the impact of the HBP on that improvement is difficult to determine because (1) the program provides only a share of what states spend on bridges and there are no comprehensive data for state and local spending on bridges and (2) HBP funds can, in some cases, be used for a variety of bridge projects without regard to a bridge's deficiency status or sufficiency rating.

Alignment of HBP with GAO principles: The HBP does not fully align with GAO's principles in that the program lacks focus, performance measures, and sustainability. For example, the program's statutory goals are not focused on a clearly identified federal interest, but rather have expanded from improving deficient bridges to supporting seismic retrofitting, preventive maintenance, and many other projects, thus expanding the federal interest to potentially include almost any bridge in the country. In addition, the program lacks measures linking funding to performance and is not sustainable, given the anticipated deterioration of the nation's bridges and the declining purchasing power of funding currently available for bridge maintenance, rehabilitation, and replacement.
The results of our work are generally consistent with provisions of S. 3338 that call for a risk-based prioritization process for selecting bridge projects, 5-year performance plans, and bridge management systems. Our work does raise some questions about the legislation's focus on all deficient bridges because some deficient bridges do not need immediate repairs to carry traffic safely.
Prior to 1940, U.S. presidents or their descendants typically retained ownership of papers documenting their terms of office. The fate of these papers was up to the former president or his descendants, and some were lost forever. In 1940, Franklin D. Roosevelt became the first president to arrange to have a library built using privately raised funds and to then transfer both the facility and his papers to the federal government. Through its Office of Presidential Libraries, the National Archives and Records Administration (NARA) operates presidential libraries housing the papers of all subsequent presidents through George W. Bush, as well as those of President Roosevelt’s predecessor in the White House, Herbert Hoover. At the end of a president’s term, NARA staff begin working with the president’s official records and other materials. This work goes on during library construction and during the period between the dedication of the library facility and its transfer to the federal government. Table 1 provides facts about the 13 presidential libraries and museums operated by NARA.

For most of the libraries, as the president’s term was coming to a close or after it ended, friends and supporters of the president created a private charitable foundation to collect donations to construct a library. Under current law, NARA collaborates with each presidential library foundation on the construction of the library facility, and when construction is complete, the foundation deeds the library facility to NARA or gives NARA the right to use the facility or a portion of it. The Presidential Libraries Act of 1986 also requires that the National Archives Trust Fund receive an operating endowment for each library before NARA can accept the transfer of the library. These endowments fund some of the federal government’s costs for the operation and maintenance of the presidential libraries. Figure 1 captures key steps of the current process of establishing a presidential library; some variations from this process may exist.

Each library is operated by a director and other staff, all of whom are NARA employees. Library staffs typically include an administrative officer, facility manager, education and exhibits specialists, archivists, archives technicians, and clerks, among other staff. The director of a presidential library is appointed by the Archivist of the United States, the head of NARA, who consults with the former president in selecting a candidate. The Office of Presidential Libraries is headed by the Assistant Archivist for Presidential Libraries and is responsible for overseeing the management of records at the libraries; the development of policies and procedures for the management and operation of presidential libraries; and the development and coordination of plans, programs, and resource allocations at presidential libraries. The Office of Presidential Libraries is also involved in the creation of new presidential libraries.

Funds appropriated by Congress support NARA’s staffing, administration, security, maintenance, and renovation projects at the libraries. In fiscal year 2009, NARA spent more than $68 million in appropriations to operate the presidential libraries. In addition, for fiscal year 2009, NARA received $41.5 million in special appropriations for repairs and restoration to the John F. Kennedy Presidential Library and Museum ($22 million), the Franklin D. Roosevelt Presidential Library and Museum ($17.5 million), and the Lyndon Baines Johnson Library & Museum ($2 million).
Each private foundation is operated by a director, president, or CEO and other staff that may include a chief financial officer and a director of communications, among other positions. Foundation support enables the libraries to expand their research and archival functions, as well as to undertake additional projects such as public outreach efforts. The foundations’ level of involvement in the activities of their associated library, such as collaboration on public and educational programs, varies from library to library. Foundations may also sponsor their own programs and activities, such as hosting a lecture series or academic discussion or producing a newsletter. NARA officials told us that, in most cases, these kinds of programs and activities are offered in conjunction with and supported by library staff. For example, a foundation may pay for a lecture series that is held in NARA-controlled space. The foundations may also support their associated libraries more generally with additional funding for new facilities and equipment, for updating permanent exhibits, and for adding program space, and by giving the library the use of foundation staff time for library activities. Foundations provide these resources directly to their associated library, a process generally handled at the library level based on the relationship between the library and the foundation.

Each presidential library also has a trust fund that receives revenue from the sale of publications, museum shop sales, document reproductions, audio-visual reproductions, library admissions, public space rentals, educational conferences, and interest income. Trust-fund money helps the library cover the cost of museum shop inventory, personnel, operational and financial systems, equipment, and supplies. These funds may also support exhibit-related and public-programming expenses. In fiscal year 2009, the trust funds for the presidential libraries had a total end-of-year balance of approximately $15 million. In addition to trust funds, presidential libraries also maintain funds from gifts donated to a library for general library support or for specific projects or programs.

The federal laws specific to presidential libraries focus primarily on the design and construction of library facilities and, once the facilities are constructed, the deeding of the facilities, or the rights to use them, to the federal government. Congress has enacted three primary statutes that provide the legal rules for the design, construction, and transfer of library facilities. NARA’s building-use regulations outline the permissible and prohibited uses of the presidential library facilities by other groups. According to the regulations, other groups may request the use of presidential library facilities when the activity is sponsored, cosponsored, or authorized by the library; is conducted to further the library’s interests; and does not interfere with the normal operation of the library. The regulations prohibit the use of the facilities for profit-making, commercial advertisement or sales, partisan political activities, or sectarian activities. When NARA considers it to be in the public interest, NARA may allow the occasional, nonofficial use of rooms and spaces in a presidential library and charge a reasonable fee for such use. Additionally, the regulations require outside organizations to apply for the use of library space by writing to the library director and submitting an Application for Use of Space in Presidential Libraries.
Applying organizations must agree to review their event plans with library staff and to ensure that the plans conform to library rules and procedures. The application also confirms that the organization will not charge admission fees, make indirect assessments for admission, or take collections at its events. Further, the application prohibits the organization from suggesting that the library endorses or sponsors the organization. Federal laws and regulations specify for all federal employees—including federal employees working at presidential libraries—what they may and may not do in their official capacity. For example, federal employees may not engage in commercial or political activity associated with their federal positions. According to NARA’s General Counsel, there are no special laws or regulations that apply only to how library employees interact with the foundation or, if applicable, university associated with their library; rather, the laws and regulations that apply throughout the federal government also apply to library employees. The Hatch Act provides the rules for the activities of library employees at events, such as candidate debates or speeches by candidates, that sometimes take place at the libraries. The Hatch Act, which is enforced by the U.S. Office of Special Counsel (OSC), prohibits certain political activities by federal employees. At such events (or at any other time) a library employee may not use official authority to interfere with an election; solicit, accept, or receive political contributions from any person; run for nomination or as a candidate for election to a partisan political office; or solicit or discourage the political activity of any person connected to the business of the employee’s office. NARA employees must also follow the Standards of Ethical Conduct for Employees of the Executive Branch issued by the Office of Government Ethics. The standards emphasize that employees have a responsibility to the U.S. government and its citizens to place loyalty to the Constitution, laws, and ethical principles above private gain, and they set forth 14 general principles. Among other things, the standards describe limitations on actions an employee may take while seeking other employment, and they require that employees use the time they are serving in an official capacity in an honest effort to perform official duties. NARA’s Office of Presidential Libraries oversees the 13 presidential libraries. That office has developed systemwide policies, including the Presidential Libraries Manual, which discusses museum activities and records topics, and the NARA/Office of Presidential Libraries Architecture and Design Standards for Presidential Libraries. The Office of Presidential Libraries also works with the NARA General Counsel on the development of policies governing the library–foundation relationship. The NARA General Counsel has issued legal opinions on foundations’ use of library facilities, on when and how library staff can support foundation activities, and on whether library staff can fundraise for the foundations. Additionally, NARA officials explained that the NARA General Counsel and the Office of Presidential Libraries negotiate with the foundations on the agreements establishing the relationship between a new library and its associated foundation. According to NARA officials, library directors at the individual libraries consult with the NARA General Counsel about activities that could have political undertones before allowing a program or event. 
For example, library directors have contacted the NARA General Counsel to inquire about using libraries as polling places. NARA approved the use of libraries as polling places as long as certain requirements were met, such as that no political solicitation occur on library-controlled property. In another example, a local political party requested but was not allowed to hold a political forum at a library. NARA officials told us that NARA does not have internal directives specifically regarding the supervision of library and foundation staff. They said that when library staff are concerned about supervision or other issues while working on a collaborative project with the foundations, they are expected to seek advice from the NARA General Counsel’s ethics program staff. Table 3 provides a summary of NARA policies and NARA General Counsel opinions concerning library–foundation activities and other outside uses of the libraries. Each presidential library has a written agreement with its associated foundation and, if applicable, the associated university that governs aspects of the relationship between the entities. These agreements differ in format; content; and the extent to which they address use of facilities, library and foundation staff relationships, and political activities. These agreements must be consistent with the applicable statutes and NARA regulations. At some libraries, the library–foundation relationship is addressed by more than one agreement due to the updating or supplementing of original documents, or to the changing format of the agreements over time. Some of the oldest agreements are primarily a series of Letters of Offer and Acceptance between the foundation and the General Services Administration (GSA), while later agreements take the form of a mutually signed agreement between the foundation and NARA. For example, the Ford museum and the Hoover, Truman, Eisenhower, and Kennedy library agreements (from 1957 to 1980) include one or more Letters of Offer and Acceptance between the foundation and GSA. Later agreements from more recently established libraries, as well as from earlier libraries that updated their agreements, include mutually signed agreements between the foundation and NARA. Of these later agreements, some focus on a specific project or aspect of the library–foundation relationship, while others focus broadly on the library–foundation relationship. We reviewed the library–foundation agreements and found that, over time, the agreements have become increasingly detailed, especially regarding staff, each entity’s use and control of the different parts of the facilities, and political activities. Earlier agreements are largely focused on the transfer of property from the foundation to the United States, while later agreements address additional aspects of the library–foundation relationship. For example, later agreements address which entity controls specific parts of the facilities, including details related to one entity’s use of the other’s space (such as the permitted purposes for using the other’s space, and reimbursing the other entity for costs associated with using its space). Later agreements are also more likely to clarify the different roles and responsibilities of library and foundation staff, and to address activities or tasks that library staff are not allowed to perform. Some of the later agreements also address potential conflicts of interest between the library and the foundation. 
For example, two of the later agreements state that foundation staff are to act in the best interests of the foundation, and NARA staff are to act in the best interests of NARA and the United States. Regarding political activities, two of the later agreements state that library space is not allowed to be used for partisan political activities. Also, NARA regulations give library directors the authority to establish supplemental policies. According to NARA officials, these supplemental policies may provide further detail on the library–foundation relationship regarding facilities, staff, and political activities. Our review was limited to NARA-wide policies and library–foundation agreements; we did not review any local library supplemental policies. NARA officials explained that the written agreements between individual libraries and the foundations are important, but that they do not fully prescribe the relationships between the entities. They said that the relationships are shaped over time and by factors such as the particular foundation’s interest in collaborating with the library or doing charitable work elsewhere. For example, the Harry S. Truman Library and Museum and its associated foundation, the Truman Library Institute, are colocated and often collaborate on educational programs. The foundation describes itself as working with the library to “fulfill the Truman Library’s commitment to research and education.” In contrast, the mission of the foundation associated with the Jimmy Carter Library and Museum, The Carter Center, does not directly focus on the library but rather is “to advance peace and health worldwide.” NARA officials said that interactions between individual libraries and their foundations vary, but they also stressed that no one foundation’s emphasis is more correct than another’s. These are examples of differences among foundations and of how those differences shape the level of involvement by a foundation with a library. We provided a draft of this report to NARA. NARA had no substantive comments and provided technical comments by e-mail, which we incorporated as appropriate. NARA’s letter is reprinted in appendix I. We will send a copy of this report to the Archivist of the United States. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the contact named above, David Lewis, Assistant Director; Sonya Phillips; Juliann Gorse; Brianna Benner; Sabrina Streagle; Lois Hanshaw; Susan Christiansen; Lindsay Read; and Jessica Thomsen made key contributions to this report.
The National Archives and Records Administration (NARA) operates presidential libraries for all of the former U.S. presidents since Herbert Hoover. These libraries received over 2.4 million visits in 2009 from researchers, public program attendees, and museum visitors. Each library is associated with a private foundation, which raised the funds to build the library and then turned the library facility over to the federal government. These foundations typically have ongoing relationships with the libraries they built, and some of these library-foundation relationships involve sharing of staff and facilities. Per congressional request, this report describes the principal laws, regulations, and NARA policies that govern library-foundation relationships and the appropriate use of library facilities and staff. GAO reviewed specific laws governing presidential libraries, as well as NARA regulations and policies. GAO also reviewed applicable laws and regulations governing activities held on government property and acceptable activities of federal employees. Further, GAO interviewed relevant NARA officials. NARA reviewed a draft of this report and had no substantive comments. NARA made technical suggestions, which we incorporated as appropriate. GAO is not making any recommendations in this report. The federal laws specific to presidential libraries focus primarily on the design and construction of library facilities and, once constructed, the deeding of the library facilities, or the rights to use the facilities, to the federal government. NARA building-use regulations outline the permissible and prohibited uses of presidential library facilities by outside organizations. Prohibited uses include profit-making, commercial advertisement or sales, partisan political activities, or sectarian activities. Other laws and regulations govern what federal employees may and may not do in their official capacity. As federal employees, NARA library employees must follow these rules in their interactions with the foundation associated with the library. NARA's Office of Presidential Libraries has developed a policy manual and standards that address topics such as museum activities and records. This office also works with the NARA General Counsel to develop guidance governing the library-foundation relationship, such as guidance related to the foundations' use of library facilities and to when and how library staff can support foundation activities. The libraries also have one or more written agreements with their associated foundation that govern different aspects of the relationship. These agreements differ in format; content; and the extent to which they address use of facilities, library and foundation staff relationships, and political activities.
In April 2011 DOD issued a directive establishing a “defense forensic enterprise” that, among other things, provided policy and assigned responsibilities within the department to develop and maintain an enduring and holistic forensic capability to support the full range of military operations. In January 2016 DOD reissued a directive establishing a “defense biometrics enterprise” that, among other things, provided policies and assigned responsibilities within the department to provide a critical end-to-end biometric capability to support decision-making across the full range of military operations. These directives assigned USD(AT&L) responsibility for overseeing and coordinating the department’s biometric and forensic enterprise activities. USD(AT&L) utilizes the Defense Biometrics and Forensics Office to carry out its oversight and coordination responsibilities. The office coordinates and synchronizes biometric and forensic requirements, as well as facilitates the development and implementation of enterprise-wide policies. In 2008 and 2011 the Secretary of Defense designated the Secretary of the Army as the executive agent for DOD’s biometric and forensic activities, respectively. In 2013 the Secretary of the Army designated the Defense Forensics and Biometrics Agency (DFBA) as the executive manager and tasked the agency with carrying out the Army’s biometric and forensic executive agent responsibilities, which include, among other things, leading enterprise coordination, acquiring common capabilities, ensuring that capabilities are planned and budgeted for, and overseeing and maintaining DOD’s authoritative biometric database through its Biometrics Operations Division. DFBA, in carrying out the Army’s forensics executive agent functions, also coordinates with the Army’s Criminal Investigation Command, which manages the Defense Forensic Science Center—the Army entity tasked with planning, programming, and providing joint or common forensic capabilities. By directive, the Office of the Secretary of Defense, the Joint Staff, the military services, and the combatant commands are required to support various programs and policies within the biometric and forensic enterprises, such as coordinating and integrating requirements and capabilities to prevent unnecessary duplication. For example, the combatant commands are responsible for identifying, validating, and prioritizing theater-specific, joint biometric and forensic requirements, while the military services and other DOD components plan, program, and field biometric and forensic capabilities to meet warfighter needs. The individual military services, the geographic combatant commands, and Special Operations Command (SOCOM) all have their own offices to oversee their biometric and forensic activities. DOD utilizes the Joint Capabilities Integration and Development System to identify, assess, prioritize, and validate joint military requirements, including deployable biometric and forensic requirements. The Joint Capabilities Integration and Development System process is overseen by the Joint Staff’s Joint Requirements Oversight Council. Joint military requirement gaps are identified, typically by the geographic combatant commands, and are validated, often by a military service or by the Joint Staff. DOD then studies potential non-materiel and materiel solutions to reduce or eliminate validated capability gaps. Non-materiel solutions include changes to doctrine, organization, training, or policy. 
Materiel solutions are items necessary to equip, operate, maintain, and support military activities, and they include biometric and forensic collection kits and communications equipment for transmitting biometric and forensic data to and from the warfighter. Potential materiel solutions are evaluated through an analysis-of-alternatives process whereby the performance, effectiveness, suitability, and estimated costs of potential materiel solutions are determined. DOD has a rapid acquisition process to support urgent and emergent combatant commander needs during ongoing and anticipated contingency operations. Urgent and emergent operational needs are generated when other means—such as the department’s traditional requirements and acquisition processes—cannot be tailored to address operational requirements in a timely fashion. A goal of the rapid acquisition process is typically to field a capability solution to an urgent or emergent operational need within 2 years. The rapid acquisition process is generally overseen by the Joint Staff and the Joint Rapid Acquisition Cell within the Office of the Secretary of Defense. Once a joint urgent or emergent operational need is validated by the Joint Staff, DOD may designate a sponsor—usually a military service—with responsibility for evaluating potential non-materiel and materiel solutions, and may assign a milestone decision authority to approve a solution and oversee its implementation. Based on validated requirements to support a range of military operations, DOD has fielded a number of deployable capabilities to collect, analyze, match, transmit, store, and share biometric and forensic information. Biometric collection capabilities include the following:

Secure Electronic Enrollment Kit: Army, Navy, Marine Corps, and SOCOM hand-held device used to collect fingerprint, iris, and facial images and biographical information.

Biometrics Automated Toolset: Army hand-held device and computer equipment used to collect (and transmit) fingerprint, iris, and facial images.

Identity Dominance System: Navy and Marine Corps hand-held device and computer equipment used to collect (and transmit) fingerprint, iris, and facial images in both shore and maritime environments. The Navy and Marine Corps capabilities are separately managed, acquired, and funded through the individual services.

BioSled: SOCOM hand-held device attached to a cellular phone used to collect fingerprint, iris, and facial images and biographical information.

For examples of biometric collection devices, see figure 1. Forensic analysis capabilities include the following:

Exploitation Analysis Center: SOCOM exploitation kit used to collect and process latent fingerprints and DNA samples, among other forensic material.

Expeditionary Forensic Exploitation Capability: Marine Corps exploitation kit modeled after SOCOM’s Exploitation Analysis Center and used to collect and process latent fingerprints and DNA samples, among other forensic material.

Forensic Exploitation Analysis Tool: Managed by the Navy, this tool is a laboratory information management and database-sharing software system for documenting, tracking, reporting, and sharing forensic data.

Forensic Exploitation Laboratories: Owned and operated by the Army’s Defense Forensic Science Center, these laboratories provide a modularized, scalable capability to forensically analyze latent fingerprints, DNA, explosives, drugs, and firearm and tool marks. 
The Army has also established a “reachback” operations center at the Gillem Enclave, Georgia, to oversee the deployment and management of the forensic exploitation laboratories and to provide expertise and analytical capabilities to process forensic material (see figure 2). Biometric and forensic transmission, storage, and sharing capabilities include the following:

DOD Automated Biometric Information System (DOD ABIS): DOD ABIS is the department’s authoritative biometric repository for non-U.S. persons. It supports the storing, matching, and sharing of biometric data collected as part of military operations, including fingerprint, iris, palm, and facial images and biographical information, as well as forensically collected latent fingerprint information. Biometric submissions and match requests are prioritized for processing based on agreements between DFBA and the submitting organization (for a simplified illustration of this prioritized match/no-match flow, see the sketch below). Figure 3 shows a person of interest whom DOD identified through biometric data that were collected, analyzed, and stored in DOD ABIS.

Special Operations Forces Exploitation: SOCOM communications architecture utilizing global satellite networks to transmit biometric and forensic information through an online portal to and from DOD ABIS, with match/no-match responses.

Department of the Navy Identification and Screening Information System: Navy and Marine Corps communications architecture to transmit biometric information through an online portal to and from DOD ABIS, with match/no-match responses. The system is modeled after SOCOM’s Special Operations Forces Exploitation capability.

Near Real Time Identity Operations: Army-provided regional forward server, communications platform, and collection devices that are fielded in U.S. Central Command’s (CENTCOM) area of responsibility in response to a 2014 CENTCOM joint emergent operational need.

In September 2014 CENTCOM submitted a joint emergent operational need to meet 21 command-specific requirements. In November 2014 the Joint Requirements Oversight Council validated CENTCOM’s operational need and directed the executive agent to establish it as an enduring capability. In January 2015 the Joint Rapid Acquisition Cell assigned the Army as the office of primary responsibility for fulfilling the need. DOD has validated enduring non-materiel and materiel requirements for deployable biometric and forensic capabilities. DOD officials emphasized the importance of this step, given DOD’s increasing operational demand for biometric and forensic capabilities, as shown in figure 4—an interactive graphic—and in appendix II. To better support current and anticipated warfighter demand, DOD validated 30 non-materiel enduring requirements for deployable biometric and forensic capabilities, as shown in table 1. These requirements are designed to transition DOD’s biometric and forensic capabilities, over a multi-year period, from rapidly acquired and OCO-funded capabilities to enduring capabilities that are resourced through base funding. According to DOD officials, the 30 non-materiel requirements remained current and comprehensive as of May 2017. We found that each biometric and forensic non-materiel requirement was submitted by the Army, as DOD’s executive agent for biometrics and forensics; coordinated across the department; and approved and documented by the Joint Requirements Oversight Council in August 2013 and November 2014, respectively. 
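The prioritized processing described for DOD ABIS can be illustrated with a simplified sketch. The following Python fragment is a hypothetical illustration only—the names (submit, process_next) and the priority scheme are invented for this example and do not reflect DOD ABIS’s actual design—showing how match requests with different agreed priority levels might be ordered before being compared against a stored gallery.

import heapq
import itertools

_sequence = itertools.count()  # tie-breaker so equal priorities stay first-in, first-out

def submit(queue, submitter, priority, template):
    # Queue a match request; lower priority numbers are processed sooner.
    heapq.heappush(queue, (priority, next(_sequence), submitter, template))

def process_next(queue, gallery):
    # Pop the highest-priority request and return a match/no-match response.
    priority, _, submitter, template = heapq.heappop(queue)
    return submitter, ("match" if template in gallery else "no-match")

# Example: a higher-priority submission is answered first, even though it arrived second.
queue = []
gallery = {"template-123"}  # stand-in for stored biometric records
submit(queue, "unit-a", priority=2, template="template-999")
submit(queue, "unit-b", priority=1, template="template-123")
print(process_next(queue, gallery))  # ('unit-b', 'match')
print(process_next(queue, gallery))  # ('unit-a', 'no-match')

In a real system the gallery comparison would be a biometric matching algorithm rather than an exact lookup; the sketch is intended only to show how agreed priority levels could determine processing order.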
DOD has validated several materiel enduring requirements for deployable biometric and forensic capabilities that facilitate the recognition, collection, preservation, analysis, transmission, matching, storage, and sharing of biometric and forensic data. While DOD does not have a consolidated list of its validated biometric and forensic materiel requirements at this time, it is in the process of developing such a list. DOD’s materiel requirements are currently described in department, military service, and SOCOM strategies and acquisition documents, and in geographic combatant command operational plans. For example, the 2012 Marine Corps Identity Strategy identified a requirement for biometric and forensic collection, transmission, and storage capabilities to support operations globally. Additionally, the Army identified enduring requirements for DOD’s authoritative biometric database in documents such as its draft 2016 capability production document and its 2015 analysis of alternatives. DOD has made significant progress in addressing 7 of the 30 validated non-materiel enduring requirements for deployable biometric and forensic capabilities. The military services and SOCOM have also taken actions to ensure the continued availability of several deployable materiel biometric and forensic capabilities to meet enduring requirements. However, DOD’s efforts to institutionalize deployable biometric and forensic capabilities are limited by strategic planning gaps and acquisition management challenges. DOD has made significant progress in addressing 7 of the 30 validated non-materiel requirements for biometric and forensic capabilities that were identified in 2013 and 2014, as shown in table 2. According to DFBA documentation, DOD is in the process of addressing the remaining 23 non-materiel requirements, but as of May 2017 their status was below 75 percent complete. DFBA is leading DOD’s effort to address all 30 validated non-materiel requirements, and it has prioritized and established timeframes for their completion by 2020, as directed by the Joint Requirements Oversight Council. DFBA officials told us that they initially focused on doctrine requirements, such as issuing Joint Doctrine Note 2-16, Identity Activities, and integrating biometric and forensic activities into existing joint publications, to better address training and policy requirements. Appendix III includes a description of all 30 validated non-materiel enduring requirements by area, status, and anticipated completion, as of May 2017. DOD has developed biometric and forensic capabilities to meet several validated enduring materiel requirements, and it has made progress in transitioning these capabilities from OCO to base funding. The military services and SOCOM have initiated acquisition and sustainment programs, based on validated requirements, to ensure the continued availability of several materiel biometric and forensic capabilities, including the following: Army Next Generation Biometric Collection Device. The Army has initiated an acquisition program to identify a follow-on capability for its existing biometric collection device, the Biometrics Automated Toolset, which is scheduled to reach end-of-life in 2022, according to Army officials. The Army is conducting an analysis of alternatives to be completed at the end of fiscal year 2017 to inform its decision, according to the same officials. Biometric Enabling Capability (hereinafter referred to as the DOD ABIS follow-on system). 
In 2015 the Army completed an analysis of more than 10 alternatives to inform DOD’s decision regarding a DOD ABIS replacement. DOD ABIS is scheduled to be replaced in fiscal year 2022. Forensic Exploitation Laboratories. Army officials expect to transition these laboratories to an enduring, base-funded capability in 2019. Officials from the Defense Forensic Science Center noted that the Army’s draft expeditionary forensic strategy calls for an expeditionary lab to be aligned with each of the six geographic combatant commands. Identity Dominance System. The Navy and Marine Corps are jointly pursuing a replacement for their existing biometric collection device, the Secure Electronic Enrollment Kit, which, according to Navy and Marine Corps officials, is scheduled to reach end-of-life in 2019. SOCOM Biometric Collection Device. SOCOM has initiated an acquisition program to replace its existing Secure Electronic Enrollment Kit and BioSled collection devices, which currently fulfill validated requirements. SOCOM officials anticipate that the replacement capability will be available in 2019. DOD officials stated that the department has made progress in transitioning enduring biometric and forensic materiel capabilities from OCO to base budget funding. For example, Army officials stated that DOD ABIS has transitioned from a combination of OCO and base budget funding to an enduring capability funded through DOD’s base budget. The Navy, Marine Corps, and SOCOM have also developed comprehensive programs of record for their biometric and forensic materiel capabilities that are expected to be funded through their respective base budgets. In addition, the Army anticipates transitioning its forensic exploitation laboratories from OCO to base funding by 2019. Officials from across DOD noted the importance of continuing to transition biometric and forensic materiel capabilities from OCO to base funding to better ensure their continued availability. DOD’s efforts to institutionalize its enduring deployable biometric and forensic capabilities are limited by strategic planning gaps and acquisition management challenges. These limitations include the absence of a current biometric strategic plan and supporting implementation plan, the absence of acquisition professionals to oversee CENTCOM’s Near Real Time Identity Operations solution, the absence of a geographically dispersed DOD ABIS back-up capability, and difficulties in hiring and retaining qualified personnel to operate and maintain DOD ABIS. While DOD has a current and approved forensic strategic plan, it does not have a current and approved biometric strategic plan. According to Standards for Internal Control in the Federal Government, strategic plans set the goals and objectives for an entity to achieve more effective and efficient operations and to minimize waste. Furthermore, the standards call for goals and objectives to be reviewed periodically and updated as necessary. In 2015 DOD issued a forensic strategic plan to guide its forensic enterprise through fiscal year 2020. The plan identifies several goals and objectives, such as enhancing enterprise effectiveness and information-sharing. DOD also issued a supporting forensic implementation plan in 2015 that includes strategic planning elements for each of the objectives, such as intended outcomes, measures of effectiveness, and offices of primary responsibility. 
According to USD(AT&L) officials, the forensic strategic plan plays a critical role in focusing and prioritizing DOD’s forensic enterprise activities. In contrast, DOD’s biometric strategic plan is out of date, and the department has not developed a supporting implementation plan. Specifically, DOD issued a biometric strategic plan in 2008, covering the 2008–2015 timeframe. The plan identifies several goals and objectives, such as institutionalizing biometric capabilities and coordinating biometric efforts across the department more effectively. The plan includes a requirement that it be reviewed annually and updated as necessary. The plan also directs that a supporting implementation plan be developed. However, according to DOD officials, the biometric strategic plan has not been reviewed or updated since 2008, and a supporting implementation plan has not been issued. USD(AT&L), Army, Navy, Marine Corps, and DFBA officials agreed that the biometric strategic plan should be updated and a supporting implementation plan issued to better focus and prioritize enterprise goals and objectives for matters such as doctrine and policy, coordination, and acquisition and sustainment efforts. For example, DOD officials noted that the military services and SOCOM have a number of ongoing biometric acquisition and sustainment initiatives that are not articulated and synchronized in a single document, and that including information about these initiatives in an updated biometric strategic plan would enhance long-range enterprise planning. According to DOD officials, the 2008 biometric strategic plan has not been reviewed and updated, and a supporting implementation plan has not been issued, because no organization has been assigned responsibility for completing these tasks. Further, these officials stated that if an entity were to undertake these tasks independently, without being assigned to do so, there would likely be mixed acceptance across the enterprise. Without a strategic plan that identifies goals and objectives and a supporting implementation plan that identifies outcomes, measures of effectiveness, and responsibilities, among other things, DOD may be missing an opportunity to reprioritize and better align enterprise efforts in important areas such as acquisition and sustainment. DOD’s acquisition management challenges that are specific to its biometric and forensic enterprises include the absence of a milestone decision authority to oversee CENTCOM’s Near Real Time Identity Operations solution, the absence of a geographically dispersed DOD ABIS back-up capability, and difficulties in hiring and retaining qualified personnel to operate and maintain DOD ABIS. CENTCOM’s Near Real Time Identity Operations solution lacks a milestone decision authority supported by acquisition professionals. According to DOD officials, the Army could have more thoroughly considered existing, viable, and potentially less costly alternatives to address CENTCOM’s 2014 operational need for a Near Real Time Identity Operations capability. In 2015 SOCOM offered the Army its Special Operations Forces Exploitation capability as a potential solution. According to military service, SOCOM, and DFBA documentation and officials, SOCOM’s capability was a proven, highly effective, and cost-efficient communications architecture that met many of CENTCOM’s 21 operational need requirements, including the ability to transmit and receive a match/no-match response from DOD ABIS within 3 minutes. 
Navy and Marine Corps officials stated that they modeled their communication architecture (i.e., the Department of the Navy Identification and Screening Information System) on the Special Operations Forces Exploitation capability, based on its demonstrated high performance and reliability. Other Army officials noted that the Army’s fielded Biometrics Automated Toolset capability could potentially have been leveraged to satisfy some of CENTCOM’s operational need requirements. When CENTCOM’s joint emergent operational need was validated by the Joint Staff and assigned by the Joint Rapid Acquisition Cell, the Army office responsible for overseeing the Near Real Time Identity Operations solution was given 90 days to identify and field a potential solution; thus, according to DOD officials, they had limited time to thoroughly assess alternative options. Army officials observed that while they discussed the feasibility of the Special Operations Forces Exploitation capability and other potential solutions with DOD, military service, and SOCOM officials in 2015, they rejected these alternatives because the alternatives did not meet all of CENTCOM’s requirements, including the ability to share unclassified information with allied partners and the ability to transmit and receive all match/no-match responses within 3 minutes. While we did not validate the findings of the following two assessments, or the Army’s efforts to address the deficiencies they identified, the assessments highlight concerns within DOD regarding the performance of the Near Real Time Identity Operations solution. In June 2016 the Center for Naval Analyses issued an analysis of biometric and forensic data collected through November 2015 that examined several DOD information systems and found that the Near Real Time Identity Operations solution produced inconsistent match/no-match responses due to data synchronization challenges that could increase risk for existing and future missions conducted in the CENTCOM area of responsibility. In September 2016 the Army completed its operational assessment of the Near Real Time Identity Operations solution and found that it provided inconsistent match/no-match responses that “reduced warfighter confidence in the system.” Based on their lack of confidence in the system, SOCOM and the Marine Corps sought and received approval for their forces in the CENTCOM area of responsibility to use their existing capabilities instead of the Near Real Time Identity Operations solution. Marine Corps officials asserted that the Near Real Time Identity Operations solution continued to provide incomplete match/no-match data as of May 2017. Army officials acknowledged that the operational assessment of the Near Real Time Identity Operations solution identified major deficiencies; however, they stated that the Army had addressed those deficiencies as of May 2017. In addition, CENTCOM determined that the solution has military utility, and CENTCOM is interested in pursuing further enhancements to meet all of its 21 operational need requirements. According to DOD Instruction 5000.02, a milestone decision authority, supported by acquisition professionals, is to be assigned to oversee a rapid acquisition program such as the Near Real Time Identity Operations solution. The milestone decision authority is responsible for, among other things, overseeing the evaluation of alternative existing technologies to consider cost, schedule, performance, and operational risk before selecting a solution. 
However, according to DOD officials, the Army did not assign a milestone decision authority and also did not assign an office with experienced acquisition professionals to oversee the Near Real Time Identity Operations solution. DOD acquisition officials noted that if acquisition professionals had overseen the solution, they might have considered different performance, cost, or schedule trade-offs, which may have resulted in a different outcome. In 2015 DOD officials informed the Army of the need to assign a milestone decision authority, but as of May 2017 the Army had not assigned such an authority. Some Army officials told us that the office currently responsible for overseeing the Near Real Time Identity Operations solution has provided sufficient oversight. According to DOD guidance, no later than 1 year after a system enters operation and sustainment, DOD should complete a disposition analysis that recommends a course of action, including whether to retain the system. Given the absence of a milestone decision authority and the acquisition and performance challenges incurred with the Near Real Time Identity Operations solution, we believe that the department could benefit from a disposition analysis that is completed before the solution reaches operation and sustainment. A disposition analysis not only would inform DOD’s management of the Near Real Time Identity Operations solution, but also would inform the department’s other biometric and forensic acquisition programs, such as the DOD ABIS follow-on system. DOD ABIS lacks a geographically dispersed back-up capability. DOD’s mission-critical authoritative biometric database (i.e., DOD ABIS) faces heightened operational risk because it does not have a geographically dispersed back-up capability. According to officials from across the biometric enterprise, U.S. forces rely on DOD ABIS to store and match biometric and latent fingerprint information. Without a geographically dispersed back-up, there is increased risk that if DOD ABIS were unavailable for unexpected and extended periods, U.S. forces would be unable to receive timely match/no-match information to identify enemy combatants and terrorists. DOD ABIS has a partial back-up system that is located less than 20 miles from its primary site in West Virginia, making it vulnerable to many of the same natural and man-made disasters to which the primary site is exposed. According to the National Institute of Standards and Technology, mission-critical information systems such as DOD ABIS should have a back-up capability located in a geographic area that is unlikely to be affected by the same hazards as the primary site. The Army, which has responsibility for operating and maintaining DOD ABIS, considered geographic dispersal as part of the 2015 analysis of alternatives for the DOD ABIS follow-on system. However, according to DOD officials, the Army has not included geographic dispersal as part of the selection criteria for the DOD ABIS follow-on system. When the Army fielded DOD ABIS in 2004 it was responding to a CENTCOM urgent need to support military operations, and therefore it focused on rapidly fielding an initial capability, according to DOD officials. At that time the Army did not develop a geographically dispersed DOD ABIS back-up capability, and it has not subsequently developed such a capability because of anticipated costs and the assumption that the existing back-up system suffices, according to DOD officials. 
However, DOD officials stated that the Army has an opportunity to consider the pros and cons of developing a geographically dispersed capability as part of the DOD ABIS follow-on system acquisition program. For example, one of the options under consideration entails transitioning DOD ABIS’s data to a virtual cloud format. According to DOD officials, doing so could reduce the operational risk associated with having limited geographic dispersal. DOD’s contractors face challenges in hiring and retaining qualified personnel to operate and maintain DOD ABIS. DOD ABIS’s operational risk is exacerbated by DFBA’s challenges in hiring and retaining qualified personnel to operate and maintain the system. DFBA’s Biometrics Operations Division is responsible for managing DOD ABIS’s day-to-day operations and uses contractors to support several services, including information technology security, staffing an around-the-clock watch desk to support warfighter requirements, and providing latent fingerprint examiners to adjudicate potential fingerprint matches when automated determinations are not definitive, according to officials. However, DFBA officials stated that its contractors have experienced difficulty in hiring and retaining staff for these functions because the current support contracts were issued using a lowest-price technically acceptable source selection process—that is, awarding contracts to the lowest bidder deemed technically qualified. According to DOD officials, this contracting approach limits DOD’s ability to attract bids from companies with less restrictive compensation. In contrast, a tradeoff contracting approach permits tradeoffs among cost and non-cost factors and allows a contract to be awarded to a contractor that is not the lowest bidder. According to DOD officials, a tradeoff approach could enhance the quality of contract offers and improve contractor hiring and retention through better compensation. According to DOD acquisition officials, a lowest-price technically acceptable approach should be used for basic services, such as sanitation and landscaping, and not for technical, highly skilled services, such as information technology security and latent fingerprint examination. DFBA pursued a tradeoff approach for its DOD ABIS mission-critical functions, but Army Contracting Command settled upon a lowest-price technically acceptable approach, according to DFBA and Army Contracting Command officials. Specifically, DFBA was unable to attain a tradeoff approach because of difficulty in completing required documentation, such as detailed job position descriptions, in a timely manner, despite DFBA’s and Army Contracting Command’s combined efforts. The National Defense Authorization Act for Fiscal Year 2017 directs DOD to avoid, to the maximum extent practicable, the use of lowest-price technically acceptable selection criteria to acquire knowledge-based professional services such as information technology, cybersecurity, systems engineering, and technical assistance. Although the current DOD ABIS support contracts pre-date the passage of the Act, USD(AT&L) and DFBA officials stated that daily operation and maintenance of DOD ABIS are considered knowledge-based professional services that require highly skilled personnel to perform and that, therefore, consistent with the Act, the department should consider pursuing a tradeoff contracting approach when it is practicable to do so, such as during future contract solicitations. 
Standards for Internal Control in the Federal Government emphasizes the importance of recruiting, developing, and retaining competent personnel. DFBA’s ability to provide timely and authoritative match/no-match responses to U.S. forces engaged in ongoing operations might be negatively affected if its contractors cannot hire and retain sufficient numbers of highly skilled personnel to operate and maintain DOD ABIS’s mission-critical functions. In our prior reports on DOD’s biometric and forensic activities issued since 2011, we made 16 recommendations to enhance the biometric and forensic enterprises. As of May 2017, DOD had implemented 15 of the 16 recommendations and was making progress toward implementing the remaining recommendation, as shown in table 3. The 15 closed recommendations and additional steps DOD has taken since they were closed are summarized in appendix IV. In March 2011 we found that a biometric collection device used primarily by the Army did not meet DOD-adopted standards and that DOD did not have a finalized biometric information-sharing agreement with the Department of Homeland Security, and we identified concerns that DOD ABIS might be unable to meet the search demands of non-DOD biometric systems. We made five recommendations addressing DOD’s process and policies for updating and testing collection devices and improving information-sharing across federal agencies. DOD has implemented each of these recommendations. For example, in January 2016 DOD updated its biometric directive, which, among other things, now assigns responsibility for ensuring that DOD’s biometric-related systems conform to federal standards. In addition, in January 2016 the Assistant Secretary of Defense for Homeland Defense and Global Security, in coordination with the Department of Homeland Security, updated guidance to further improve the sharing of biometric, biographical, and identity-management data between the two departments for screening and identity-verification purposes. In April 2012 we found that biometric training for leaders did not provide instruction on the effective use of biometrics; that several factors during the data transmission process limited the use of biometrics in Afghanistan; and that requirements did not exist for DOD to disseminate biometric lessons learned across the department. We made seven recommendations to address these findings, six of which the department has implemented. For example, between February 2015 and January 2017 DOD approved 25 new universal joint tasks that relate to biometric and forensic training. This action is one of the first steps DOD must take in order to institutionalize biometric-related training and education to support its operational requirements. With respect to the recommendation that has not been implemented, DOD officials told us that the department is taking actions to address several data transmission factors that hindered the Army’s and Marine Corps’ ability to identify (and capture) enemy combatants in Afghanistan in a timely manner. These factors include mountainous terrain, competing demands for communications infrastructure, and delays in updating hand-held biometric collection devices with the most current biometrically enabled watchlist. During this review, USD(AT&L) and military service officials told us that these data transmission factors will be analyzed and potentially addressed through the DOD ABIS follow-on system acquisition program and the CENTCOM Near Real Time Identity Operations solution. 
We believe that these actions will address the intent of our 2012 recommendation. DOD officials also stated that they have improved the reliability and responsiveness of DOD ABIS. From fiscal years 2014 through 2016, DOD ABIS was available more than 98 percent of the time, excluding brief scheduled periods of unavailability for system updates and planned maintenance actions. Additionally, in fiscal year 2016 DOD ABIS’s average match/no-match response time was generally between 1 and 11 minutes, depending on the prioritization level assigned to the biometric submission. In June 2013 we found that DOD’s draft forensic strategic plan was missing important elements, such as milestones and metrics to gauge progress; that USD(AT&L) had not reviewed and evaluated military service and SOCOM budget estimates, as required by DOD’s forensic directive; and that DOD had not provided guidance to the military services on how they were to collect and report forensic budget data to USD(AT&L). We made four recommendations addressing DOD’s forensic strategic plan and the review and evaluation of forensic budget estimates. DOD has implemented each of these recommendations. For example, DOD issued a forensic enterprise strategy in March 2015 and a supporting implementation plan in September 2015. The strategic plan and implementation plan, when viewed together, contain several important elements for effective strategic planning, including goals, milestones, and metrics. DOD relies on its deployable biometric and forensic capabilities to support a range of military operations, including the identification and targeting of enemy combatants and terrorists. Since 2011 DOD has made considerable progress in institutionalizing these capabilities, the majority of which were developed through rapid acquisition processes and funded with OCO funds to meet urgent and emergent warfighter needs in Iraq and Afghanistan. For example, DOD has validated a number of non-materiel and materiel enduring requirements, and several of the resulting capabilities have transitioned, or are in the process of transitioning, from OCO to base funding. Furthermore, DOD has implemented almost all of our prior biometric- and forensic-related recommendations that we believe are consistent with the department’s efforts to institutionalize its deployable biometric and forensic capabilities. However, DOD’s continued success could be diminished by gaps in strategic planning documents and by acquisition management challenges. Specifically, without a current biometric strategic plan and supporting implementation plan, DOD is not well positioned to prioritize and focus enterprise-wide activities. Furthermore, without a milestone decision authority to oversee DOD’s development of a Near Real Time Identity Operations solution, and without a disposition analysis to recommend a path forward, DOD risks facing continued cost, schedule, and performance issues. Lastly, the ability of DOD ABIS to support future warfighter needs could be adversely affected by the absence of a geographically dispersed back-up capability and by challenges in hiring and retaining qualified personnel to operate and maintain the system. Addressing these strategic planning and acquisition management challenges will help DOD sustain the progress it has made toward establishing enduring deployable biometric and forensic capabilities. 
To enhance enterprise-wide biometric strategic planning, we recommend that the Under Secretary of Defense for Acquisition, Technology, and Logistics take the following two actions:

1. Publish an updated biometric strategic plan to identify enterprise goals and objectives; and

2. Publish a supporting biometric implementation plan that includes intended outcomes, measures of effectiveness, and responsibilities, among other things.

To facilitate more effective and efficient acquisition management of DOD’s biometric and forensic enterprises, we recommend that the Secretary of the Army, in coordination with the Under Secretary of Defense for Acquisition, Technology, and Logistics, take the following four actions:

3. Assign a milestone decision authority to oversee the Near Real Time Identity Operations solution;

4. Complete a disposition analysis for the Near Real Time Identity Operations solution before the solution reaches operation and sustainment;

5. Consider including geographic dispersal as part of the selection criteria for the DOD ABIS follow-on system; and

6. Use tradeoff selection criteria, rather than lowest-price technically acceptable criteria, for determining contractor support for DOD ABIS mission-critical functions when it is practicable to do so.

DOD reviewed a draft of this report and concurred with all of our recommendations. DOD also cited actions it plans to take to address them. We believe that if DOD completes the actions it outlines in its response, this will address the intent of our recommendations. DOD’s written comments are reprinted in their entirety in appendix V. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Chairman, Joint Chiefs of Staff; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9971 or at [email protected]. Key contributors to this report are listed in appendix VI. This report evaluates the extent to which the Department of Defense (DOD) has since 2011 (1) validated enduring requirements for deployable biometric and forensic capabilities; (2) taken actions to meet enduring requirements for deployable biometric and forensic capabilities and overcome any related challenges; and (3) taken actions to address prior GAO recommendations regarding DOD’s biometric and forensic capabilities. We did not assess digital; multimedia; cyber; or chemical, biological, radiological, and nuclear forensic requirements and capabilities. To evaluate the extent to which DOD has validated enduring requirements for deployable biometric and forensic capabilities since 2011, we identified and analyzed non-materiel requirements documents drafted by the Army, as DOD’s executive agent for biometrics and forensics, and validated by the Joint Requirements Oversight Council, and we compared them to DOD’s requirements validation process. We met with officials from the Defense Forensics and Biometrics Agency (DFBA) and the Army’s Training and Doctrine Command to obtain greater specificity on the objective of each non-materiel requirement. 
We also identified biometric and forensic materiel requirements by analyzing relevant Office of the Secretary of Defense, military service, and combatant command strategies, plans, and acquisition and sustainment documents, as well as written responses to question sets provided to each of the geographic combatant commands through the Joint Staff. This included reviewing and assessing the Army’s 2015 analysis of alternatives and 2016 draft capability production document for DOD’s authoritative biometric database to identify key performance requirements for the department’s follow-on biometric database. We discussed the materiel biometric and forensic requirements with Joint Staff, military service, combatant command, and DFBA officials responsible for requirements planning and oversight to understand the requirements validation process for materiel solutions. We also met with geographic combatant command officials and analyzed the commands’ written responses to a questionnaire to better understand their current and anticipated demand for biometric and forensic capabilities. To evaluate the extent to which DOD has taken actions to meet enduring requirements for deployable biometric and forensic capabilities since 2011, we reviewed and analyzed relevant planning, acquisition, and sustainment documents, including emergent and urgent operational needs statements, analyses of alternatives, and capability development documents, to identify any challenges and gaps in meeting validated joint requirements. During the course of our analysis, we determined that a DOD-reported completion status of 75 percent or more reflected significant progress on a validated non-materiel requirement. We also compared the content of, and the process for developing, DOD’s biometric and forensic strategic plans with the control activities described in Standards for Internal Control in the Federal Government to determine the plans’ enterprise utility. In addition, we compared federal information systems guidance on contingency planning with acquisition planning and development documents for DOD’s follow-on authoritative biometric database. Furthermore, we reviewed and compared contracting information for service contracts supporting DFBA’s Biometrics Operations Division, which manages the authoritative biometric database, with contracting provisions in the National Defense Authorization Act for Fiscal Year 2017 discouraging the use of lowest-price technically acceptable selection criteria in certain types of procurements. Finally, we met with Office of the Secretary of Defense, Joint Staff, military service, Special Operations Command (SOCOM), geographic combatant command, and DFBA officials responsible for biometric and forensic activities to determine the status of DOD’s deployable non-materiel and materiel biometric and forensic capabilities, current and anticipated funding sources for materiel solutions, and estimated timeframes for completion. To evaluate the extent to which DOD has taken actions to address our prior recommendations regarding its biometric and forensic capabilities since 2011, we reviewed our internal recommendation tracking system for status updates. We also analyzed DOD directives, guidance, and plans that had been updated or released since 2011, as well as written responses to our question set from each of the geographic combatant commands, to determine whether the department had taken actions that met the intent of our recommendations. 
Finally, we met with program management, planning, and acquisition officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) and the military services to gather information and clarification on additional steps the department had taken or planned to take to address our prior recommendations. To address our three reporting objectives, we met with biometric and forensic acquisition, operations, planning, and programming officials from the DOD organizations identified in table 4. We also met with officials from the Center for Naval Analyses to discuss their body of work on DOD biometrics and forensics.

Between 2013 and 2014, DOD validated 30 non-materiel enduring requirements for its deployable biometric and forensic capabilities. These requirements are designed to transition DOD's biometric and forensic capabilities, over a multi-year period, from rapidly acquired and OCO-funded capabilities to enduring capabilities resourced through base funding. The status and anticipated completion date of each requirement are detailed in table 5. As of May 2017, DOD had implemented 15 of 16 recommendations from our prior reports. Table 6 summarizes the 15 closed recommendations and additional steps that DOD has taken since they were closed.

In addition to the contact above, Marc Schwartz, Assistant Director; David Adams; Vincent Buquicchio; Pamela Davidson; Richard Hung; Amber Lopez Roberts; Paul Seely; Sarah Warmbein; and Cheryl Weissman made key contributions to this report.

Defense Forensics: Additional Planning and Oversight Needed to Establish an Enduring Expeditionary Forensic Capability. GAO-13-447. Washington, D.C.: June 27, 2013.

Afghanistan: Key Oversight Issues. GAO-13-218SP. Washington, D.C.: February 11, 2013.

Defense Biometrics: Additional Training for Leaders and More Timely Transmission of Data Could Enhance the Use of Biometrics in Afghanistan. GAO-12-442. Washington, D.C.: April 23, 2012.

Afghan Security: Renewed Sharing of Biometric Data Could Strengthen U.S. Efforts to Protect U.S. Personnel from Afghan Security Force Attacks. GAO-12-471SU. Washington, D.C.: April 20, 2012.

Defense Biometrics: DOD Can Better Conform to Standards and Share Biometric Information with Federal Agencies. GAO-11-276. Washington, D.C.: March 31, 2011.

Defense Management: DOD Can Establish More Guidance for Biometrics Collection and Explore Broader Data Sharing. GAO-09-49. Washington, D.C.: October 15, 2008.

Defense Management: DOD Needs to Establish Clear Goals and Objectives, Guidance, and a Designated Budget to Manage Its Biometrics Activities. GAO-08-1065. Washington, D.C.: September 26, 2008.
Since 2008, DOD has used biometric and forensic capabilities to capture or kill 1,700 individuals and deny 92,000 individuals access to military bases. These capabilities were mainly developed through rapid acquisition processes and were resourced with Overseas Contingency Operations funds—funds that are provided outside of DOD's base budget process. As a result, concerns have been raised about DOD's long-term ability to fund these capabilities. The House Armed Services Committee and House Permanent Select Committee on Intelligence included provisions in committee reports for GAO to review DOD's progress in institutionalizing deployable biometric and forensic capabilities. This report examines, among other issues, the extent to which DOD since 2011 has (1) validated long-term requirements for deployable biometric and forensic capabilities; and (2) taken actions to meet long-term requirements for deployable biometric and forensic capabilities and overcome any related challenges. GAO examined DOD directives, strategies, policies, plans, and requirements and met with cognizant DOD officials. The Department of Defense (DOD) has validated its requirements for long-term deployable biometric capabilities (such as fingerprint collection devices) and forensic capabilities (such as expeditionary laboratories). Biometric capabilities are used to identify individuals based on measurable anatomical, physiological, and behavioral characteristics such as fingerprints, iris scans, and voice recognition. Forensic capabilities support the scientific analysis of evidence—such as deoxyribonucleic acid (DNA) and latent fingerprints—to link persons, places, things, and events. DOD uses deployable biometric and forensic capabilities to support a range of military operations, such as targeting, force protection, and humanitarian assistance. DOD has made significant progress in addressing its long-term requirements for deployable biometric and forensic capabilities, such as issuing new doctrine and establishing long-term funding for several capabilities, including DOD's authoritative biometric database that is used for identifying enemy combatants and terrorists. However, DOD's efforts to institutionalize these capabilities are limited by the following strategic planning gaps and acquisition management challenges:

• While DOD has a current and approved forensic strategic plan, it does not have one for its biometric capabilities, because no entity has been assigned responsibility for developing such a plan, according to DOD officials.

• The Army did not follow DOD's acquisition protocols in developing a recent key biometric capability, and it may have missed an opportunity to leverage existing, viable, and less costly alternatives.

• DOD's authoritative biometric database does not have a geographically dispersed back-up capability to protect against threats such as natural hazards. Having such a back-up could enhance the database's availability.

Addressing these strategic planning and acquisition management challenges could help DOD sustain the progress it has made to establish enduring deployable biometric and forensic capabilities. [Photographs in the report depict a warfighter obtaining a biometric iris image (left) and a forensic investigator collecting a latent fingerprint (right).]
GAO is making six recommendations, including that DOD update its biometric enterprise strategic plan, take steps to more effectively manage the acquisition of a recent biometric capability, and consider developing a geographically dispersed back-up capability for its authoritative biometric database. DOD concurred with all of the recommendations and cited actions it plans to take to address them.
FDA conducts quality system inspections of medical device manufacturers' establishments to assess compliance with applicable FDA regulations, including the quality system regulation to ensure good manufacturing practices and the regulation requiring reporting of adverse events. FDA's routine postmarket quality system inspections include both comprehensive and abbreviated inspections, which differ in the scope of inspectional activity. A comprehensive postmarket inspection of an establishment assesses multiple aspects of the manufacturer's quality system, including management activities to establish, implement, and review the quality system; procedures to control the design and the production or processing of the device to ensure that it conforms to specifications and user requirements; and procedures for preventing, identifying, and correcting quality problems. FDA classifies each completed inspection into one of three categories based on the extent to which the establishment deviates from applicable requirements of the quality system regulation: no action indicated (no deviations or only minor deviations), voluntary action indicated (minor to significant deviations), or official action indicated (significant deviations, which can result in warnings or other regulatory action). MDUFMA required FDA to accredit third persons—which are organizations—to conduct inspections of certain establishments. Manufacturers that meet eligibility requirements may request a postmarket inspection by an FDA-accredited organization. To be eligible to request an inspection of an establishment by an accredited organization, a manufacturer must:

• manufacture a class II or class III medical device;

• market at least one of those devices in the United States;

• market or intend to market at least one of those devices in a foreign country, and either (a) one of those countries certifies, accredits, or otherwise recognizes the FDA-accredited organization as authorized to conduct inspections of establishments or (b) the manufacturer submits a statement to FDA that the law of one of the countries recognizes an inspection by FDA or the FDA-accredited organization;

• have received, after its most recent inspection, a classification by FDA as “no action indicated” or “voluntary action indicated” for the establishment that it seeks to have inspected by an accredited organization; and

• request and receive FDA's approval to use a specific accredited organization.

In addition, to be eligible to request an inspection by an accredited organization, domestic establishments may not have been inspected by the accredited organization during the previous four years, unless the manufacturer requests and receives a waiver from FDA, and foreign establishments must be periodically inspected by FDA. Organizations seeking accreditation to conduct inspections through the accredited persons inspection program submit applications to FDA for review. FDA established criteria for accreditation that incorporate the minimum requirements set out in MDUFMA, including the independence and competence of the accredited organizations. For example, to ensure the independence of organizations accredited to conduct inspections of medical device establishments, MDUFMA prohibits accredited organizations from engaging in the design, manufacture, promotion, or sale of articles regulated by FDA, and FDA's criteria include whether the organization has procedures in place to prevent conflicts of interest.
To ensure that accredited organizations are competent to conduct inspections, MDUFMA requires that accredited organizations agree to limit their work to that for which they have sufficient competence and capacity, and FDA’s criteria include whether the organizations’ personnel have knowledge of pertinent FDA laws, regulations, and inspection procedures. FDA developed a scoring procedure to evaluate applications from organizations in light of these and other criteria. FDA also developed a training program for inspectors from accredited organizations that involves both formal classroom training and training inspections of establishments. The formal classroom training includes instruction on FDA’s regulations pertaining to medical devices and FDA’s techniques for conducting quality system inspections. FDA also requires inspectors to successfully complete three joint inspections with FDA before being cleared to conduct independent inspections. FDA relies on manufacturers to volunteer to host these joint inspections. During the first training inspection, an FDA inspector leads the inspection and the accredited organization’s inspector acts primarily as an observer. During the second training inspection, the accredited organization’s inspector conducts an inspection while being observed and evaluated by an FDA inspector who may provide assistance to the trainee. During the third training inspection, the accredited organization’s inspector conducts an inspection while being observed and evaluated by an FDA inspector who may not provide assistance to the trainee. Each individual inspector from an accredited organization must complete all training requirements successfully before being cleared to conduct independent inspections. Manufacturers that want to have an inspection through the accredited persons inspection program submit a request to FDA that identifies the accredited organization they intend to use and asks for FDA’s approval. Manufacturers include with that request documentation showing that they meet the eligibility criteria. FDA can then provide clearance and approve the request, ask for additional information, or deny the request. If the request is approved, the manufacturer enters an agreement with the approved accredited organization and schedules an inspection. Once the accredited organization completes its inspection, it prepares a report and submits it to FDA. FDA makes the final assessment of compliance with applicable requirements. FDA granted accreditation to 17 of 23 organizations. FDA denied accreditation to applicants that did not meet minimum criteria because their applications were not correctly completed or did not demonstrate technical competence. In addition, some applicants were denied accreditation because MDUFMA limited the number of organizations that could be accredited to 15 during the first year after FDA issued criteria for accreditation. FDA granted accreditation to 17 of 23 organizations that applied to conduct inspections of establishments through the accredited persons inspection program. One or more foreign governments had already authorized each of these accredited organizations to conduct inspections to assess compliance with quality system requirements. FDA announced accreditation of 15 of 22 applicant organizations on November 6, 2003. One of these accredited organizations withdrew from the program in December 2003, leaving 14 accredited organizations. 
After the initial accreditation year, FDA received two additional applications for accreditation, including one from an organization that had been denied accreditation during the first year; FDA accredited both of these organizations. The total number of accredited organizations as of October 31, 2006, was thus 16. FDA denied accreditation to applicants that did not meet minimum criteria—because their applications were not correctly completed or did not demonstrate adequate technical competence—and, because more organizations met the minimum criteria than FDA could legally accredit, to some applicants that met those criteria. During the first accreditation year, FDA received a total of 23 applications from 22 organizations. Of these 23 applications, 2 were not correctly completed and the applicants were denied accreditation. For example, these applications did not include required documentation showing the authority, responsibility, and reporting structure of the individuals who would perform work through the accredited persons inspection program. One of the organizations that had initially submitted an application that was not correctly completed submitted a second, correctly completed application within the first accreditation year. (This second application is included among the total of 23 applications FDA received during the first accreditation year.) Thus, FDA received 21 correctly completed applications from 21 organizations during the first accreditation year. FDA also denied accreditation to applicants whose applications did not demonstrate that the applicants had adequate technical competence. To evaluate organizations' qualifications, FDA developed a checklist for scoring applications against the criteria for accreditation. A group of FDA staff assessed the applications and assigned scores to specific elements, such as technical competence and prevention of conflict of interest. FDA determined that 2 of the 21 correctly completed applications did not demonstrate that the organization had adequate technical competence, and it denied accreditation to these 2 organizations. FDA found that the remaining 19 organizations that applied for accreditation during the first accreditation year met the minimum criteria for accreditation, but it was limited to accrediting 15 organizations during that year. FDA rank-ordered the applications by the total score it assigned through use of the checklist. FDA granted accreditation to the 15 organizations with the highest-ranking applications and denied accreditation to the remaining 4 organizations with lower-ranking applications (an illustrative sketch of this selection rule appears at the end of this discussion). Between March 11, 2004—the date when FDA first cleared an accredited organization to conduct independent inspections of establishments—and October 31, 2006, two accredited organizations conducted independent inspections—one inspection of a domestic establishment and one inspection of a foreign establishment. During the same time period, 36 inspections of domestic establishments and 1 inspection of a foreign establishment were conducted by accredited organizations jointly with FDA officials as part of the training FDA required of accredited organizations. As shown in table 1, individuals from 7 of 16 accredited organizations completed all training requirements and were cleared to conduct independent inspections by October 17, 2006. The remaining 9 accredited organizations had not completed all training requirements as of October 31, 2006.
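The rank-and-cap selection rule described above—score each correctly completed application against the checklist, rank-order by total score, and accredit no more than the statutory maximum—can be expressed compactly. The following Python is a hypothetical sketch; the organization names and scores are invented, since FDA's actual checklist scores are not public in this form.

# Hypothetical sketch of FDA's rank-and-cap selection rule.
STATUTORY_CAP = 15  # MDUFMA's first-year limit on accredited organizations

# 19 applicants that met the minimum criteria (as in the first year),
# paired with invented total checklist scores.
applications = [(f"Org {i}", 95 - i) for i in range(1, 20)]

def select_accredited(apps, cap=STATUTORY_CAP):
    """Rank applications by total score, descending, and accredit up to `cap`."""
    ranked = sorted(apps, key=lambda app: app[1], reverse=True)
    return ranked[:cap], ranked[cap:]

accredited, denied = select_accredited(applications)
print(len(accredited), len(denied))  # -> 15 4, matching the first-year outcome

The rule is purely ordinal: once the minimum criteria are met, only an application's rank relative to the cap determines the outcome, which is why four otherwise qualified organizations were denied in the first year.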
To gain perspective on the number of inspections conducted by accredited organizations, we asked FDA how many inspections it had conducted from March 11, 2004, through October 31, 2006, that could potentially have been conducted by accredited organizations. FDA could not provide exact counts of these inspections for two reasons. First, only those manufacturers that market, or intend to market, a device in a foreign country are eligible to be inspected by an accredited organization, but FDA does not routinely obtain information about foreign marketing activities or plans. Second, eligibility for an inspection by an accredited organization is limited to manufacturers of class II or III medical devices, but FDA does not have readily available information about the classification of devices that were manufactured at establishments at the time of inspection. Instead, FDA told us how many comprehensive postmarket quality system inspections it had conducted at establishments that, as of October 31, 2006, manufactured class II or III medical devices and that met all of the criteria for an inspection by an accredited organization except the criterion that the manufacturer market, or intend to market, a medical device in a foreign country. These counts provide an upper-bound estimate of the number of inspections FDA had conducted that could potentially have been conducted by accredited organizations. From March 11, 2004, through October 31, 2006, FDA conducted 229 such inspections of domestic establishments and 48 inspections of foreign establishments. According to FDA and representatives of affected entities, several factors could influence manufacturers' interest in voluntarily participating in the accredited persons inspection program, whether by requesting an inspection or by hosting a training inspection. FDA and representatives of affected entities described factors that could serve as potential incentives, disincentives, or reasons to defer making a request for an inspection by an accredited organization. Additional factors may influence manufacturers' interest in participating in the program by hosting required training inspections. Potential incentives to having an inspection by an accredited organization include the opportunity to reduce the number of inspections conducted to meet FDA and other countries' requirements and to control the scheduling of the inspection by an accredited organization. FDA and representatives of affected entities told us that manufacturers would prefer to reduce the number of inspections they need to undergo by having a single inspection cover requirements of FDA and other governments, rather than having separate inspections. One reason for this preference is that inspections are disruptive to manufacturers. FDA and representatives of affected entities told us that FDA's requirements are similar, but not identical, to the requirements of other countries. As a result, a single inspection designed to cover multiple requirements would likely take more time than a single inspection designed to meet any one set of requirements, but less time than separate inspections.
Representatives of the accredited organizations with whom we spoke stated that they expect to be able to address multiple inspection requirements in a single inspection, and the one inspection of a domestic establishment that an accredited organization completed independently before October 31, 2006, was a single inspection designed to meet the requirements of FDA, the European Union, and Canada. According to FDA and many representatives of affected entities, another potential incentive to requesting an inspection by an accredited organization is that manufacturers can work with accredited organizations to schedule inspections and can schedule them months in advance. In contrast, FDA generally notifies manufacturers of inspections about a week in advance. The reasons representatives of affected entities gave for preferring to schedule inspections well in advance include that doing so enables them to ensure the availability of their quality managers and minimize disruption to their normal work activities. FDA and representatives of affected entities told us that the potential disincentives to having an inspection by an accredited organization include bearing the cost of the inspection, doubts about whether accredited organizations can cover multiple requirements in a single inspection, and uncertainty about the potential consequences of committing to an inspection to assess compliance with FDA requirements in the near future. Manufacturers pay for inspections that are conducted by accredited organizations; in contrast, manufacturers are not charged for inspections conducted by FDA. Manufacturers that already pay for inspections to meet requirements of foreign countries will likely face a higher cost for an inspection that also covers FDA requirements because the requirements are not identical and the inspection will therefore likely take longer. FDA and representatives of affected entities stated that bearing the cost of the inspection might be a disincentive to participation in the program, and some of these representatives suggested that cost could be particularly important to small manufacturers. Although a goal of the accredited persons inspection program is to reduce the total number of inspections for manufacturers that market devices in the United States and other countries, some representatives of FDA and manufacturers raised doubts about whether the accredited organizations could cover multiple requirements in a single inspection. One manufacturer's representative told us that the accredited organization that inspects the manufacturer's establishments had stated that it would not combine the inspection to assess compliance with FDA requirements with an inspection to address other requirements and would instead conduct two separate inspections. Similarly, some FDA officials expressed uncertainty about whether all of the accredited organizations would develop inspection strategies that effectively address multiple requirements. FDA and Canada are in the process of establishing a pilot program to assess whether accredited organizations can meet the requirements of both countries in a single inspection. In addition, uncertainty about the potential consequences of committing to an inspection to assess compliance with FDA requirements in the near future is a potential disincentive.
Manufacturers who request an inspection by an accredited organization are committing to an inspection to assess compliance with FDA requirements in the near future, even though it is possible that FDA would not inspect them in the next 5 or 6 years—and inspections carry the risk of regulatory action. FDA and most of the representatives of affected entities with whom we spoke told us that this commitment to an inspection is a potential disincentive to participation in the program. For example, one industry representative questioned why manufacturers would ask for—and pay for—inspections when the result could be that FDA closes them down. In addition, because FDA will make the final determination of compliance with its requirements, some representatives of affected entities suggested that manufacturers might be uncertain about whether the accredited organization’s inspection will satisfy FDA, or whether FDA will conduct an additional inspection after reading the report prepared by the accredited organization. Some representatives of affected entities suggested that manufacturers might defer a decision about whether to request an inspection by an accredited organization until uncertainties about the potential incentives and disincentives have been reduced. For example, manufacturers might defer a decision until there is greater certainty about whether accredited organizations are able to conduct single inspections to cover multiple sets of requirements and about how FDA will respond to the inspection reports prepared by accredited organizations. According to representatives of affected entities, some manufacturers— those that are already paying to have routine quality system inspections of their establishments to meet the requirements of other countries—might have other reasons for deferring a request for an inspection by an accredited organization. Manufacturers that already contract with a specific accredited organization to conduct inspections to meet the requirements of other countries might defer participation until that organization has completed all required training and been cleared by FDA to conduct independent inspections. In addition, because manufacturers want to minimize the disruptiveness of inspections, they might defer requesting an inspection through FDA’s accredited persons inspection program until accredited organizations have honed their procedures for conducting inspections to cover FDA’s requirements. Manufacturers’ participation in the accredited persons inspection program also includes their willingness to host training inspections. In addition to some of the potential incentives and disincentives to requesting an inspection by an accredited organization, other factors may have influenced manufacturers’ interest in hosting required training inspections. Fewer manufacturers have volunteered to host training inspections than needed for all of the accredited organizations to complete their training. Some representatives of affected entities speculated that manufacturers might have believed that training inspections would require more time and effort for their staff (and would thus be more disruptive) than inspections conducted by fully trained personnel, or that manufacturers might have believed that training inspections would be more rigorous than nontraining inspections if the trainees and FDA personnel were to take particular care to demonstrate their thoroughness to each other. 
Moreover, FDA and representatives of affected entities indicated that scheduling training inspections was difficult. For example, FDA schedules inspections a relatively short time before the actual inspection, and some accredited organizations were not available to participate because they had already made prior commitments. We provided a draft of this report to the Department of Health and Human Services for comment. The department stated that our report provides an accurate and balanced explanation of the accredited persons inspection program and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and the Commissioner of FDA, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-7119 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The Medical Device User Fee and Modernization Act of 2002 (MDUFMA) requires us to report on the number of inspections of medical device establishments conducted by the Food and Drug Administration (FDA). We are reporting the number of postmarket quality system inspections of domestic establishments where medium- or high-risk medical devices (referred to as class II or class III medical devices) are manufactured and the number of inspections of foreign medical device establishments conducted by FDA. To provide this information, we asked FDA how many inspections it conducted from March 11, 2004—the date when FDA first cleared an accredited organization to conduct independent inspections—through October 31, 2006. With regard to domestic establishments, we asked for the number of quality system inspections of establishments where class II or class III medical devices are manufactured. FDA provided us with the number of such inspections based on the classification of medical devices as of October 31, 2006, because FDA does not have readily available information about the classification of devices manufactured at the establishments at the time of inspection. FDA updates the information about device classification in its inspection database when the types of medical devices an establishment handles change, for example, when a manufacturer changes its device inventory or when FDA reclassifies a device. Based on our review of FDA documents and discussions with FDA officials, we determined that the data FDA provided were sufficiently reliable for the purposes of this report. FDA reported that from March 11, 2004, through October 31, 2006, it conducted 2,814 postmarket quality system inspections of domestic establishments where a class II or III medical device was manufactured as of October 31, 2006. These establishments included medical device manufacturers and remanufacturers, packers and repackers, labelers and relabelers, contract sterilizers, software manufacturers, and reprocessors. During this time period, another 86 domestic inspections were conducted by state investigators under contract to FDA. FDA also reported that it conducted 656 inspections of foreign medical device establishments from March 11, 2004, through October 31, 2006.
To determine the number of organizations that sought accreditation, the number that were accredited, and the reasons for denial of accreditation, we reviewed FDA documentation of the number of applications for accreditation it received and its evaluation of those applications, and we interviewed FDA officials. To determine the number of inspections of foreign and domestic establishments conducted by accredited persons, we asked FDA to provide counts of the number of inspections conducted from March 11, 2004—the date when FDA first cleared an accredited organization to conduct independent inspections—through October 31, 2006. Based on our review of FDA documents and discussions with FDA officials, we determined that the data were sufficiently reliable for our purposes. To determine whether there are factors that could influence manufacturers' interest in voluntarily participating in FDA's accredited persons inspection program, we interviewed FDA officials and representatives of affected entities. As indicated in table 2, the affected entities with which we conducted interviews were four accredited organizations, three organizations that represent medical device manufacturers, and six global medical device manufacturers. For our sample of accredited organizations, we selected two that had been cleared by FDA to conduct independent inspections as of April 2006 and two that had not. To select our sample of manufacturers, we asked the representatives of each of the three organizations that represent manufacturers to provide us with a list of five manufacturers. Two of the organizations provided lists of five manufacturers, and one organization provided a list of four manufacturers. We randomly selected two global manufacturers from each list. The information we obtained from these representatives of affected entities cannot be generalized to other manufacturers or accredited organizations. We also reviewed applicable law, regulations, legislative history, FDA guidance, and other relevant documents. We conducted our work from February 2006 through November 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, James McClyde, Assistant Director; Kristen Joan Anderson; Cathleen J. Hamann; and Julian Klazkin made key contributions to this report.
The Food and Drug Administration (FDA) inspects domestic and foreign establishments where U.S.-marketed medical devices are manufactured to assess compliance with FDA's quality system requirements for ensuring good manufacturing practices and other applicable requirements. The Medical Device User Fee and Modernization Act of 2002 (MDUFMA) required FDA to accredit organizations to inspect certain establishments where devices that are marketed in both the United States and other countries are manufactured. This report includes information that MDUFMA requires GAO to provide on (1) the number of organizations that sought accreditation, the number that were accredited, and reasons for denial of accreditation and (2) the number of inspections conducted by accredited organizations. It also includes information about factors that could influence manufacturers' interest in voluntarily requesting and paying for an inspection by an accredited organization. GAO examined FDA documents, interviewed FDA officials, and obtained information from FDA on the number of inspections conducted from March 11, 2004--when FDA first cleared an accredited organization to conduct independent inspections--through October 31, 2006. GAO also interviewed affected entities, including accredited organizations and medical device manufacturers. FDA granted accreditation to 17 of 23 organizations that applied to conduct inspections of establishments where medical devices are manufactured. FDA denied accreditation to applicants that did not meet minimum criteria because their applications were not correctly completed or did not demonstrate the applicants' technical competence. During the first accreditation year, which started in April 2003, FDA received 23 applications. Of the 23 applications, 2 were not correctly completed and 2 did not demonstrate that the applicants had adequate technical competence. Although the remaining 19 applicants met the minimum criteria, MDUFMA limited the number of organizations that could be accredited to 15 during the first year after FDA issued criteria for accreditation. FDA scored the 19 applications against these criteria and rank-ordered them. It accredited the 15 organizations with the highest ranking applications, but 1 organization later withdrew. After the initial accreditation year, FDA received 2 more applications for accreditation and it accredited both organizations. These 16 organizations remained accredited as of October 31, 2006. Between March 11, 2004, and October 31, 2006, two accredited organizations conducted independent inspections--one inspection of a domestic establishment and one inspection of a foreign establishment. During that same period, 36 inspections of domestic establishments and 1 inspection of a foreign establishment were conducted by accredited organizations jointly with FDA officials as part of training that FDA requires of accredited organizations. As of October 31, 2006, individuals from 7 of the 16 accredited organizations had completed all training requirements and were cleared to conduct independent inspections. Several factors may influence manufacturers' interest in voluntarily requesting an inspection by an accredited organization. According to FDA and representatives of affected entities, there are potential incentives and disincentives to requesting an inspection, as well as reasons for deferring participation in the program. 
Potential incentives include the opportunity to reduce the number of inspections conducted to meet FDA and other countries' requirements and to control the scheduling of the inspection. Potential disincentives include bearing the cost for the inspection and uncertainty about the potential consequences of making a commitment to having an inspection to assess compliance with FDA requirements in the near future. Some manufacturers might be deferring participation. For example, manufacturers that already contract with a specific accredited organization to conduct inspections to meet the requirements of other countries might defer participation until FDA has cleared that organization to conduct independent inspections. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.
Throughout this century, railroads have been a primary mode of transportation for many products, especially for such bulk commodities as coal and grain. Yet, by the 1970s, American freight railroads were in a serious financial decline. The Congress responded by passing landmark legislation in 1976 and 1980 that reduced rail regulation and encouraged a greater reliance on competition to set rates. Railroads also continued a series of combinations to reduce costs, increase efficiencies, and improve their financial health. In 1995, the Congress abolished the Interstate Commerce Commission (ICC)—the federal agency responsible for overseeing rates, competition, and service in the rail industry—and replaced it with the Surface Transportation Board (the Board). Rail shippers and others have expressed concern about the lack of competition in the railroad industry, the extent to which railroads are using their market power to set rates, and the quality of service provided, especially for those shippers with fewer alternatives to rail transportation to move their goods to market. They have also questioned whether the Board is adequately protecting shippers against unreasonable rates and service. By the 1970s, America's railroads were in serious financial trouble. In a 1978 report to the Congress, the U.S. Department of Transportation (DOT) indicated that in 1976, 11 of 36 Class I railroads studied were earning negative rates of return on investment, and at least 3 railroads were in reorganization under the bankruptcy laws. Some of the railroads' problems were due to federal regulation of rates that reduced management control and the flexibility railroads needed to react to changing market conditions. Prior to 1976, almost all rail rates were subject to ICC oversight to ensure that they were reasonable. The Congress sought to improve the financial health of the rail industry by reducing railroad rate regulation and encouraging a greater reliance on competition to set reasonable rail rates. The Congress did so by passing two landmark pieces of legislation—the Railroad Revitalization and Regulatory Reform Act of 1976 (4R Act) and the Staggers Rail Act of 1980. The 4R Act limited the ICC's authority to regulate rates to those instances where there was an absence of effective competition—that is, where a railroad was “market dominant.” Furthermore, the Staggers Rail Act made it federal policy to rely, where possible, on competition and the demand for rail services (called differential pricing) to establish reasonable rates. Among other things, this act also allowed railroads to market their services more effectively by negotiating transportation contracts (generally offering reduced rates in return for guaranteed volumes) containing confidential terms and conditions; limited collective rate setting to those railroads actually involved in a joint movement of goods; and permitted railroads to change their rates without challenge in accordance with a rail cost adjustment factor. Furthermore, both the 4R Act and the Staggers Rail Act required the ICC (now the Board) to exempt certain railroad transportation from economic regulation. The Staggers Rail Act required the ICC to exempt railroad transportation from regulation upon finding that the regulation was not necessary to carry out the rail transportation policy and either (1) the transaction was of limited scope or (2) regulation was not needed to protect shippers from an abuse of market power.
During the 1980s, railroads used their increased freedoms to improve their financial health and competitiveness. The railroad industry has continued to consolidate in the last 2 decades, extending a trend that dates from the 19th century. In 1976, there were 30 independent Class I railroad systems (composed of 63 Class I railroads); by early 1999, there were 9 railroad systems (composed of 9 Class I railroads), and half of that reduction was due to consolidations. (See fig. 1.1.) The nine remaining Class I railroad systems are the Burlington Northern and Santa Fe Railway Co.; Consolidated Rail Corporation (Conrail); CSX Transportation, Inc.; Grand Trunk Western Railroad, Inc.; Illinois Central Railroad Co.; Kansas City Southern Railway Co.; Norfolk Southern Railroad Co.; Soo Line Railroad Co.; and Union Pacific Railroad Co. In 1998, the Board approved the division of Conrail's assets between CSX Transportation, Inc., and Norfolk Southern Corporation. Conrail is expected to be formally absorbed by CSX Transportation and Norfolk Southern in 1999, leaving a total of eight Class I railroad systems. Railroads consolidated to reduce costs and increase efficiencies, making them more competitive. For example, one of the justifications for the 1995 Burlington Northern-Santa Fe merger was to provide shippers with more efficient and cost-effective “single line” service. Both the Board and the railroads involved expected reduced costs and improved transit times because the railroad on which a shipment originated would no longer have to transfer the shipment to another railroad for routing to its final destination. Cost reductions and increased efficiencies were also expected from, among other things, rerouting of traffic over shorter routes, more efficient use of equipment, and increased traffic densities. Consolidations were also justified as providing competitive benefits—both within the rail industry and between railroads and other transportation modes. For example, in its 1996 approval of the Union Pacific/Southern Pacific merger, the Board expected the merger to intensify rail competition in the West between the Burlington Northern and Santa Fe Railway and the combined Union Pacific/Southern Pacific. The acquisition of Conrail by Norfolk Southern and CSX Transportation is expected to yield benefits—both by diverting substantial amounts of highway freight traffic to railroads and by introducing new railroad-to-railroad competition in areas previously served only by Conrail. As Class I railroads consolidated, non-Class I railroads increased their importance in providing service. For example, in 1980, Kansas was served by seven Class I railroads (see fig. 1.2); in 1997, this number was three. Between 1991 and 1996, Class I railroads reduced their mileage operated in the state by about 1,400 miles, while non-Class I carriers increased their mileage by about 1,700 miles (175 percent greater than in 1991). (App. I shows how Class I and non-Class I rail mileage changed in Montana, North Dakota, and West Virginia from 1980 to 1997.) In 1995, the Congress passed the ICC Termination Act of 1995, which abolished the ICC. The act transferred many of ICC's core rail functions and certain nonrail functions to the Board, a decisionally independent adjudicatory agency that is administratively housed in DOT.
Among other things, the Board approves market entry and exit of railroads; approves railroad mergers and consolidations; determines the adequacy of a railroad's revenues on an annual basis; adjudicates complaints concerning rail rates on traffic over which a railroad has market dominance; adjudicates complaints alleging that carriers have failed to provide service upon reasonable request; and exempts railroad transportation from economic regulation under certain circumstances. The ICC Termination Act made several significant changes to railroad regulation. For example, the act eliminated the requirement for railroad tariff filings. However, the act did not alter railroads' authority to engage in demand-based differential pricing or to negotiate transportation service contracts containing confidential terms and conditions that are beyond the Board's authority while in effect. Several of the Board's functions are particularly relevant to this report: the (1) responsibility for determining the adequacy of a railroad's revenues, (2) jurisdiction over rail rate complaints, and (3) jurisdiction over complaints alleging that carriers have failed to provide service upon reasonable request. First, the Board is required to determine the adequacy of railroad revenues on an annual basis. In addition, the Board is required to make an adequate and continuing effort to assist railroads in attaining adequate revenues—that is, revenues that under honest, economical, and efficient management cover total operating expenses plus a reasonable and economic profit on capital employed in the business. Second, the Board is also responsible for protecting shippers without feasible transportation alternatives from unreasonably high rail rates. Where the Board concludes that a challenged rate is unreasonable, it may order the railroad to pay reparations on past shipments and prescribe maximum rates for future shipments. The Board does not have authority over rail rates for car movements made under contracts or for movements that it has exempted from economic regulation. Only about 18 percent of the tonnage moved in 1997 was subject to rate reasonableness regulation by the Board. The remainder either was moved under contract (70 percent), according to the Association of American Railroads (AAR), or was exempt from economic regulation (12 percent). Furthermore, rates on rail traffic priced below the 180-percent revenue-to-variable cost threshold—that is, traffic for which revenue is less than 1.8 times the railroad's variable cost of providing the service—are not subject to regulation by the Board. According to the Board, over 70 percent of all rail traffic in 1997 was priced below this threshold. Third, the Board has the authority to adjudicate service complaints filed by shippers. The Board's process for handling formal service complaints, like its rate complaint process, is an administrative litigation process, in which parties to the dispute file pleadings, disclose and receive information from each other, and present evidence. If the Board decides a case in favor of the complainant, it can require the carrier to provide the shipper with monetary compensation or to adopt or stop a practice. Moreover, the Board is authorized to impose “competitive access” remedies, under which shippers can obtain access to an alternative carrier.
However, to obtain permanent relief, the complaining shipper must demonstrate that the rail carrier currently providing the service (called the incumbent carrier) has engaged in anticompetitive conduct—that is, the carrier has used its market power to extract unreasonable terms or, because of its monopoly position, has disregarded the shipper’s needs by not providing adequate service. As discussed in chapter 5, the Board also has other procedures for providing temporary relief from service inadequacies without a showing of anticompetitive conduct where the carrier is not providing adequate service. The Board may also address service deficiencies through emergency service orders. The Board may issue an emergency service order if it determines that a failure of traffic movement has created an emergency situation that has a substantial impact on shippers or railroad service in a region or that a railroad cannot transport traffic in a manner that properly serves the public. Through emergency service orders, the Board may, among other things, permit the operation of one rail carrier over another carrier’s line to improve the flow of traffic. The Board may also direct a rail carrier to operate the lines of a carrier that has ceased operations. These arrangements may not exceed 270 days. Since 1990, the ICC and the Board have issued eight emergency service orders; prior to its termination, the ICC, in five of these instances, directed a carrier to operate the lines of another railroad. Senators Conrad Burns, Byron Dorgan, Pat Roberts, and John D. Rockefeller, IV, expressed concern that the continued consolidation within the rail industry has allowed railroads to charge unreasonably high rates and provide poor service. The Senators asked us to report on (1) how the environment within which rail rates are set has changed since 1990; (2) how rates for users of rail transportation have changed since 1990; (3) how railroad service quality has changed since 1990; and (4) what actions, if any, the Board and others have taken (or propose to take) to address rail rate and service quality issues. The requesters also asked us to identify difficulties and barriers for shippers, including small shippers, in obtaining relief from unreasonable rates from the Board. We addressed this latter topic and actions that the Board and others have taken to address rail rate issues in our companion report on issues associated with the Board’s rate relief process. To identify how the environment within which rail rates have been set has changed since 1990, we reviewed (1) legislation regarding the economic regulation of railroads, (2) regulations and decisions issued by ICC or the Board regarding rail rate and service issues, and (3) literature available in professional journals and trade publications. We also used reports we have issued on various aspects of the railroad industry and the Staggers Rail Act of 1980 and reviewed selected position papers prepared by railroad and shipper trade associations. To identify the economic and financial status of railroads in the 1990s, we collected information available from various AAR surveys of Class I railroads on the percent of railroad tonnage moved under contract and collected financial information from ICC’s Transport Statistics in the United States, the Board’s Statistics of Class I Freight Railroads in the United States, and AAR’s Railroad Facts. 
We also obtained information on the amount of intercity freight tonnage transported in the United States annually by transportation mode from Transportation In America, published by the Eno Transportation Foundation, Inc. To identify structural changes in the railroad industry since 1990, we reviewed information from AAR on Class I status and on railroad industry combinations, and we reviewed ICC's and the Board's decisions in selected railroad merger cases. To identify how railroad rates have changed since 1990, we obtained data from the Board's Carload Waybill Sample for the years 1990 through 1996 (the latest data available at the time of our review). The Carload Waybill Sample is a sample of railroad waybills (in general, documents prepared from bills of lading authorizing railroads to move shipments and collect freight charges) submitted by railroads annually. We used these data to obtain information on rail rates for specific commodities in specific markets by shipment size and length of haul. According to Board officials, revenues derived from the Carload Waybill Sample are not adjusted for such things as year-end rebates and refunds that may be provided by railroads to shippers that exceed certain volume commitments. Some railroad movements contained in the Carload Waybill Sample are governed by contracts between shippers and railroads. To avoid disclosure of confidential business information, the Board disguises the revenues associated with these movements before making this information available to the public. Using our statutory authority to obtain agency records, we obtained a version of the Carload Waybill Sample that did not disguise revenues associated with railroad movements made under contract. Therefore, the rate analysis presented in this report offers a truer picture of rail rate trends than analyses based solely on publicly available information. The specific commodities selected for analysis were coal, grain (wheat and corn), chemicals (potassium and sodium compounds and plastic materials or synthetic fibers, resins, and rubber), and transportation equipment (finished motor vehicles and motor vehicle parts and accessories). These commodities represented about 45 percent of total industry revenue in 1996 and, in some cases, had a significant portion of their rail traffic transported where the ratio of revenue to variable costs equaled or exceeded 180 percent. Since much of the information contained in the Carload Waybill Sample is confidential, rail rates and other data in this report that were derived from this database have been aggregated at a level sufficient to protect this confidentiality. We used rate indexes and average rates on selected corridors to measure rate changes over time. A rate index attempts to measure price changes over time by holding constant the underlying collection of items that are consumed (in the context of this report, items shipped). This approach differs from comparing average rates in each year because, over time, higher- or lower-priced items can constitute different shares of the items consumed. Comparing average rates can confuse changes in prices with changes in the composition of the goods consumed. In the context of railroad transportation, rail rates and revenues per ton-mile are influenced, among other things, by average length of haul. Therefore, comparing average rates over time can be influenced by changes in the mix of long-haul and short-haul traffic.
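To make the fixed-weight approach concrete, such an index can be written as follows. This is an illustrative formulation rather than the exact computation used in the analysis; it assumes the 1996 tonnages serve as the fixed weights:

\[
I_t = 100 \times \frac{\sum_{c} w_c \, r_{c,t}}{\sum_{c} w_c \, r_{c,1990}},
\]

where \(c\) indexes the commodity flows in the fixed traffic collection, \(w_c\) is the 1996 tonnage of flow \(c\), and \(r_{c,t}\) is the rate (for example, revenue per ton-mile) for flow \(c\) in year \(t\). Because the weights \(w_c\) are held constant across years, movements in \(I_t\) reflect changes in rates alone rather than shifts in the mix of traffic, such as a change in the balance of long-haul and short-haul shipments.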
Our rate indexes attempted to control for the distance factor by defining the underlying traffic collection to be commodity flows occurring in 1996 between pairs of Census regions. To examine the rate trends on specific traffic corridors, we first chose a level of geographic aggregation for corridor endpoints. For grain, chemical, and transportation equipment traffic, we defined endpoints to be the regional economic areas delineated by the Department of Commerce's Bureau of Economic Analysis. For coal traffic, we used economic areas to define destinations and used coal supply regions—developed by the Bureau of Mines and used by the Department of Energy—to define origins. An economic area is a collection of counties in and about a metropolitan area (or other center of economic activity); there are 172 economic areas in the United States, and each of the 3,141 counties in the country is contained in an economic area. For each selected commodity and each corridor, we determined the average shipment distance over the 1990 through 1996 time period. We placed each corridor in one of three distance-related categories: 0-500 miles, 501-1,000 miles, and more than 1,000 miles. We then determined, for each selected commodity, the aggregate tonnage over the 1990 through 1996 time period and selected the top five corridors (based on tons shipped) within each distance category for further examination, including changes in revenues and variable costs per ton-mile over the time period. (An illustrative sketch of this corridor-selection procedure appears at the end of this discussion.) To assess how railroad service quality has changed since 1990, we (1) reviewed literature on how railroad service is (or can be) measured; (2) reviewed railroad and shipper statements on the quality of rail service in recent years; and (3) interviewed Class I railroads, shipper associations, and several individual shippers. To obtain a wider perspective on shippers' views about the quality of service they have received and how it might be improved, we sent a questionnaire to members of 11 commodity associations that ship by rail in the United States and to those shippers that had filed rate complaints before the Board. The member organizations represent shippers of the four commodities that accounted for the largest volume of rail shipments—coal, chemicals, plastics, and bulk grain. For coal, chemicals, and plastics, we surveyed all members of the associations, and this report provides the views of the 87 coal shippers and 99 chemicals and plastics shippers that responded to our survey. Because we used statistical sampling techniques to obtain the views of members of one grain association, the National Grain and Feed Association, the statistics we provide relating to the views of grain shippers and of all shippers responding to our survey are presented as estimates. The report provides estimates of the views of 523 grain shippers. In all cases, these estimated 709 coal, chemicals, plastics, and grain shippers indicated that they had shipped goods by rail in at least 1 year since 1990. Some estimates presented in this report do not represent the views of 709 shippers because some shippers did not answer all the questions. For more information on how we conducted our survey, as well as responses to individual questions, see our companion report on current issues associated with the Board's rate relief process (GAO/RCED-99-46).
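The corridor-selection steps described above—aggregating tonnage by corridor, averaging shipment distance, assigning each corridor to one of the three distance categories, and keeping the five highest-tonnage corridors in each category—can be sketched in code. The following Python is a minimal illustration with assumed, hypothetical column and file names; it is not the actual procedure applied to the confidential Carload Waybill Sample.

import pandas as pd

# Hypothetical waybill records: one row per sampled movement, 1990-1996.
# Assumed columns: commodity, origin_area, dest_area, tons, miles.
waybills = pd.read_csv("waybill_sample.csv")  # hypothetical input file

# Aggregate movements into corridors (origin-destination pairs) by commodity.
corridors = (
    waybills
    .groupby(["commodity", "origin_area", "dest_area"])
    .agg(total_tons=("tons", "sum"), avg_miles=("miles", "mean"))
    .reset_index()
)

# Assign each corridor to one of the three distance-related categories.
bins = [0, 500, 1000, float("inf")]
labels = ["0-500 miles", "501-1,000 miles", "more than 1,000 miles"]
corridors["distance_category"] = pd.cut(
    corridors["avg_miles"], bins=bins, labels=labels
)

# Keep the top five corridors by aggregate tonnage within each
# commodity and distance category.
top_corridors = (
    corridors
    .sort_values("total_tons", ascending=False)
    .groupby(["commodity", "distance_category"], observed=True)
    .head(5)
)

Sorting by tonnage before taking the first five rows within each group is what yields the five highest-tonnage corridors per commodity and distance category.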
We also determined the number of formal service complaints that were being adjudicated by ICC on January 1, 1990, and the number that have been filed with the ICC/Board from January 1, 1990, through December 31, 1998. To do this, we asked the Board to identify all formal service complaints between these two dates. To test the completeness of the Board's identification of service complaints, we reviewed selected cases that the Board did not consider to be service-related. We found one service complaint not contained on the Board's original list of complaints. We discussed this complaint with Board officials, who agreed that it should be considered a formal service complaint. We did not review the merits, or appropriateness, of any ICC/Board decisions associated with these complaints.

To determine actions the Board and others have taken or have proposed to take to address service issues, we interviewed officials from the Board, DOT, and U.S. Department of Agriculture (USDA); industry association officials; and officials from Class I railroads and reviewed the documents that they provided. We also reviewed statutes and regulations pertaining to service issues, recent Board decisions on service issues, and emergency and directed service orders issued by the ICC or the Board since 1990. We interviewed officials from the Board, DOT, and USDA about their recent and planned efforts to address the needs of agricultural shippers and obtained and reviewed relevant agency agreements and reports. We interviewed Class I railroad and AAR executives about, and obtained and reviewed documentation on, their 1998 meetings with shippers; efforts to develop and disseminate measures of service; agreements with grain and feed shippers and small railroads; and efforts to improve customer service. We also attended the railroad/shipper meetings held in Chicago in August 1998 and in Atlanta in October 1998. The organizations we contacted during our review are listed in appendix III. Our work was conducted from June 1998 through March 1999 in accordance with generally accepted government auditing standards.

In commenting on a draft of this report, the Board noted that our map of Class I freight railroads in the United States in 1997 (fig. 1.1) did not include trackage rights of Class I railroads over other Class I railroads, including about 4,000 miles of Burlington Northern and Santa Fe trackage rights over Union Pacific. The Board also noted that it has an informal process for handling railroad service complaints and that this process can be used to resolve service problems quickly and inexpensively. In response to these issues, we modified the note to figure 1.1 to indicate that Class I trackage rights over other Class I railroads are not shown on the map, including the 4,000 miles of Burlington Northern and Santa Fe trackage rights over Union Pacific. We also added language better recognizing the Board's informal service complaint process.

Railroads' rate setting since 1990 has increasingly been influenced by ongoing industry and economic changes such as continued rail industry consolidation, which has concentrated the industry into fewer and bigger railroads, and the need for investment capital to address infrastructure constraints. Rail rates are also a function of market competition. Using differential pricing, railroads continued to set rates in the 1990s according to the demand for their services.
Overall railroad financial health improved during the 1990s, and railroads increased their share of the freight transportation market. However, many Class I railroads continued to earn less than what it costs them to raise capital (called the revenue adequacy standard).

Ongoing industry and economic changes have influenced how railroads have set their rates. Since 1990, there has been considerable change in the rail industry and the economic environment in which it operates. Not only has the rail industry continued to consolidate, potentially increasing market control by the largest firms, but capacity constraints have led to an increased need for capital; industry growth has raised the prospect that productivity gains may moderate; and domestic and worldwide economic changes have caused fluctuations in the demand for rail transportation. Many of these changes are expected to continue into the future. Other actions are also expected to influence the rate-setting environment, including ongoing actions to deregulate the electricity generating industry.

The 1990s have seen significant consolidation within the railroad industry. For the most part, this consolidation has concentrated the rail industry in fewer and larger companies and potentially increased market control by these firms. The number of independent Class I railroad systems decreased from 13 in 1990 to 9 in early 1999. These firms control a significant portion of industry revenues as well as traffic. In 1990, the five largest railroads accounted for about 74 percent of total rail industry operating revenue. By 1997, this percentage had increased to about 94 percent. In fact, the two largest Class I railroads (Union Pacific and Burlington Northern and Santa Fe Railway) accounted for about 55 percent of total industry operating revenue. An analysis of ton-miles of revenue freight transported shows similar results. In 1990, the five largest railroads accounted for about three-fourths of total revenue ton-miles transported by the railroad industry. In 1997, the five largest railroads accounted for about 95 percent of revenue ton-miles transported. Again, the two largest Class I railroads accounted for just under two-thirds of all revenue ton-miles transported in 1997.

Some shipper groups and others have expressed concerns about industry consolidation. For example, the Railroad-Shipper Transportation Advisory Council, created by the ICC Termination Act, reported in 1998 that, because of rail industry consolidation, some shippers have developed fears that the railroad that serves them dictates not only the terms of their relationship but also whether they remain economically viable. Consumers United for Rail Equity, representing various shipper and industry trade associations, has also expressed concerns that dwindling competitive rail options resulting from industry consolidation have increased the number of shippers that consider themselves captive to railroads. Finally, the Alliance for Rail Competition, also representing various shipper and industry trade associations, has expressed concern that deteriorating rail service and the potential for monopoly rate abuse by railroads have resulted from the creation of fewer and bigger railroads. This organization believes increased competition in the railroad industry, rather than regulation, would better protect shippers against abuses. The Board plays a role in rail industry consolidation.
The Board not only approves proposed mergers and acquisitions when it finds them in the public interest but also monitors them once they have been approved. As part of the review and approval process, the Board has the authority to attach conditions to a merger or acquisition. In general, these conditions are designed to protect the public against any harm that might otherwise be experienced as the result of one railroad taking over another, to protect against the potential loss of competition, and to protect affected shippers from the loss of another rail carrier's ability to provide essential service. According to the Board, merger conditions are routinely imposed to ensure that any shipper that was capable of being served by more than one railroad before a merger will continue to have more than one railroad available after the merger. These conditions typically involve granting another railroad either rights to operate on the combining railroads' track or some form of switching rights to gain access to affected customers of the combining railroads. Such conditions have been imposed in all large mergers occurring during the 1990s. Board officials have acknowledged, however, that staff and resource limitations force them to be less proactive in monitoring mergers to ensure that the conditions imposed are working properly to preserve pre-merger competition.

The rate-setting environment has also been increasingly affected by railroads' infrastructure needs. Railroads have increased their market share and the amount of tonnage they carry each year. However, even with the increased demand for rail transportation, real rail rates have declined, requiring railroads to continue seeking ways to reduce costs. Railroads have cut costs in two principal ways: reducing miles of road operated and reducing employment. (See figs. 2.1 and 2.2.) From 1990 to 1997, the miles of road operated by Class I railroads decreased about 15 percent (from about 119,800 miles to about 102,000 miles), and Class I employment decreased by about 18 percent (from 216,000 employees to 178,000 employees). Although reductions in miles of road operated and employment have helped to reduce costs, they have also created capacity constraints and a need for investment capital to address these constraints as the rail market has grown in recent years. Obtaining this capital has become a concern of the rail industry, particularly given falling rates and revenue trends. Some of the railroad officials we spoke with acknowledged this concern and were unsure about how this problem would be addressed. For example, officials of one Class I railroad told us that, in the future, their company would have a difficult time meeting increased market demand because of a lack of equipment and inadequate track and rail facility infrastructure. The officials suggested that additional capital investment would be needed to address choke points—that is, sections of track and facilities that have more traffic than they can handle. However, making such investments would be difficult given falling rail rates. Officials at two other Class I railroads also expressed concern about market growth and capacity constraints and said that additional investment would be needed. These officials also agreed that this would be difficult, at best, given rail rate trends and the need to price their services to be competitive. The rate-setting environment has also been influenced by productivity gains.
In particular, productivity gains have helped railroads reduce costs, which in turn has allowed railroads to reduce rates in order to be competitive. The productivity gains achieved in the 1980s have largely continued into the 1990s. (See fig. 2.3.) We looked at three measures of productivity—net ton-miles per train-hour, revenue ton-miles per gallon of fuel consumed, and revenue ton-miles per employee-hour worked. In general, each of these measures, except net ton-miles per train-hour, has increased since 1990. Net ton-miles per train-hour has fluctuated since 1990 and in 1996 was about 2 percent lower than in 1990. Revenue ton-miles per employee-hour worked, in particular, has shown dramatic increases since the late 1980s. Using an index based on 1980 (1980 equals 100), revenue ton-miles per employee-hour worked more than doubled from 1986 through 1996—rising from an index value of 151 to an index value of 344. According to railroad officials, most of the productivity gains achieved have been shared with customers through rate reductions.

Although productivity gains have played a significant role in past rate making, there is some question as to whether these gains can continue to be achieved. One recent study suggests that the prospects for continued productivity improvements may be diminishing. This was attributed to the expectation that, because industry consolidation has already permitted significant reductions in miles of road operated and employment levels, the next round of industry consolidation and mergers (network rationalization) might yield only modest productivity benefits. If so, there may be fewer opportunities for the rail industry to rely on productivity gains to achieve cost reductions and therefore rate reductions. In fact, future productivity gains may be reduced because what was once redundant track and facilities (and therefore eliminated to reduce costs) might have to be brought back into service to meet market growth. Doing so could limit productivity improvement.

The rate-setting environment has been affected by domestic and world economic changes. This is especially true for rail commodities that are exported. For railroads, volatility in world grain markets can affect the volume of grain transported by rail. Over the last 10 years, the volume of export grain transported by rail has ranged from a low of about 28 million tons in 1994 to a high of about 56 million tons in 1988. Other rail commodities can also show fluctuations over time. From 1992 through 1996, the nation's coal exports ranged from a low of about 71 million tons in 1994 to a high of about 103 million tons in 1992. The volatility in commodity markets can affect railroad rates because it affects the demand for rail transportation. As demand changes, railroads adjust rates to attract or retain business. For example, officials at one Class I railroad told us that it has a wide range of pricing policies for chemicals that allow it to react to changes in world chemicals markets. Officials from the same railroad said that export demand can play a particularly strong role for grain. Although grain rates can be affected by decreases in demand, there is more of an impact when exports are strong and the railroad is trying to keep business away from a competitor.

The rate-setting environment has also been affected by legislative and regulatory actions. In 1990, the Clean Air Act was amended to, among other things, reduce sulfur dioxide emissions by electric generating plants.
The act spurred the demand for low-sulfur coal for use in generating electricity. This increased the demand for western coal, especially coal from the Powder River Basin area of Wyoming and Montana, which is known for its low sulfur content. In 1996, Wyoming produced more coal than any other state in the nation (about 278 million tons, or about 63 percent more than the next highest state, West Virginia). About 85 percent of this coal moved by rail. Although demand for Powder River Basin coal has increased substantially, our analysis shows that inflation-adjusted Powder River Basin rail rates on both long (over 1,000 miles) and medium distance (over 500 miles) routes have generally decreased since 1990.

Ongoing efforts to deregulate the electricity generation industry can be expected to affect future rail rates. Electricity generation is heavily dependent on coal as a fuel source. A recent Energy Information Administration study found that over 87 percent of all coal consumed in the United States was for electricity generation by utilities. Moreover, railroads are the largest carrier of coal, and transportation is a major component of the price of coal delivered to electric power generators. The study suggested that as the electricity generating industry becomes more competitive there will be pressure for the industry to reduce its costs, including the price it pays for coal and for the transportation of coal. These cost reductions may have significant impacts on the railroad industry and future rail rates.

In reducing the economic regulation of railroads through the 4R Act and Staggers Rail Act, the Congress expected that rates determined by market competition would, in general, benefit both railroads and shippers. In many instances, railroads faced competition from other railroads or modes of transportation, and the new congressionally set rail transportation policy recognized the broader nature of this competition by permitting railroads the flexibility to set their rates in response to the rates and services available to shippers from other transportation options. In particular, railroad rates set in response to truck, barge, or railroad competition would typically be lower than rates based primarily on a railroad's full cost of providing service. Differential pricing, then, is a means by which railroads set rates reflecting the demand characteristics of shippers, with the result that shippers with similar cost characteristics (such as the number of railcars to be shipped or lengths of haul to destination) can pay quite different rates.

Although rail rates set using demand-based differential pricing reflect the demand characteristics of shippers and market competition, such rates are also linked to railroad costs. Generally, a railroad's fixed costs (e.g., physical plant such as rail, bridges, and signaling) are (1) incurred before any traffic moves and (2) insensitive to the level of rail traffic. Fixed costs are also largely unattributable to any particular shipper. For a railroad to be profitable, it must recover all of its costs—fixed as well as variable. Differential pricing is a pricing mechanism in which a railroad's fixed costs can be recovered collectively from all shippers but not necessarily proportionately from each shipper.
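A small numeric illustration may help. The sketch below is hypothetical (the shippers, markups, and costs are invented), but it shows the essential logic: every shipper covers its own variable cost, while contributions to the fixed cost vary with each shipper's alternatives.

```python
# Hypothetical illustration of differential pricing. Three shippers
# generate identical traffic at identical variable cost, but their
# markups over variable cost differ with their transportation
# alternatives. All numbers are invented for illustration.

FIXED_COST = 105.0     # unattributable fixed cost to be recovered
VARIABLE_COST = 1.0    # per ton-mile, identical for every shipper

shippers = {
    "captive":           {"ton_miles": 100, "markup": 1.80},  # R/VC = 180%
    "truck_competitive": {"ton_miles": 100, "markup": 1.20},  # R/VC = 120%
    "barge_competitive": {"ton_miles": 100, "markup": 1.05},  # R/VC = 105%
}

recovered = 0.0
for name, s in shippers.items():
    rate = VARIABLE_COST * s["markup"]
    contribution = (rate - VARIABLE_COST) * s["ton_miles"]
    recovered += contribution
    print(f"{name:>17}: rate {rate:.2f}, fixed-cost contribution {contribution:.0f}")

print(f"fixed cost recovered: {recovered:.0f} of {FIXED_COST:.0f}")
```

In this sketch the captive shipper covers about three-quarters of the fixed cost while generating only a third of the traffic, yet the low-markup traffic is still worth retaining: it covers its own variable cost and contributes something toward the fixed cost, which is why, as discussed below, shippers without alternatives were expected to be better off than if the competitive traffic left the railroad entirely.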
Under differential pricing, shippers without effective alternatives to a railroad's transportation generally pay proportionately greater shares of the railroad's fixed costs, while shippers with more alternatives pay proportionately less. Differential pricing was envisioned as benefiting both railroads and shippers. Railroads were expected to benefit from gaining the pricing flexibility to retain or attract shippers that would otherwise choose other transportation modes. In this way, railroads were expected to benefit from a larger and more diversified traffic base than under the previous regulatory scheme. Those shippers with competitive alternatives were expected to benefit from lower rail rates. Shippers without competitive alternatives were also expected to benefit. In theory, these shippers would pay less than if competitive traffic were diverted to an alternative transportation mode, thus leaving those shippers without alternatives to bear the unattributable costs previously assigned to the diverted traffic.

The Congress expected that the transition to differential pricing and a more market-oriented system would not affect all shippers equally because, in general, transportation characteristics and market conditions vary among commodities. In practice, these expectations have been borne out. Data from the Board show that in 1990 about one-third of all rail traffic (as measured by revenues) was transported at rates generating revenues exceeding 180 percent of variable costs. By 1996, this percentage had decreased to 29 percent; that is, about 70 percent of traffic was transported at rates generating revenues of less than 180 percent of variable costs. In addition, in 1996, the percent of commodity revenue for shipments transported at rates generating revenues exceeding 180 percent of variable costs varied widely by commodity—ranging from a low of near 0 percent for fresh fish and tobacco products to a high of about 73 percent for crude petroleum and gasoline. Among the commodities included in our analysis of rail rates (coal, grain, chemicals, and transportation equipment), the percent of commodity revenue for shipments transported at rates generating revenues exceeding 180 percent of variable costs ranged from about 23 percent for farm products (grain) to about 54 percent for chemicals.

The financial health of the railroad industry has also played an important role in how railroads set their rates. During the 1990s, railroad financial health generally improved compared with the 1980s. Not only were returns on investment and equity higher, but railroads were able to increase their market share. However, most railroads have been determined by the Board to be "revenue inadequate"—that is, their earnings were less than the railroad industry's cost of capital. Revenue adequacy determinations have been controversial, and some shippers have questioned the meaningfulness of the current method of determining revenue adequacy. Not being able to earn the cost of capital can affect a railroad's ability to attract or retain capital and remain financially viable.

In general, railroad financial health improved in the 1990s. For example, railroad returns on investment and returns on equity—both measures of profitability—were higher during the 1990s than they were in the 1980s. From 1990 through 1997, returns on investment averaged 8.5 percent per year while returns on equity averaged 10.7 percent per year. (See fig. 2.4.)
This was about 61 percent and 24 percent greater, respectively, than the 5.3 percent and 8.7 percent returns on investment and equity achieved during the 1980s. The operating ratio, which shows how much of a railroad's operating revenues are taken up by operating expenses, also showed improvement. From 1990 through 1997, railroad operating expenses accounted for, on average, about 87 percent of operating revenues annually—about 1 percentage point less than the average from 1980 through 1988. According to a Board official, every 1-percentage-point change in the operating ratio can be significant to the railroad industry.

However, not all aspects of financial health improved. For example, railroads' ability to meet their short-term and long-term obligations was either about the same as, or worse than, it was during the 1980s. The current ratio, which compares the dollar value of current assets (such as cash) to the dollar value of current liabilities (such as short-term debt), averaged about 64 percent from 1990 through 1997. (See fig. 2.5.) In contrast, this ratio averaged about 113 percent from 1980 through 1988. Maintaining a current ratio of less than 100 percent may jeopardize a firm's ability to pay its short-term debts when they come due. A firm's ability to pay its long-term debt is generally measured by the fixed charge coverage ratio, which compares the income available to pay fixed charges with the interest expense that must be paid on debt outstanding. Since 1990, the fixed charge coverage ratio for the railroad industry has been only marginally better than it was during the 1980s. From 1990 through 1997, the fixed charge coverage ratio averaged about 4.7—that is, the income available to pay fixed charges was about 4.7 times the interest to be paid. From 1980 through 1988, the ratio averaged about 4.6. (A simple sketch of these financial measures appears at the end of this discussion.)

Railroads also increased their market share during the 1990s. (See fig. 2.6.) In 1990, railroads transported almost 38 percent of intercity revenue freight ton-miles. By 1997, this market share had increased to 39 percent. The increase came despite a general slowdown in the growth of intercity freight traffic handled by railroads in this decade. From 1990 through 1997, the amount of intercity freight tonnage handled by railroads grew, on average, about 2 percent annually. This compares with about a 3-percent average annual growth in the 1982 through 1989 period. The market share change may be a reflection of railroads' increased use of contracts to tailor their rates and service to meet customer needs. According to AAR, in 1997 about 70 percent of all railroad tonnage moved under contract—up 10 percentage points from 1988. However, contracts are more prevalent for the shipment of some commodities than others. AAR statistics show that, in 1997, over 90 percent of all coal tonnage, but only about 26 percent of grain tonnage, moved under contract. In fact, the percentage of grain tonnage moved under contract has decreased over time. In 1994, about 50 percent of grain tonnage moved under contract, compared with 26 percent in 1997. According to an AAR official, this decrease was primarily attributable to (1) an increased use by railroads of noncontract car reservation/guarantee programs to supply grain cars to shippers and (2) a 1988 regulatory change that increased the amount of public information about grain contracts. Under car reservation/guarantee programs, for a fee, shippers can obtain a set number of railcars for delivery at a future date or dates.
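The financial measures discussed above are straightforward to compute. The following sketch uses hypothetical figures chosen to roughly mirror the industry averages cited in this chapter, and it includes the revenue adequacy comparison introduced earlier (return on investment versus the industry cost of capital).

```python
# Minimal sketch of the financial measures discussed above, using
# hypothetical figures (the dollar amounts are invented for illustration).

def operating_ratio(operating_expenses, operating_revenues):
    """Percent of operating revenues taken up by operating expenses."""
    return 100.0 * operating_expenses / operating_revenues

def current_ratio(current_assets, current_liabilities):
    """Ability to pay short-term debt; below 100 percent may be a concern."""
    return 100.0 * current_assets / current_liabilities

def fixed_charge_coverage(income_available, fixed_charges):
    """How many times over income covers the interest due on debt."""
    return income_available / fixed_charges

def revenue_adequate(return_on_investment, cost_of_capital):
    """Revenue adequacy test: ROI must at least equal the cost of capital."""
    return return_on_investment >= cost_of_capital

# A hypothetical Class I railroad, loosely mirroring the averages above:
print(f"operating ratio: {operating_ratio(8.7e9, 10.0e9):.0f} percent")      # 87
print(f"current ratio: {current_ratio(0.80e9, 1.25e9):.0f} percent")         # 64
print(f"fixed charge coverage: {fixed_charge_coverage(2.35e9, 0.5e9):.1f}")  # 4.7
print("revenue adequate (8.5 percent ROI vs. 11.8 percent cost of capital)?",
      revenue_adequate(8.5, 11.8))  # False
```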
Although railroad financial health has improved, most Class I railroads are still not earning revenues adequate to meet the industry cost of capital. From 1990 through 1997, in any one year no more than three of nine Class I railroads were determined by the ICC/Board to be revenue adequate. From 1990 through 1994, in any one year no more than two of 12 Class I railroads were determined to be revenue adequate. The returns on investment of the remaining railroads have been below the railroad industry's cost of capital. The degree to which Class I railroads did not earn the industry's cost of capital has fluctuated since 1990. (See table 2.1.) This appears to reflect fluctuations in average return on investment more than a change in the cost of capital. The cost of capital generally remained between 11.4 percent and 12.2 percent from 1990 through 1997. In contrast, return on investment has ranged from just over 1 percent to just under 9.5 percent. As we reported in 1990, revenue inadequacy affects the ability of a railroad to attract or retain capital. Insufficient profit not only makes it difficult for railroads to cover costs, maintain operations, and remain financially viable, but may also induce investors to place their funds elsewhere.

Revenue adequacy determinations for the railroad industry have been controversial. According to Board officials, controversy over revenue adequacy determinations is not new, and these issues were addressed at length by the Board's predecessor. However, in recent years, shippers and others have again questioned the meaningfulness of the current method of determining revenue adequacy, particularly in light of railroads' ability to attract capital for mergers and acquisitions. For example, in 1996, Union Pacific was expected to spend about $1.6 billion to acquire Southern Pacific Railroad. Nevertheless, in this same year, the Board determined Union Pacific to be revenue inadequate. Similarly, in 1998, CSX Transportation estimated that it would incur over $4 billion in acquisition costs in the joint CSX Transportation/Norfolk Southern acquisition of Conrail. In 1997, CSX Transportation was determined by the Board to be revenue inadequate.

In April 1998, the Board began a proceeding to address issues related to railroad access and competition. As part of this proceeding, the Board called upon both railroads and shippers to mutually agree on an independent panel of disinterested experts to review how revenue adequacy is determined and to develop recommendations as to how, if at all, this determination should be changed. According to the Board, as of February 1999, although railroad representatives were satisfied with the neutral panel approach, shipper representatives opposed it and suggested instead that the Board initiate a rulemaking proceeding to address revenue adequacy issues.

In commenting on a draft of this report, Board officials said that we should better explain that the Board, in its merger decisions, has taken actions to ensure that no shipper has become captive to a single railroad. The Board also said we should better recognize that controversy over revenue adequacy determinations is not new and that these issues have been addressed at length by the Board's predecessor.
To address these concerns, we have modified the report to acknowledge that the Board imposes merger conditions to ensure that any shipper that was capable of being served by more than one railroad before a merger would continue to have more than one railroad available after the merger. We also added language to better recognize that revenue adequacy determinations have been controversial for some time and that these issues had been dealt with by the Board's predecessor.

Since 1990, railroad rates have generally fallen, both overall and for specific commodities. However, rail rates have not decreased proportionately for all shippers and users of rail transportation. Some shippers, like those transporting coal, have experienced larger rate decreases than other shippers. In other cases, such as long-distance wheat shipments from Montana and North Dakota to west coast destinations for export, real rail rates have stayed about the same as, or were slightly higher than, they were in 1990. We also found that revenues were 180 percent or more of variable costs for a number of routes, including short-distance movements of coal and long-distance movements of wheat from northern plains states such as Montana and North Dakota. The degree of competition on a route may have played a role both in how rates changed and in how high or low the revenue-to-variable-cost ratio was for a specific commodity or route. While the revenue-to-variable-cost ratio is often used as a proxy for market dominance, use of the ratio for this purpose may lead to misinterpretations. For example, even when railroads pass all cost reductions along to shippers in terms of reduced rates, the ratio can increase. Conversely, the ratio can decrease if railroads pass all cost increases along to shippers in the form of higher rates.

In general, real (inflation-adjusted) rail rates have decreased since 1990. In fact, real rail rates have been falling since the early 1980s. In February 1998, the Board found that the average, inflation-adjusted Class I railroad rate had decreased by about 46 percent from 1982 through 1996. The Board found that rates in all major commodity groups decreased, including coal and farm products, which, as bulk commodities, have historically been shipped by rail. However, the decreases were not uniform. (See table 3.1.) Also, in general, the average annual rate of decrease in rail rates was somewhat lower in the 1990s (about 4 percent annually) than it was from 1982 through 1989 (4.6 percent annually). The average annual rate of decrease in rail rates for farm products (which include grains such as corn and wheat) was about 7 percent in the 1980s, compared with only about 1 percent in the 1990s. In contrast, the average annual rate of decrease for coal was just over 3 percent in the 1980s, compared with almost 8 percent in the 1990s.

Our analysis of overall real rail rates showed similar results, with certain exceptions. Using the Board's Carload Waybill Sample—a data base of actual rail rates provided to the Board annually by individual railroads—we constructed rate indexes for coal, grain, certain chemicals, and transportation equipment for the period from 1990 through 1996. (See fig. 3.1.) As the figure illustrates, in general, rail rates for most of these commodities decreased over time. The exceptions were wheat, corn, and chemicals (potassium and sodium; plastics and resins).
Wheat in particular showed general rate increases from 1992 through 1994—from about 2.1 cents per ton-mile to about 2.5 cents per ton-mile—before falling back to about 2.4 cents per ton-mile in 1996. Corn also showed increases from 1990 through 1995—from about 1.8 cents per ton-mile to just under 2.1 cents per ton-mile—before decreasing in 1996 to about 1.9 cents per ton-mile.

There may be a variety of reasons behind the rate changes shown in figure 3.1. As we reported in 1990, railroads reduced rates to become more competitive. In addition, railroads have made extensive use of contracts to do business. Finally, rail rates reflect the specific characteristics of each commodity and the demand for rail transportation. According to USDA, transportation of wheat is dominated by railroads—in 1996 railroads transported about 57 percent of all wheat in the nation—and exports greatly affect the demand for rail transportation. Since 1990, the demand for rail transportation of wheat for export has fluctuated from a high of about 25 million tons in 1993 to a low of about 15 million tons in 1994. (See fig. 3.2.) In contrast, transportation of corn is more dependent on trucks—in 1996, trucks transported about 41 percent of corn production, compared with about 38 percent for rail—and corn is primarily used for domestic poultry and cattle feed, domestic processing into ethanol, and other purposes. Also, significant amounts of corn are grown in areas accessible to navigable waterways, and much of the corn exported is transported by barge to such ports as New Orleans. As shown in figure 3.2, since 1990 the rail transportation of domestic corn has fluctuated between about 45 million tons (in 1991) and about 58 million tons (in 1995). These commodity characteristics may at least partially account for the overall difference in prices between wheat and corn—2 to 2.5 cents per ton-mile for wheat and less than 2 cents per ton-mile for corn.

Our analysis of rail rates since 1990 for coal, grain (corn and wheat), chemicals, and transportation equipment in selected transportation markets/corridors generally showed that real rail rates have fallen. However, not all rates have fallen, and rail rates were sensitive to competition—both intermodal (competition between railroads, trucks, and other transportation modes) and intramodal (rail to rail). For example, we found that real rail rates for corn shipments from the Midwest, where there is barge competition, to the Gulf Coast were significantly less than rail rates for corn shipments on similar-distance routes that appeared to offer little nonrailroad competition. We also found that rates in markets/corridors that are considered to have less railroad-to-railroad competition, such as the plains states of North Dakota and Montana, were generally higher than rail rates on similar-distance corridors that might offer more railroad options. Finally, we found that the relationship of shipment size (number of railcars) to rates varied by commodity. Typically, as shipment size increases, rates charged per ton decrease, reflecting increased efficiencies in train operations. For coal and some other commodities we reviewed, we generally found that the size of shipments remained relatively constant from 1990 through 1996. However, rates were generally falling over the same period, implying that factors other than shipment size accounted for the rate decreases.
We also found that on at least one northern plains wheat corridor we reviewed, railroad rates generally did not decrease even as average shipment size increased.

In general, real rail rates for coal shipments have fallen since 1990. This was true for overall rates and for the specific long-, medium-, and short-distance transportation corridors/markets. The rates on medium-distance routes (between 501 and 1,000 miles) provide a good illustration of the changes we found in coal rates. (See fig. 3.3.) As figure 3.3 shows, real rail rates for both the eastern (Central Appalachia) and western (Powder River Basin) coal routes that we looked at generally decreased since 1990. On the eastern medium-distance coal routes, rates generally decreased between one-half cent and 1 cent per ton-mile. On the western medium-distance coal routes, rates generally decreased between two-thirds of a cent and 1 cent per ton-mile. The only real exception to the rate decreases was a slight increase in real rail rates from 1994 through 1996 on a route from Central Appalachia to Orlando. However, the rate in 1996 was still about seven-tenths of a cent less than the rate in 1990.

There may be a number of reasons why rail rates for the transportation of coal have fallen. Although changes in shipment size may affect rail rates, in general we did not find any significant changes in shipment sizes over the 1990 through 1996 period for the routes/corridors we reviewed. On the medium-distance routes, shipment size for the eastern coal routes generally remained between 80 and 90 railcars over the entire period, except for the Central Appalachia to Norfolk, Virginia, route, where shipment size generally stayed between 40 and 50 railcars. Shipment size on the medium-distance western coal routes generally remained between 100 and 115 railcars. Shipment size on western long-distance routes (over 1,000 miles) also generally remained in the 100- to 120-railcar range, while shipment size on the shorter distance coal routes (500 miles or less) generally remained in the 70- to 90-car range. One exception was a short-distance route between Central Appalachia and Charleston, West Virginia. On this route, the average shipment size increased from about 70 railcars in 1990 to about 100 cars in 1996. Over the same time period, the rail rate decreased about 30 percent—from about 6.5 cents per ton-mile in 1990 to about 4.5 cents per ton-mile in 1996.

The coal rates we examined may also have been affected by rail competition. Currently, two Class I railroads serve the Powder River Basin—the Burlington Northern and Santa Fe Railway and Union Pacific Railroad—and three Class I railroads serve the Central Appalachia region—Conrail, CSX Transportation, and Norfolk Southern. Whether these or other railroads have the market power to extract higher rates from coal shippers is unclear. On the one hand, data from the Board show that from 1990 through 1996 the percent of coal shipments transported where revenues exceeded 180 percent of variable costs averaged about 53 percent. However, in 1996, 47 percent of the coal shipments were transported at rates where revenue exceeded 180 percent of variable costs. This was the lowest percentage since 1987. On the other hand, if the number of rate complaints filed with ICC or the Board is indicative of shippers' views of the market power wielded by railroads, about half of the approximately 40 rate complaints filed since January 1, 1990, or pending on that date, involved coal rates.
As discussed earlier, rail rates for transporting grain such as wheat and corn have generally stayed the same or increased since 1990. However, rail rates for medium-distance routes (501 to 1,000 miles), such as from central plains origins around Oklahoma City and Wichita to Houston, showed some decreases. (See fig. 3.4.) On the other hand, rail rates from Great Falls, Montana, to Portland, Oregon, stayed about the same or increased slightly between 1990 and 1996. We found similar trends in other distance categories, particularly long-distance (greater than 1,000 miles) wheat routes. The rail rates on long-distance wheat routes from Billings, Montana, and Minot, North Dakota, to Portland both stayed relatively constant, at about 3 cents per ton-mile, over the entire 7-year period. Rate trends for corn shipments were similar to those for wheat. Again, the variety of rate trends we found for shipments of corn can be seen in the rates for medium-distance routes. (See fig. 3.5.) Although the rates on some of the routes, most notably those from the Midwest to Atlanta, showed decreases, rates for corn shipments from selected origins in Illinois to New Orleans showed some increases. As with wheat, rail rates for long-distance corn shipments on the routes we reviewed generally varied little, remaining in the 1.4 to 1.6 cents per ton-mile range from 1990 through 1996.

We also found that rail rates for wheat and corn shipments appeared to be sensitive to both inter- and intramodal competition. For example, as shown in figure 3.4, rail rates for shipments of wheat from Duluth, Minnesota, to Chicago, Illinois—a route that is potentially competitive with Great Lakes water transportation—were significantly lower (generally by 0.75 cent to almost 2 cents per ton-mile) than rail rates on other medium-distance wheat routes. This includes rail rates for shipments from Great Falls, Montana, to Portland, Oregon, a route that some consider to lack effective transportation alternatives to rail. The same was true for corn shipments. The rail rates for corn shipments from Chicago and Champaign, Illinois, to New Orleans—routes that are barge-competitive—were substantially less (in some years over 2 cents per ton-mile less) than rail rates on the other medium-distance corn routes. (See fig. 3.5.) The sensitivity to intramodal competition is best seen by comparing rail rates for wheat shipments originating in the central plains states with the rail rates for shipments originating in the northern plains states. As figure 3.4 illustrates, rail rates for wheat shipments originating in Oklahoma City and Wichita were generally about 1 cent per ton-mile less than rates on the Great Falls, Montana, to Portland, Oregon, route, which originated in the northern plains. Northern plains states, such as Montana and North Dakota, generally have fewer Class I railroad alternatives than central plains states, such as Kansas. (See fig. 1.1.)

Shipment size is an important factor influencing railroad costs and hence rates, particularly for agricultural commodities. Loading more cars at one time increases railroad efficiency and reduces a railroad's costs. We found that the average shipment size of wheat originating in the northern plains was typically smaller than for wheat shipments originating in the central plains.
For example, average shipment size on the Great Falls, Montana, to Portland, Oregon, route was about half that of shipments going from Wichita to Houston—about 40 railcars from Great Falls compared with about 70 railcars from Wichita. (See fig. 3.6.) This may partially explain why rail rates and costs for wheat shipments are higher in the northern plains than in the central and southern plains.

To investigate further the effects of shipment size on railroad rates and variable costs, we developed regression equations using waybill data in which annual average revenues per ton-mile and average variable costs per ton-mile were calculated for export wheat corridors and shipment size categories and then regressed on distance, a time trend, and indicators of the shipment size category. (A simplified sketch of this regression setup appears below, at the end of this discussion.) For a set of northern plains export corridors, the effects of increased shipment size on revenues were modest compared with the effects of shipment size on variable costs per ton-mile on these routes, and compared with the effects of shipment size on both revenues and variable costs for a set of central and southern plains export corridors. Specifically, revenues per ton-mile for the northern plains corridors were estimated to be 0.2 of a cent less on shipments of between 5 and 50 cars than on shipments of fewer than 5 cars, while revenues per ton-mile for the central and southern plains corridors were estimated to be 0.6 of a cent less for a similar shipment size increase. Additionally, revenue per ton-mile in the central and southern plains for shipments exceeding 50 cars was estimated to decrease an additional 0.3 of a cent, while in the northern plains, the estimated reduction in revenue per ton-mile for this increase in shipment size was not statistically significant. For variable costs per ton-mile, there was more similarity between the northern plains and the central and southern plains. For example, estimated cost reductions were statistically significant for all shipment size categories, although the magnitudes were greater in the central and southern plains case.

For comparison purposes, we also reviewed rail rates for certain chemicals and transportation equipment. In general, we found that real rail rates for chemical shipments exhibited many of the characteristics of coal and grain discussed previously—that is, many of the rail rates on various routes fell, but rates did not fall on all routes. An illustration of these trends can be seen for shipments of potassium/sodium on medium-distance routes. (See fig. 3.7.) As figure 3.7 shows, rail rates from Canadian origins to Minneapolis, Minnesota, decreased about one-third over the 7-year period—from about 5.4 cents per ton-mile to about 3.7 cents per ton-mile. However, rates from Casper, Wyoming, to Portland, Oregon, remained relatively stable at 3.4 cents per ton-mile. One of the largest rate changes was a decrease in rail rates for the transportation of plastics and resins within the New Orleans, Louisiana, economic area (a short-distance route). On this route, rail rates decreased about 70 percent from 1990 through 1996—from about 47 cents per ton-mile to about 14 cents per ton-mile. (See app. II.) According to the Chemical Manufacturers Association, nearly two-thirds of the tonnage of chemicals and allied products shipped is transported less than 250 miles.
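As noted above, a simplified sketch of the regression setup follows. The data here are synthetic, generated with arbitrary coefficients, since the underlying waybill data are confidential; the sketch shows only the revenue equation, with the smallest shipment-size category as the omitted baseline.

```python
# Sketch of the shipment-size regression described above, on synthetic
# data. Revenue per ton-mile is regressed on distance, a time trend, and
# shipment-size-category indicators. All coefficients are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200

distance = rng.uniform(800, 1800, n)    # corridor distance, miles
year = rng.integers(1990, 1997, n)      # observation year (time trend)
size_cat = rng.integers(0, 3, n)        # 0: <5 cars, 1: 5-50, 2: >50

# Synthetic revenue per ton-mile (cents), built so that larger shipments
# move at lower rates, as the efficiency argument suggests.
size_effect = np.array([0.0, -0.4, -0.7])[size_cat]
rev_per_tm = (3.5 - 0.0005 * distance - 0.05 * (year - 1990)
              + size_effect + rng.normal(0, 0.1, n))

# Design matrix: intercept, distance, time trend, and two size-category
# dummies (shipments of fewer than 5 cars are the omitted baseline).
X = np.column_stack([
    np.ones(n),
    distance,
    year - 1990,
    (size_cat == 1).astype(float),
    (size_cat == 2).astype(float),
])
coef, *_ = np.linalg.lstsq(X, rev_per_tm, rcond=None)

for label, b in zip(["intercept", "distance", "trend", "5-50 cars", ">50 cars"],
                    coef):
    print(f"{label:>10}: {b: .4f}")
```

In this setup, the coefficients on the size dummies correspond to the estimated cents-per-ton-mile reductions of the kind reported above, and a parallel equation with variable costs per ton-mile as the dependent variable yields the cost-side estimates.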
At such distances (less than 250 miles), trucks are a competitive option for chemical shippers; in 1996, about 52 percent of the tonnage of all chemicals and allied products was shipped by truck, with railroads accounting for only 21 percent.

Rail rates for shipments of finished motor vehicles and motor vehicle parts and accessories also showed a variety of patterns. One of the most dramatic rate changes was a decrease in rail rates for the transportation of finished motor vehicles from Ontario, Canada, to Chicago, Illinois. On this route, rates fell about 40 percent—from 19.5 cents per ton-mile to 11.7 cents per ton-mile. In general, most rail traffic in motor vehicles and motor vehicle parts or accessories is under contract or has been exempt from economic regulation. According to AAR surveys, the percent of motor vehicle traffic that moved under contract increased from 55 percent in 1994 to 81 percent in 1997. Whether railroads have the market power to charge high rates is unclear. Officials from Norfolk Southern told us that automotive shippers "pay a premium rate for premium service." This suggests that rates may be related to factors other than market power. In addition, officials from Union Pacific said their company has offered shippers reduced rates in return for guaranteed high volumes of shipments, again suggesting that rates are related to factors other than market power.

Revenue-to-variable-cost (R/VC) ratios are often used as indicators of shipper captivity to railroads. If used in this way, the higher the R/VC ratio, the more likely it is that the shipper relies solely on rail to meet its transportation needs and the more likely it is that the railroad can use its market power to set rates that extract revenues much greater than its variable costs. Since 1990, about one-third of all railroad revenue has come from shipments transported at rates that generate revenues exceeding 180 percent of variable costs. However, the percentage varies by commodity and has changed over time. Our analysis suggests that competition can influence specific R/VC ratios for specific routes and commodities. In general, we found that R/VC ratios exceeded 180 percent on short-distance movements of coal and long-distance movements of wheat from northern plains states—movements where there may be less competition for the railroad. In contrast, R/VC ratios were consistently 180 percent or less on a wide variety of routes, including long-distance movements of coal. While R/VC ratios are often used as proxies for market dominance, use of such ratios for this purpose may lead to misinterpretations because R/VC ratios can increase as rail rates go down and, conversely, can decrease as rail rates go up.

Overall, the percent of railroad revenue from shipments transported at rates generating revenues exceeding 180 percent of variable costs differs by commodity. (See table 3.2.) As table 3.2 shows, from 1990 through 1996, for all commodities, about one-third of all revenues generated by railroads came from movements transported at rates generating revenues exceeding 180 percent of variable costs. However, several commodities, such as coal, chemicals, and transportation equipment, had higher percentages of revenue from shipments at rates generating revenues exceeding 180 percent of variable costs. Farm products (which include grain shipments) had a smaller percentage. As table 3.2 also shows, these percentages can change over time.
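The revenue shares shown in table 3.2 reduce to a simple computation over movement-level records. The sketch below uses hypothetical movements; the actual figures were derived from the Carload Waybill Sample.

```python
# Minimal sketch of the revenue-share computation behind table 3.2: the
# percent of each commodity's revenue earned on movements whose
# revenue-to-variable-cost (R/VC) ratio exceeds 180 percent. The records
# below are hypothetical stand-ins for waybill movements.
from collections import defaultdict

# One record per movement: (commodity, revenue, variable_cost)
movements = [
    ("coal",  200.0, 95.0),
    ("coal",  150.0, 90.0),
    ("grain", 120.0, 80.0),
    ("grain", 100.0, 70.0),
    # ... one record per sampled movement
]

total_revenue = defaultdict(float)
revenue_above_180 = defaultdict(float)
for commodity, revenue, variable_cost in movements:
    total_revenue[commodity] += revenue
    if revenue / variable_cost > 1.80:
        revenue_above_180[commodity] += revenue

for commodity in total_revenue:
    share = 100.0 * revenue_above_180[commodity] / total_revenue[commodity]
    print(f"{commodity}: {share:.0f} percent of revenue from movements "
          f"above 180 percent R/VC")
```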
For example, for coal and transportation equipment, the 1996 percentages of revenue generated from shipments at rates generating revenues exceeding 180 percent of variable costs were the lowest they had been since 1990. By contrast, for chemicals, the 1996 percentage was the highest it had been since 1990.

We found a wide variety of R/VC results for the specific commodities and routes that we examined. In general, R/VC ratios were consistently above 180 percent on short-distance movements of coal (such as from Central Appalachia) and certain long-distance movements of wheat. The R/VC ratios were consistently below 180 percent on long-distance movements of corn and of coal from the Powder River Basin and on medium-distance movements of corn and wheat. The ratios for the other commodities and routes that we reviewed showed no consistent pattern. The ratio results suggest that demand-based differential pricing may have played a role in how railroads set their rates. The fact that R/VC ratios were typically higher for short-distance movements of coal than for medium- and long-distance movements reflects the possibility that, as shipping distance increases, the shipper or receiver is better able to substitute other sources of coal. This same distance-related pattern of R/VC ratios was found for corn, illustrating both the nature of domestic corn markets and the geographic considerations that favor barge options for the transportation of corn. In both the coal and corn cases, various competitive pressures may have constrained the rates that railroads were able to charge for longer-distance movements, resulting in lower R/VC ratios. Long-distance movements of wheat often occurred at much higher R/VC ratios than were typically found for corn and coal. For example, the R/VC ratios for long-distance wheat movements originating in Montana and North Dakota were consistently at 180 percent or higher from 1990 through 1996. In contrast, the R/VC ratios on a Minneapolis, Minnesota, to New Orleans, Louisiana, route—where barges offer competition—were always below 100 percent. We also found differences in the ratio between northern and central plains routes for medium-distance shipments of wheat. (See fig. 3.8.) The northern plains states are considered by some to have fewer rail alternatives than the central plains states. As figure 3.8 shows, the R/VC ratios for wheat shipments originating in Wichita and Oklahoma City were consistently below 180 percent from 1990 through 1996. On the other hand, the R/VC ratio for wheat shipments originating in Great Falls, Montana, was consistently above 180 percent over the entire period.

R/VC ratios have their limitations. One of these is how variable costs are determined. According to the Board, variable costs are developed in accordance with the Uniform Railroad Costing System (URCS). URCS is a general-purpose costing system used by the Board for jurisdictional threshold determinations and other purposes. By necessity, URCS incorporates a number of assumptions and generalizations about railroad operations to determine variable costs. Because of these assumptions and generalizations, the variable costs developed under URCS may not necessarily represent the actual costs attributable to the particular shipment involved. The revenues used to calculate R/VC ratios may also not be actual.
Board officials told us that revenues shown in the Carload Waybill Sample are not adjusted for such things as the year-end rebates and refunds often provided to shippers exceeding minimum volume commitments. As a result of these limitations, it is possible that some of the R/VC ratios used in our analysis would be different if actual revenues and variable costs were known.

Perhaps a more serious limitation is the potential for misinterpreting R/VC ratios. Because an R/VC ratio is a simple division of revenues by variable costs, the ratio can increase even while revenues and variable costs are both decreasing. For example, if rail revenues are $2 and variable costs are $1, the R/VC ratio is 200 percent. However, if revenues decrease to $1.50 and variable costs decrease to $0.50, the ratio becomes 300 percent. Under this scenario, although the railroad has passed all cost reductions along to shippers in terms of lower rates, the increased R/VC ratio makes it appear as though the shipper is worse off. On the other hand, R/VC ratios can decrease while revenues and variable costs are increasing. Using the example above ($2 in revenues and $1 in variable costs, for a ratio of 200 percent), if revenues increase to $2.50 and variable costs increase to $1.50, the ratio becomes 167 percent.

In commenting on a draft of this report, the Board noted that competition is better measured by the effectiveness of transportation alternatives than by the number of competitors. In response to this issue, we modified report language to better recognize the importance of effective competition in measuring the effects of competition on rail rates.

In recent years, shippers have increasingly criticized Class I railroads for providing poor service. Rail service disruptions in the western United States in the summer and fall of 1997 brought national attention to these concerns. Among the problems cited by shippers were an insufficient supply of railcars when and where needed, inconsistent pickup and delivery of cars, and longer than necessary transit times to a destination. In general, railroad officials believe the railroads provide adequate service. However, they agree that service is not what it could be and that the industry has failed to meet shipper expectations.

Currently, the quality of railroad service cannot be measured over time for individual rail carriers or compared between specific railroads. The Board determines whether service is reasonable on a case-by-case basis. In addition, the railroad industry has been reluctant to develop specific service measures for fear they could be misinterpreted or misused by the public or might reveal business-sensitive information. In reaction to widespread criticism of rail service, however, railroads have developed four performance indicators. Although these indicators may be helpful in assessing certain aspects of service, they are more an evaluation of operating efficiency than of quality of service.

Railroad shippers, shipper associations, and local communities have complained in various forums in recent years about poor railroad service. Complaints have been particularly strong from agricultural shippers and communities in the West and Midwest. Union Pacific Railroad's merger with the Southern Pacific Railroad in 1996 and the subsequent widespread delays in delivering railcars to destinations brought national attention to the seriousness of railroad service problems.
Shippers attribute many of the problems they experience to a decrease in competitive transportation options as a result of railroad mergers. In addition, some shippers believe railroads must improve the consistency of their operations and increase the number of available railcars, among other things, in order to improve service levels.

Many rail shippers believe service has been poor, and events in recent years may have exacerbated the problems. For example, in the summer of 1997, during implementation of the Union Pacific/Southern Pacific merger, rail lines in the Houston/Gulf Coast area became severely congested, and freight shipments in some areas came to a complete halt. As the problem spread, many grain shippers experienced delays in railcar deliveries of 30 days or more, while some grain shippers in Texas did not receive railcars for up to 3 months. Transit times for movements of wheat from Kansas to the Gulf of Mexico in some cases exceeded 30 days—four to five times as long as normal. In late 1997, the Board determined that the service breakdown, which had a broad impact throughout the western United States, constituted an emergency and, among other things, ordered Union Pacific to temporarily release its Houston-area shippers from their service contracts so that they could use other railroads serving Houston and to cooperate with other carriers in the region that could accept Union Pacific traffic for movement, to help ease the gridlock.

The lack of predictable, reliable rail service has been a common complaint among some shippers. For example, during public hearings conducted by USDA in 1997, over 400 grain shippers and rural residents from Iowa, Kansas, Minnesota, Montana, and North Dakota expressed their concerns about cars not being delivered; little or no notification of when railcars would be delivered; little or no success in trying to reach appropriate railroad officials for information on car deliveries; and the general lack of available cars when and where needed. These same types of problems were identified by shippers and shipper associations during additional hearings in Montana and North Dakota conducted in December 1997 by a Senate Subcommittee and during the Board's April 1998 hearings on railroad access and competition issues.

Responses to the survey of about 700 bulk grain, coal, chemicals, and plastics shippers that we conducted in the fall of 1998 also reflect concerns about railroad service. An estimated 63 percent of the shippers responding to our survey (329 of 525 shippers that answered this question) said that the overall quality of their rail service was somewhat or much worse in 1997 than it was in 1990. Chemicals and plastics shippers were among the most dissatisfied with the overall quality of their rail service—approximately 80 percent of these shippers indicated that the overall quality of rail service they received in 1997 was somewhat or much worse than in 1990. About 71 percent of coal shippers indicated that the overall service levels provided by the railroads serving them were somewhat or much worse. Finally, echoing the complaints expressed during congressional hearings, an estimated 57 percent of grain shippers responding to our survey indicated that their overall quality of rail service was somewhat or much worse in 1997 than it was in 1990. On the basis of our survey results, the types of problems experienced since 1990 have varied by commodity. (See table 4.1.)
About 66 percent of coal shippers responding to our survey indicated that they experienced somewhat or much worse service in terms of car cycle time—that is, the amount of time it takes to deliver a commodity to its destination and return—in 1997 compared with 1990. Chemicals and plastics shippers identified problems with the consistency of on-time delivery as most problematic; about 84 percent of those responding to our survey identified this problem as worse in 1997 than in 1990. Grain shippers identified railcar availability as their most troublesome problem. An estimated 67 percent of grain shippers indicated that railcar availability during peak periods was somewhat or much worse in 1997 than it was in 1990, and railcar availability in general was rated as worse by an estimated 63 percent of the grain shippers.

Shippers responding to our survey also indicated that the quality of service provided by the railroads has decreased relative to the amount paid for that service, particularly in 1997 compared with 1990. An estimated 43 percent of those shippers (247 of 570 shippers) indicated that the quality of service provided by railroads in 1990 was somewhat or far less relative to the amount paid in 1990. In contrast, the percentage of shippers indicating that the quality of service they received from railroads in 1997 was somewhat or far less relative to the amount paid for that service had increased to an estimated 71 percent of those responding to our survey. Coal shippers and chemicals and plastics shippers were the most dissatisfied—about 80 percent and 88 percent, respectively, were dissatisfied with the value of their service. An estimated 66 percent of grain shippers responding to our survey said the quality of rail service was somewhat or far less relative to the amount that they paid for such service in 1997.

The widespread dissatisfaction with railroad service has not necessarily resulted in many formal service complaints being filed with the ICC or the Board. Only 25 formal service-related complaints were pending with the ICC as of January 1, 1990, or were subsequently filed with the ICC or the Board. These complaints involved a wide range of alleged service problems, including failure to provide a sufficient supply of railcars; late inbound and outbound deliveries; and other kinds of inconsistent service. Of the seven cases that had completed the adjudicatory process as of February 1999, five were decided in favor of railroads and two in favor of shippers. Thirteen cases did not result in a decision because the ICC or the Board did not have jurisdiction over the matter or the shipper withdrew the complaint. Five formal service complaints were pending as of February 1999. Typically, no more than two or three complaints were filed each year, except in 1995, when seven complaints were filed. Most of the complaints were filed against Class I railroads (68 percent), with the rest filed against smaller railroads (32 percent). Of the Class I railroads involved in these complaints, Burlington Northern had the greatest number of complaints filed against it (six), followed by Conrail (five) and CSX Transportation (three). On a commodity basis, customers who shipped grain products represented the largest proportion of complaints (20 percent), followed by customers who shipped steel and railcars (12 percent each).

Many shippers and their associations have attributed service problems, at least in part, to railroad mergers or consolidations.
When asked in our survey about the extent to which mergers or consolidations since 1990 (excluding the Union Pacific merger with Southern Pacific) have affected the quality of rail service they received, an estimated 50 percent of the shippers (268 of 536 shippers responding) indicated that service levels were somewhat or much worse as a result of mergers or consolidations. When asked specifically about the effects of the Union Pacific merger with Southern Pacific on service levels, an estimated 84 percent of the shippers (371 shippers) indicated that the quality of rail service they received was either somewhat or much worse since the merger. Chemicals and plastics shippers indicated they were most affected by the Union Pacific/Southern Pacific merger—about 97 percent indicated that the rail service their companies received was somewhat or much worse. Similarly, about 94 percent of the coal shippers indicated that the Union Pacific merger had resulted in worse rail service. An estimated 77 percent of the grain shippers indicated they received somewhat or much worse rail service after the merger than before it.

Shippers have also attributed service problems to a lack of competitive alternatives to rail transportation. Some shippers told us that historically they have been served by only a single railroad or have had no access to other transportation modes, and they maintain that the rail service they receive is poor. For example, some North Dakota grain shippers told us that they are heavily dependent on railroads to transport their grain because shipping grain by truck (the only other major mode of freight transportation available in the state) over long distances to mills, processors, and export markets is not economically feasible. As a result of this dependence, they claim there is little incentive or reason for the one railroad that serves them to provide quality service. These shippers told us that not only have railroads become more arrogant and stopped providing good service to shippers for whom they no longer face rail competition, but railroads have also tended to serve customers with competitive alternatives first—leaving shippers without competitive alternatives to receive the last and worst service.

Shippers responding to our survey identified several changes that they believe railroads should make to increase rail service quality. Although grain shippers cited the lack of available cars as the aspect of service that has caused them the most problems, an estimated 68 percent of the grain shippers (331 of 485 shippers responding) indicated that they would like to see the consistency of on-time delivery of cars improved. An estimated 51 percent of the grain shippers (246 of 485 shippers responding) believe the number of available cars should be increased, and an estimated 33 percent (162 of 484 shippers responding) want to see the consistency of on-time pickup of cars improved. While both coal shippers and chemicals and plastics shippers identified consistency of on-time delivery as among the three most important changes needed to improve service, they identified improving transit times as among the most important changes that should be made by the railroads—about 75 percent of the coal shippers (62 of 83 shippers responding) and about 84 percent of the chemicals and plastics shippers surveyed (81 of 97 shippers responding) expressed the need for improved transit times.
In general, rail industry officials believe the service they provide to their customers is adequate, and railroads have made capital expenditures in recent years to improve system capacity and service levels. However, railroad officials recognize that railcar availability and the timeliness of rail shipments, among other things, do not always meet shipper expectations. Some industry officials believe capacity constraints, industry downsizing, and an inadequate railcar supply are among the factors that have contributed to the difficulties in meeting shipper service expectations. In addition, some railroad officials agree that rail mergers and consolidations, in particular the Union Pacific merger with Southern Pacific, have exacerbated service problems. Addressing service problems can be a challenge; railroad officials told us that they often face the difficult task of balancing the service needs of customers with the financial viability of the railroads.

In general, railroad officials believe that current service is adequate, particularly when compared with 1990. With the exception of service problems associated with the Union Pacific/Southern Pacific service crisis, officials from the four largest Class I railroads we spoke with about service said overall service in 1997 was at least as good as it was in 1990. They provided a number of illustrations of why service was as good as or better than in 1990. For example, Norfolk Southern officials said that their railroad and other railroads have made significant investments in cars, locomotives, and people to improve service. Officials from CSX Transportation said that investments in such things as the installation of continuously welded rail throughout the network, the purchase of new cars and locomotives, and the development of better information technology to respond to customer problems have all contributed to improved service. There was also general agreement that rail industry consolidation, including the Union Pacific merger with Southern Pacific, has benefited shippers by creating more single-line service that reduces the number of trains that must handle goods en route, thereby reducing costs and transit times.

However, many railroad officials also agree that service is not what it should be and may not have met shipper expectations for various reasons. For example, some railroad officials told us that delays on rail systems have been caused primarily by capacity constraints. As railroad traffic has grown in recent years, and as railroads have scaled back operations in order to cut costs, system capacity has become inadequate. Railroads have also reduced employment levels to cut costs and, given the growth in traffic, have had too few people and crews available to provide the required service. For example, train delay data we obtained from one Class I railroad indicated that shortages of both locomotives and crews were major causes of train delays from 1992 through 1996. Finally, an inadequate supply of railcars, especially for grain shippers, has contributed to shipper dissatisfaction. As one railroad official told us, railcar availability will always be a point of contention between railroads and shippers, and some railroads are reluctant to invest in the number of cars needed to handle peak demand if those cars might sit idle for a significant portion of the year.
Some rail industry officials we spoke with, including those at the Union Pacific Railroad, acknowledged that the Union Pacific merger with Southern Pacific contributed to the service crisis that began in the late summer of 1997 in and around Houston, Texas. According to Union Pacific officials, Southern Pacific had more problems than they expected, especially a substantial amount of deferred track maintenance. In general, these officials said that Southern Pacific had made many operating decisions based on short-term cash flow considerations rather than long-term financial health. As a result, Union Pacific's high traffic levels and a series of external stresses overwhelmed a weak Southern Pacific infrastructure. Union Pacific officials expect that as the railroad recovers from its difficulties, service levels will return to their pre-merger levels—which, in their opinion, had improved since 1990.

The difficulties experienced by Union Pacific affected other railroads as well. For example, officials at Norfolk Southern told us that because Norfolk Southern receives cars from Union Pacific Railroad for shipment to ultimate destinations and sends other cars to destinations on Union Pacific's tracks, Union Pacific's problems adversely affected Norfolk Southern's customer commitments. Officials at Burlington Northern and Santa Fe Railway told us that the railroad took on a significant amount of additional business during the service crisis that would usually have been carried by Union Pacific, which forced a tradeoff: railroad officials decided it was better to serve more shippers at a lower level of service than to serve a more limited number of customers at a higher level of service. Officials from CSX Transportation also said the Union Pacific/Southern Pacific failures were a "wake-up call" to the railroad industry to do a better job of serving its shippers.

In providing high-quality service, railroad management faces the difficult task of balancing the needs of shippers with the financial viability of the railroad. In discussing service adequacy and shipper dissatisfaction, railroad officials made clear the role financial tradeoffs play in service decisions. Officials from CSX Transportation told us that their company could hire more crews and invest in assets to address capacity problems. However, in their opinion, the competitive nature of today's railroad business precludes these extra costs from being passed on to shippers. Officials from other railroads agreed, saying that railroads need to add capacity—which will require significant capital investment. In considering this investment, their companies will have to weigh issues such as the potential for future traffic growth, the cost of adding capacity, and the effects on rates and service. Tradeoffs will also be part of the decision-making process regarding railcars. Some railroad officials noted that shippers and railroads historically have disagreed on the adequacy of the railcar supply, and actual investment in such cars involves weighing the investment cost against the return on that investment. Often, the return is not sufficient to justify the cost.

Management discretion that is inherent in railroad operations can also influence the quality of rail service. The logistics of moving different kinds of freight to a myriad of markets in different geographic locations can be a difficult task.
Management decision making may play a larger role than technology in influencing service levels. This was the conclusion of a 1993 study of freight railroad reliability conducted by the Massachusetts Institute of Technology's Center for Transportation Studies. The study concluded that decisions regarding power management (the availability and positioning of locomotives), train operations (which trains to run, with what cars, and at what time), and the management of railroad terminals all had important consequences for railroad reliability. Some railroad officials we spoke with agreed that management decision making plays a significant role in the quality of service. For example, officials at Norfolk Southern told us that, although the railroad has taken actions to minimize the role of management decisions in providing service, a fairly high degree of management discretion remains in service decisions. Officials at CSX Transportation told us that 85 to 90 percent of service performance involves management decision making about capital expenditures and operating expenses. In their opinion, service decisions at the local level are very much influenced by budget and financial decisions, and insufficient funding could lead to reductions in such things as train service.

Currently, the overall quality of service provided by railroads cannot be measured. While the legislation governing railroad service requires that railroads provide service upon reasonable request, the Board and federal courts determine what constitutes reasonable service and whether a railroad has satisfied its service obligations in the context of deciding specific complaints. Industrywide measures of rail service for the most part do not exist. In general, the very limited industrywide measures we were able to obtain suggest some improvement in recent years; however, these measures are not enough to conclude that service has improved overall. Railroad officials told us they have been reluctant to develop service measures, fearing the measures could be misinterpreted or misused by customers or the public or might reveal business-sensitive information. According to AAR, individual rail carriers have developed measures of service over time that, while addressing carrier- or customer-specific service performance, are not necessarily consistent or continuous measures of service either between carriers or over time for individual carriers.

Railroads are required by statute to provide service upon reasonable request; furnish safe and adequate car service; and establish, observe, and enforce reasonable rules and practices on car service. The Board (and its predecessor, ICC) and federal courts determine what constitutes reasonable service and whether a railroad has satisfied its service obligations in the context of deciding specific complaints. For example, in a 1992 case, the ICC addressed the issue of railcar supply in connection with a complaint challenging the legality of Burlington Northern Railroad's Certificate of Transportation Program. The ICC held that Burlington Northern had not violated its statutory obligations and observed that the common carrier obligation requires that a railroad maintain a fleet sufficient to meet average—not peak—demand for service. According to the ICC, a requirement for a fleet sufficient to meet peak demand would result in a wasteful surplus of equipment, detracting from a railroad's long-term financial health.
Other cases have involved such matters as whether a railroad was justified in refusing a shipper's request to restore service on an embargoed line. However, the ICC's and the Board's decisions are situation-specific and do not easily lend themselves to developing a single set of measures that would allow an assessment of a railroad's—or the industry's—quality of service in all circumstances.

For the most part, industrywide measures of service performance do not exist. For example, according to AAR, there is no standard railroad industry definition of transit time and no central clearinghouse to collect industry service performance data. As a result, the types of service measurements maintained can vary from one railroad to another. AAR officials told us that trying to understand and develop industrywide service measures has been an important issue in the rail industry but "the least fertile area for information." In addition, these officials said that some industrywide service data that used to be collected have been discontinued. For example, AAR used to prepare reports on car cycle times, the percentage of the railcar fleet that was out of service, and car shortages. These reports are no longer prepared because of data quality problems.

A factor complicating the collection of industrywide service measures is that individual railroads have been reluctant to make such information public. According to AAR and officials at some Class I railroads we spoke with, this reluctance is based on concerns that service information could be misinterpreted or misused by the public, customers, or others or that the information may be proprietary. For example, AAR noted that providing information such as railcar transit and cycle times can be misleading because (1) cycle times typically increase when additional railcars are added to the fleet (because it may take longer to load and unload trains with additional cars), (2) cycle times should be compared with target performance levels or standards that reflect seasonal fluctuations, (3) an increase in long-haul business may lengthen cycle and transit times, and (4) a railroad cannot control what happens to a car once it leaves the railroad's tracks for movement to a final destination via another railroad. Regarding the last point, AAR said meaningful data on interline traffic (traffic that is interchanged from one railroad to another), which represents roughly one-third of all rail freight revenue, are generally not maintained by individual railroads and would, therefore, not be captured in measuring railroad performance. As officials from one Class I railroad told us, raw service data alone may not indicate the root cause of problems.

Despite these limitations, two measures of industrywide service offer a narrow view of how service has changed since 1990. One is cycle time for freight railcars, which shows a slight improvement. (See table 4.2.) (In general, the shorter the cycle time, the more readily cars are available for additional trips.) In 1990, the average cycle time for all railcars was just under 18 days. In 1995 (the last year for which data were available), the average cycle time was just under 17 days. However, as table 4.2 shows, cycle time can fluctuate over time and, as AAR has pointed out, may be influenced by several factors, such as changes in trip length. Another measure, the number of revenue freight cars undergoing or awaiting repairs (and, therefore, not available for active revenue service), has also dropped slightly since 1990. (See fig. 4.1.)
In 1990, about 52,000 of 677,800 cars (about 8 percent of railcars owned) were undergoing or awaiting repairs. In 1996 (the last year for which data were available), about 27,000 of 576,800 cars (about 5 percent of railcars owned) were in this category. However, this measure does not shed any light on how efficiently these cars were deployed or whether an adequate supply existed.

Measuring the service performance of the rail industry is further complicated by the fact that individual railroads do not maintain measures of service performance that are continuous or consistent across the industry. For example, we asked for, but generally did not obtain, information from individual Class I railroads about their service performance since 1990 in the following areas: (1) average car transit time—the amount of time from the departure of a shipment from an origin to delivery at a destination; (2) average car cycle time for unit trains; (3) car availability during both peak and nonpeak periods, including the identification of car surpluses and shortages in each period; (4) on-time pickup of shipments; (5) on-time delivery of shipments; and (6) train delay summaries, including causes of train delays. Although some of the railroads we contacted maintained some of this information, including on-time pickup and delivery of cars and causes of train delays, most of this information either was not available going back to 1990 or was used only for specific analyses.

In general, railroad representatives told us that railroads develop and maintain their own unique sets of service performance measures that are tailored to their needs and their customers' needs. Because no two rail customers may have identical service demands, and what is acceptable service to one shipper might not be acceptable to another, most railroads have developed service measures that meet the needs of their specific customers' situations. The type and level of service can also be commodity-specific. For example, officials from CSX Transportation told us that shippers of different types of commodities demand different levels of service. For some commodities (such as intermodal containers and auto parts), on-time pickup and delivery are very important. For other commodities (such as coal and grain), through-put (total tonnage moved) may be more important than timeliness. Finally, officials from Norfolk Southern pointed out that differences exist between eastern and western railroads in the types of service measures a railroad might keep because eastern railroads carry, for example, more coal, and western railroads carry more grain. As a result, eastern railcar delivery delays are generally measured in hours, not days as they might be in the West.

Railroad mergers have also influenced the availability and consistency of service measures. As an illustration, Burlington Northern and Santa Fe Railway officials noted that, prior to the Burlington Northern merger with Santa Fe in 1995, each railroad collected its own unique service data. Because of this, data for the pre-merger period may not be available in all cases or may be inconsistent in what they measured. In addition, officials from Union Pacific Railroad told us they had concerns about providing us with service data because the types of measures collected had changed over the last 10 years—Union Pacific Railroad today is the product of mergers of several railroads, each of which had maintained unique data systems.
Union Pacific officials also noted that advances in computer technology have allowed Union Pacific to generate new types of data that were previously impossible to generate and that are not comparable with any data from pre-merger periods.

In part because of the widespread criticism of the industry over the quality of its service, railroads are developing industrywide performance measures. As part of its overall review of railroad access and competition issues, the Board directed railroads to establish a more formal dialogue with shippers for this purpose. In response, from August to November 1998, AAR held a series of meetings across the country between Class I railroad executives and shippers to discuss service issues. As a result of these meetings, the Class I railroads decided to make available, through the Internet, actual data (not an index) on four measures of performance directed at providing shippers and others with a means to evaluate how well traffic moves over railroad systems. These measures, which the railroads began reporting in January 1999, include (1) total railcars, by type, currently on the rail system; (2) average train speed, by type of service; (3) average time railcars spend in major terminals; and (4) timeliness of bills of lading (receipts listing goods shipped). The measures are updated weekly and broken out by individual railroad. According to AAR, these measures are informational in nature, but consideration is being given to establishing standards and goals in these four areas. AAR expects that rail customers will be able to use the data to determine what is happening in terms of performance on each railroad. However, according to AAR, these measures are not uniformly calculated across the industry and may be influenced by operating differences among railroads, including traffic mix, weather conditions, and terrain. Therefore, AAR cautions that this information should not be used to compare one railroad against another.

Although these measures may be helpful in assessing certain aspects of service, they are more an evaluation of railroad operating efficiency than of quality of service. They also may not resolve more fundamental concerns about service. For example, in a November 1998 letter to the Board, several shipper associations and shippers expressed their concern that better information alone will not solve the service problems resulting from railroad consolidations and enhanced market power.

In commenting on a draft of this report, the Board indicated that 1997 was not a typical year in terms of the quality of railroad service because of the unusual, severe congestion that occurred in the West. The Board also suggested that the performance measures recently developed by the railroad industry can be helpful in measuring some aspects of service quality. In response to these comments, we added material to the report reflecting the Board's assessment that railroad service in 1997 was atypical and that service has improved since that time. We also revised the report to better recognize that the recently developed performance measures may be helpful in measuring some aspects of service quality. However, we continue to believe that these measures are more an evaluation of railroad operating efficiency than of quality of service.
Federal agencies and railroads have taken a number of actions to address the service problems that originated in the Houston/Gulf Coast area in 1997 during the implementation of the Union Pacific/Southern Pacific merger, as well as service issues that are more longstanding and widespread. These actions have led to some progress, particularly the dissemination of new information regarding rail service and additional options for shippers and carriers to resolve disputes. However, in spite of these various actions, shippers remain concerned that many of them lack access to competitive rail alternatives and that this lack of competition affects service levels. Shippers and railroads hold widely differing views on this key issue. The Board has tried, without success, to get the two sides to reach some agreement and has suggested that these issues are more appropriately resolved by the Congress. If the Congress decides to address this issue, it will need to weigh the potential of increased competition to improve service against the potential financial and other effects on the railroad industry.

The Union Pacific/Southern Pacific system started experiencing serious service problems in July 1997, during the process of implementing the merger of the two railroads. Congestion on this system spread to the Burlington Northern and Santa Fe Railway system, affecting rail service throughout the western United States. Serious rail service disruptions and lengthy shipment delays continued throughout the last half of 1997, particularly in the Houston area. To address service problems on the Union Pacific/Southern Pacific system, Union Pacific adopted a Service Recovery Plan in September 1997. Under this plan, the railroad, among other things, took actions to reduce train movements on the Union Pacific/Southern Pacific system and manage traffic flows into congested areas, acquired additional locomotives, and hired additional train and engine crew employees.

In response to growing concerns about the deteriorating quality of rail service in the West, the Board issued an emergency service order in October 1997. This order, and subsequent amendments to it, directed a number of actions aimed at resolving service problems in the Houston area, the source of the crisis. In particular, the order directed temporary changes in the way rail service was provided in and around the Houston area to provide additional options for shippers and carriers, and it required weekly reporting by Union Pacific on a variety of service measurements, such as system train speed and locomotive fleet size. In December 1997, the service order was expanded to require Burlington Northern and Santa Fe Railway to submit grain loading and cycle time information. In August 1998, the order expired, and the Board decided not to issue another emergency service order, finding that there was no longer any basis for such an order given the significant improvements in Houston area rail service. However, the Board noted that service was still not at uniformly improved levels, as reflected by congestion in Southern California. Accordingly, the Board ordered Union Pacific/Southern Pacific and Burlington Northern and Santa Fe Railway to continue the required reporting on a biweekly basis so that it could continue to monitor service levels.
In December 1998, the Board discontinued this requirement, citing further service improvements and the intention of all of the Class I railroads to start issuing weekly performance reports in January 1999.

As part of its oversight of the Union Pacific/Southern Pacific merger, the Board has considered requests by various parties for additional merger conditions that would modify the way in which rail service is provided in the Houston area. In its December 1998 decision, the Board announced several changes in response to these requests in order to enhance the efficiency of freight movements in the area. Most significantly, the Board authorized the joint Union Pacific/Burlington Northern and Santa Fe Railway dispatching center at Spring, Texas, to route traffic through the Houston terminal over any available route, even a route over which the owner of the train does not have operating authority. However, the Board declined to adopt a plan sponsored by a group of shippers, two affiliated railroads, and the Railroad Commission of Texas that would have displaced the current Union Pacific operations in the Houston terminal area by establishing neutral switching and dispatching operations by a third party, the Port Terminal Railroad Association, in order to increase competition in the area. According to the Board, implementing this plan would have required Union Pacific to give trackage rights to this association and to all other railroads serving Houston. In deciding not to adopt the plan, the Board concluded that the service crisis in Houston did not stem from any competitive failure of the Union Pacific/Southern Pacific merger. The Board further concluded that the plan was not necessary to remedy any merger-related harm because it would have added new competitors for many shippers in the Houston area that were served by only one carrier prior to the merger and, therefore, had not experienced a decrease in competition as a result of the merger. According to the Board, absent merger-related competitive harm, such an arrangement would constitute "open access"—the idea that shippers should, wherever possible, be served by more than one railroad, even if, to produce such a system, railroads that own a majority of an area's rail infrastructure would be required to share their property with railroads that do not—an action that Board officials said the law does not provide for at this time.

Union Pacific has recently taken further actions aimed at improving its service levels. These actions have included decentralizing railroad operations and implementing capital and maintenance projects, such as projects to improve, expand, and maintain its track. Also, in August 1998, the railroad created a new internal organization, called Network Design and Integration, which will be responsible for identifying the services most needed by shippers and developing plans for delivering them. This organization is expected to serve as a link between the marketing and operating departments to ensure that service commitments to shippers match the railroad's capacity to deliver those services. In December 1998, Union Pacific reported to the Board that its operations had returned to normal levels, citing an average system train speed that had risen above 17 miles per hour for the first time since July 1997, when its service crisis began.
The railroad acknowledged that its service levels still needed improvement but maintained that its latest service measures demonstrated a recovery from its prior serious service problems.

Federal agencies as well as railroads have recently taken a number of actions aimed at addressing freight rail service issues of a broader nature than the recent service crisis in the West. These issues include the need to foresee and prevent service problems and to resolve them expeditiously when they do arise, as well as the need to expand the capacity of the railroad system to provide service. Among the actions by federal agencies are efforts by USDA and the Board to disseminate information that can help railroads, shippers, and receivers anticipate changes in transportation demand and supply, and the adoption by the Board of new procedures allowing it to more quickly authorize temporary alternative rail service for shippers affected by serious service disruptions. In addition, individual railroads have recently made efforts to improve service through changes in their customer service organizations and increased investments in infrastructure. Finally, partly at the urging of the Board, the railroad industry has acted to address some service issues. Actions include a commitment by the Class I railroads to issue weekly measures of their service performance, an agreement between Class I railroads and grain and feed shippers to resolve some service-related disputes through binding arbitration, and an agreement between Class I and smaller railroads aimed at allowing smaller railroads to play a greater role in providing service to shippers.

The rail congestion that occurred during the 1997 rail crisis in the West severely affected the movement of grain to market. This situation illustrated the need to better monitor production levels, the transportation needs of grain shippers, and the capacity of the railroads to meet those needs, so that shippers and railroads could anticipate changes in transportation demand and supply and make adjustments that could lessen the severity of such changes. To meet this need, the Board and USDA signed an agreement in May 1998 to create a Grain Logistics Task Force. This task force, made up of Board and USDA officials, was tasked with identifying and disseminating information on grain production, consumption, and transportation requirements. The task force began issuing reports in August 1998 and expects to issue them five times a year. These reports contain information on such things as expected production levels of various grains (by state), grain supplies and storage capacity, and railcar loadings and the demand for rail transportation.

To address long-term transportation issues facing the nation's agriculture sector in the 21st century, USDA also held a National Agricultural Transportation Summit in Kansas City in July 1998. This meeting provided a forum for agricultural shippers and others to express their concerns about grain marketing and demand and about railroad service quality. A significant outcome of the summit was an agreement between USDA and DOT to create a Rural Transportation Advisory Task Force. The objectives of this task force include undertaking joint outreach to users and providers of agricultural and rural transportation services to further identify transportation challenges and ways in which those challenges can be met, and considering joint research efforts and policy initiatives to address them.
While the scope of the task force's responsibilities will be broad, freight rail service to the nation's agricultural community will be a key component of its work.

At hearings held by the Board in April 1998 to review issues concerning rail access and competition, shippers complained about a number of service problems, including the difficulty of seeking relief from serious service disruptions through the Board's existing procedures. In response, the Board adopted new procedures in December 1998 that provide temporary relief from serious service problems, through service from an alternative rail carrier, more quickly. Shippers and smaller railroads can seek temporary alternative service in two ways: (1) through an 8-day evidentiary process for requesting short-term emergency relief for up to 270 days or (2) through a 45-day evidentiary process for requesting longer-term relief for serious, though not emergency, service inadequacies. Prior to obtaining either type of relief, the petitioning shipper or railroad must discuss the service issues with the incumbent rail carrier and obtain a commitment from another rail carrier to meet the identified service needs. These expedited procedures do not require a showing that the rail carrier has engaged in anticompetitive conduct. Rather, the petitioning shipper or railroad must show a substantial, measurable deterioration or other demonstrated inadequacy in rail service over an identified period of time.

In order to better resolve service problems brought to their attention by customers, individual Class I railroads have recently taken a number of actions to improve their customer service organizations. For example, some railroads have removed their local customer service personnel from field offices and replaced them with centralized customer service centers. At these centers, service representatives either route the customer to the appropriate department at the railroad for problem resolution or handle the calls directly. As noted previously, Union Pacific Railroad expects to improve its ability to meet its customers' service expectations through the creation of its new organization that will serve as a link between its marketing and operating departments. In its attempts to improve customer service, Norfolk Southern has added yard operations, billing, and freight claim settlement to the responsibilities of its customer service center. Finally, Burlington Northern and Santa Fe Railway has instituted a Grain Operations Desk that serves as a point of contact for grain shippers throughout its rail system for obtaining information on the arrival of empty grain cars, improving the spotting of loaded cars, and improving overall communications between the railroad and its customers.

The Class I railroads have also been attempting to improve service through capital investments to improve their infrastructure and expand their capacity to provide service. Class I railroad capital expenditures in 1997 were about 31 percent higher (in constant dollars, that is, after adjusting for inflation; the sketch below illustrates this adjustment) than they were in 1990. Rail industry officials told us that these investments are important because they help relieve capacity constraints caused by the restructuring of railroad operations and the growth of traffic in recent years. Investments have included new rail yards and terminals, additional sidings and track, and additional cars and locomotives. However, these railroad representatives believe that further capital investments are needed to address service problems.
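To illustrate the constant-dollar comparison mentioned above, the following minimal sketch shows the arithmetic of deflating nominal spending to a base year before computing the change. It is ours and purely illustrative; the spending and price-index values are hypothetical placeholders, not the figures or deflator GAO used.

```python
# Illustrative only: comparing spending across years in constant
# (inflation-adjusted) dollars. All input values are hypothetical.

def to_constant_dollars(nominal, index_for_year, index_for_base_year):
    """Restate a nominal amount in base-year dollars using a price index."""
    return nominal * index_for_base_year / index_for_year

capex_1990 = 100.0   # hypothetical 1990 spending, in millions
capex_1997 = 145.0   # hypothetical 1997 spending, in millions
index_1990 = 100.0   # hypothetical price index, 1990 = 100
index_1997 = 110.0   # hypothetical price index for 1997

capex_1997_real = to_constant_dollars(capex_1997, index_1997, index_1990)
change = 100 * (capex_1997_real - capex_1990) / capex_1990
print(f"Real change, 1990 to 1997: about {change:.0f} percent")
```

With these placeholder inputs, nominal spending is 45 percent higher but real spending is only about 32 percent higher, which is the kind of adjustment behind the report's constant-dollar figure.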
Railroad officials also told us that hiring new employees is important to increase the number of train crews available.

In April 1998, following its hearings on rail access and competition issues, the Board issued a decision that called on railroads and shippers to discuss and identify solutions to a number of service-related problems. One problem the Board noted was the need for greater communication between railroads and their customers and for railroads to find a more systematic way of addressing customer concerns. Accordingly, the agency directed the railroads to establish a formal dialogue with shippers. In response, from August through November 1998, AAR held five meetings across the country, attended by the Board's chairman, between Class I railroad executives and their customers to discuss service issues. At these meetings, the railroads introduced four proposed measures of railroad service predictability and asked for feedback on their usefulness. The industry had developed these measures in July 1998 in response to customer suggestions that such measures were needed. The industry maintains that these indicators will reflect the general health of each railroad and will provide an early warning of developing operational problems. The Class I railroads began making these measures available on the Internet in January 1999 and plan to update them weekly. In addition, AAR held a "customer service symposium" in March 1999 to facilitate further dialogue with shippers on aspects of service such as shipment tracking and problem resolution. Although many shippers have welcomed these efforts, some have expressed skepticism about their impact on broader transportation issues. For example, in November 1998, 27 shipper associations sent a letter to the Board noting that, while they welcomed the railroads' efforts to improve service predictability, the meetings had not addressed shipper concerns regarding systemic issues such as the lack of competitive rail alternatives and the effectiveness of available regulatory remedies.

Shippers with specific complaints regarding rail service may seek a resolution of the problem through the Board's formal complaint adjudication process. However, in order to establish an alternative, private sector process for resolving disputes between agricultural shippers and rail carriers, the National Grain and Feed Association reached an agreement with the Class I railroads and AAR in August 1998 that provides for compulsory, binding arbitration—as well as nonbinding mediation—to resolve specific types of disputes. Although this initiative was not specifically called for by the Board, the Board noted that it is consistent with its preference that private parties resolve disputes without Board involvement and the litigation it entails. The agreement covers a wide range of grain and feed products and such disputes as the misrouting of loaded railcars, disputes arising from contracts, and disputes involving the application of rules governing car guarantee programs. Parties agreeing to use this arbitration process are not obligated to arbitrate claims that exceed $200,000. Officials from one Class I railroad we spoke with likened the agreement to a small claims court for handling small rate and service problems; it is not designed to handle multimillion dollar cases.

The role of non-Class I railroads in providing freight service has been another issue of concern.
These railroads, as well as shippers, have expressed concerns regarding obstacles, such as inadequate railcar supply and a lack of alternative routings, that prevent small railroads from expanding their business and providing increased service options to their customers. In its April 1998 decision, the Board directed short line and regional railroads (collectively called small railroads) and Class I railroads to complete discussions they had begun on these problems. In September 1998, the American Short Line and Regional Railroad Association and AAR announced that they had reached agreement on provisions aimed at giving short line and regional railroads access to new routing arrangements to develop new business. The agreement also contains guidelines for how certain fees and rates charged by Class I railroads to provide service to small railroads will be set and how revenue will be divided between Class I and smaller railroads. As part of the agreement, the railroads agreed to submit disputes regarding these provisions to binding arbitration. The president of the American Short Line and Regional Railroad Association described the agreement as a "framework of partnership and growth for years to come." In a survey conducted by the association at the end of 1998, executives of small railroads were also optimistic but cautioned that the implementation of the agreement depended on cooperation by the Class I railroads.

While the actions described above have addressed some service-related issues, some shippers remain concerned about the systemic issue of increasing consolidation within the railroad industry. They complain that this consolidation has reduced competition within the industry, leading to a situation in which many shippers are without competitive rail alternatives and must pay higher rates for inadequate service. The divergent views held by railroads and shippers on this issue make it much more difficult to address than the issues described previously.

The Board is authorized to impose remedies giving shippers access to more routing options—alternative through routes, reciprocal switching, and terminal trackage rights—on a permanent basis. However, under the Board's competitive access regulations, the shipper must demonstrate that its incumbent rail carrier has engaged in anticompetitive conduct. Specifically, the shipper must show that the carrier has used its market power to extract unreasonable terms or, because of its monopoly position, has disregarded the shipper's needs by providing inadequate service. Some shippers have complained that this requirement is too difficult to meet and that, as a result, the Board has not imposed competitive routing options where shippers believe such options are needed. Some shippers consider the requirement to demonstrate anticompetitive conduct to be the most problematic aspect of the Board's interpretation of its statutory authority on this issue, and they believe that eliminating this requirement is essential. However, the railroads believe that a demonstration of anticompetitive conduct is a necessary prerequisite to the imposition of a competitive routing option. Railroads cite concerns that increased competition imposed through regulation would undermine the industry's ability to cover its high fixed costs and earn adequate returns.

In its April 1998 decision regarding rail access and competition issues, the Board stated that it would consider whether to revise its competitive access rules.
However, the Board directed that, first, railroads should arrange meetings with a broad range of shipper interests, under the supervision of an administrative law judge, to examine the issue. In these meetings, shippers and railroads were to try to mutually identify appropriate changes to the Board's rules that would facilitate greater access to competitive rail alternatives where needed. In response, shippers and railroads held discussions in May and June 1998 on proposed revisions to these rules but, because of widely divergent views on the topic, could not come to any agreement.

In its December 1998 report to Members of Congress on rail access and competition issues, the Board declined to initiate further action on this issue, pointing to its adoption of the new rules, described previously, that allow shippers temporary access to alternative routing options during periods of poor service. In response to the impasse between the representatives of railroads and shippers, the Board observed that the competitive access issue raises basic policy questions that are more appropriately resolved by the Congress. These questions include the appropriate role of competition, differential pricing, and how railroads earn revenues and structure their services. The Board noted that this issue is complex and that it is unclear how changes in its rules pertaining to competitive routing options would affect the nation's rail system and the level of service the system provides. In its December 1998 decision in the Houston/Gulf Coast oversight proceeding, the Board recognized the possibility that opening up access could fundamentally change the nation's rail system, possibly benefiting some shippers with high-volume traffic while reducing investment elsewhere in the system and ultimately reducing or eliminating service for small, lower-volume shippers in rural areas. Board officials noted that many small, low-volume shippers have already lost service options as larger railroads shed their low-density and otherwise unprofitable lines.

Fundamental differences exist between shippers and railroads on the issue of mandating additional competition in the railroad industry. If the Congress decides to address this issue, it will need to weigh the potential benefits of increased competition against the potential financial and other effects on the railroad industry. In deliberating, the Congress will need to consider such things as the potential impacts of proposed changes on shipper routing options and railroad service levels, as well as on the rail system as a whole, including railroad revenues, infrastructure investment, capacity, and operations.

In commenting on a draft of this report, the Board suggested that we modify our characterization of the 1997 service problems in the West to make clear that these problems were not the result of the Union Pacific/Southern Pacific merger and that implementation of the merger helped solve the problems. In addition, the Board suggested changes to present a more complete and precise portrayal of both its October 1997 emergency service order in response to these service problems and its December 1998 decision in the Houston/Gulf Coast oversight proceeding. Finally, the Board suggested we expand our discussion of its assessment of the possible impacts of providing "open access" throughout the nation's rail system.
In response to these comments, we revised our description of the service problems in the West to eliminate the impression that these problems were caused by the Union Pacific/Southern Pacific merger; we revised the report to provide a more complete discussion of the Board’s emergency service order and decision in the Houston/Gulf Coast oversight proceeding; and we added material to the report discussing the Board’s views on the potential impacts of implementing railroad open access.
Pursuant to a congressional request, GAO provided information on: (1) the environment within which railroad rates have been set since 1990; (2) how railroad rates have changed since 1990; (3) how railroad service quality has changed since 1990; and (4) actions taken by the Surface Transportation Board and others to address railroad service quality problems. GAO noted that: (1) the environment in which railroads set their rates has been influenced by ongoing industry consolidation, competitive conditions, and railroads' financial health; (2) as a result of mergers, bankruptcies, and the redefinition of what constitutes a major railroad, the number of independent Class I railroad systems has been reduced from 30 in 1976 to 9 in early 1999, with the 5 largest Class I railroads accounting for 94 percent of industry operating revenue; (3) this increased concentration has raised concerns about potential abuse of market power in some areas due to railroads' use of market-based pricing; (4) under market-based pricing, rail rates in markets with less effective competition may be higher than in markets that have greater competition from railroads or other modes of transportation; (5) railroads' financial health has also improved since 1990; (6) however, despite these improvements, the Board has determined that most Class I railroads are revenue inadequate because they do not generate enough revenue to cover the industry's cost of capital; (7) although such determinations are sometimes controversial, revenue inadequacy affects the ability of a railroad to attract or retain capital and remain financially viable; (8) railroad rates have generally decreased since 1990; (9) the decrease has not been uniform, and in some cases, rail rates have stayed the same as, or are higher than, they were in 1990; (10) this was particularly true on selected long distance rail shipments of wheat from northern plains states like Montana and North Dakota to west coast destinations; (11) rail routes with effective competitive alternatives--either from railroads or from trucks and barges--experienced greater decreases in rail rates; (12) as the rail industry has consolidated, shippers have complained that service quality has deteriorated; (13) shippers' complaints have included a lack of railcars when and where they were needed and inconsistent pickup and delivery of cars; (14) roughly 60 percent of the coal, grain, chemicals, and plastics shippers responding to GAO's survey said that their service was somewhat or much worse in 1997 than it was in 1990; (15) the overall quality of rail service cannot be measured; (16) federal agencies and railroads have taken a number of actions to address rail service problems; and (17) although these actions are expected to yield benefits, they do not address some shippers' belief that greater competition in the rail industry is needed to improve service.
Since 1992, we have reported that the government could pay hundreds of millions of dollars to and on behalf of DOD contractors for cleanup resulting from their operations. In October 1992, we reported that DOD reimburses contractors for cleanup expenses at their private property in different ways, with wide variances in reimbursement decisions and in investigations into possible wrongdoing by contractors. In July 1994, we reported that DOD had also incurred cleanup expenses in cases where contractors and other private parties were involved in contamination of government property and that DOD had inconsistent policies and practices for recovering costs from other responsible parties. In both reports, we recommended that the Secretary of Defense provide guidance to resolve the disparities.

One of the principal laws governing responsibility for hazardous waste cleanup at federal facilities is the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980, as amended (42 U.S.C. 9601). This act, commonly known as Superfund, holds owners, operators, and other responsible parties, including federal agencies, liable for cleanup of past contamination. Cleanup at federal facilities is also subject to the legal requirements of the Resource Conservation and Recovery Act of 1976, as amended (42 U.S.C. 6901), and applicable state laws. DOD's Defense Environmental Restoration Program addresses the identification, investigation, and cleanup of past contamination on DOD installations. Funding for the cleanup has come primarily through the Defense Environmental Restoration Account (DERA). The individual services and the Defense Logistics Agency (DLA) are responsible for cleaning up their respective installations, while the Army Corps of Engineers is responsible for cleaning up formerly used DOD sites.

In the absence of sufficient DOD guidance, the services have taken different approaches in asking parties associated with government-owned, contractor-operated (GOCO) facilities to share the cost of cleaning up contaminated sites, and wide disparities still remain. Since our 1992 report, the Air Force has issued guidance for dealing with other responsible parties at its facilities. The Air Force, the Navy, and the Army Corps of Engineers have policies or guidance in place to encourage cost sharing with contractor operators and other responsible parties, while the Army itself and DLA generally do not. Except for the Navy, each service has obtained some cost sharing with other responsible parties at GOCO facilities. However, only the Air Force and the Army Corps of Engineers have achieved cost sharing with contractors that operated government-owned facilities.

The Army has no servicewide policy regarding cleanup cost sharing. However, in a series of actions, the Secretary of the Army approved indemnification of ammunition plant operators from financial liability for environmental cleanup. Army officials state that there have been no actual payments to operators under indemnification because the Army pays for the cleanups directly out of its own funds. In fiscal year 1994, Army ammunition plants accounted for $3.1 billion (86 percent) of the $3.6 billion in past and future cleanup costs reported by DOD. Pursuant to the Secretary's approval, the Army authorized the inclusion of Public Law 85-804 indemnification clauses in its contracts with ammunition plant operators. These clauses indemnified the contractors against unusually hazardous risks, including environmental releases.
According to Army officials, contingency clauses in the contracts also protect ammunition plant operators against environmental liability. The Army has not negotiated any cost-sharing agreements with contractor operators at the ammunition plants. However, the Army negotiated a cost-sharing settlement with a contractor who produced ammunition for the Army as a tenant at one plant we visited. Also, as discussed in our July 1994 report, the Army Corps of Engineers negotiated a cost-sharing settlement with contractors and other private parties at formerly used defense sites. Since 1989, Navy policy has required major command officials to negotiate cost-sharing arrangements with contractors as soon as the need for cleanup is identified. The policy requires that past and current GOCO contractors pay "any and all" cleanup costs associated with their operation of Navy facilities. However, the Navy has not initiated timely requests for cost sharing or followed up. For example, although the Navy's 1989 policy required officials to begin negotiating cost-sharing arrangements at the two facilities we visited, the Navy did not send a letter requesting contractor participation in cleanup at the Allegany Ballistics Laboratory in West Virginia until 1994 and, as of March 6, 1997, had not begun the required negotiations with the contractor at the Naval Industrial Reserve Ordnance Plant in Fridley, Minnesota. Neither operator plans to pay any cleanup costs involving Navy property. Under the facilities-use contracts at these locations, GOCO contractors provide goods and services to the Navy, and the service does not directly manage their operations. Navy documents show that operational decisions, including those involving waste disposal, are made by the contractor. To date, the Navy has taken responsibility for cleanup costs. Navy officials said the Navy intends to clean up the facilities first and then decide whether to pursue contractors to recover a share of the costs. Cost-recovery decisions are to be based on evidence, litigation risk, the contractor's level of responsibility, and other factors. However, Navy officials stated that the Navy is reluctant to pursue GOCO contractors because of concerns that they will pass costs back to the government as an allowable expense or through overhead charges. They also said that a divisive liability issue could slow cleanup operations and hurt relations between the Navy and its contractors. In December 1995, the Air Force General Counsel's office developed guidance recognizing that past and present contractors, as generators of contaminants and operators at federal facilities, share the liability for environmental contamination. The guidance calls for sharing remediation costs based on the facts of each situation. In commenting on this guidance, Air Force officials stated that the Air Force approved a practice similar to the Navy policy for cost sharing. Air Force officials stated that the practice is intended to share cleanup costs equally with operators unless conditions warrant otherwise. At the two locations we visited, the Air Force was paying all cleanup costs but may later pursue other parties. However, at two other locations, the Air Force had agreed with the facility operators to share costs. According to Air Force officials, the settlement agreement prohibits the contractors from charging their environmental cleanup costs back to a government contract.
Air Force officials also stated that the absence of federal guidance governing how to treat environmental cleanup costs, together with inconsistent treatments and allowances throughout DOD, has slowed cost-sharing negotiations with contractors. DLA's policy requires current operating contractors to pay cleanup costs in cases of wrongdoing but recovers the costs of past contamination through a surcharge paid by fuel customers. However, DLA does not have a specific policy for its fuel supply centers to address those cases in which parties other than contractors, such as lessees or tenants, are responsible for contamination. DLA has considered developing such cost-sharing guidance but had not done so as of March 1997. At the Norwalk center we visited, officials are negotiating with a lessee to pay most of the facility's cleanup costs. However, the facility did not gather sufficient evidence to determine whether to seek recovery from another party for $10 million in environmental damage at an off-post location. Even though we recommended in 1992 and again in 1994 that DOD issue guidance to resolve disparities between DLA's and the military services' cleanup policies and procedures, DOD has not done so. In a letter dated January 9, 1995, responding to our 1994 report, the Deputy Under Secretary of Defense (Environmental Security) stated that DOD's policy for cost sharing is to comply with the Federal Acquisition Regulation, which provides for the allowability of costs incurred by government contractors. However, the regulation applies only to costs incurred by contractors. It does not prescribe an approach for seeking contractor contributions to DOD cleanup efforts. The policies and practices for seeking contractor participation in cleanup efforts continue to vary widely among the services and DLA. Some variances, such as DLA's policy to pay for old contamination (not from current operations) through a surcharge to customers, may be justified where no specific evidence identifies the responsible party or when other case-specific factors, such as frequent changes in contractors, may preclude assigning responsibility. However, we continue to believe that uniform guidance from DOD would help resolve disparities among DLA and service cleanup policies and practices. Following our July 1994 report that cleanup at GOCO plants would take longer and cost far more than DOD's estimate, DOD increased its fiscal year 1993 estimate of $1.4 billion to $3.6 billion in fiscal year 1994. For example, in fiscal year 1993, DOD estimated the Twin Cities Army Ammunition Plant would be cleaned up by the year 2000 at a total cost of $154 million, which was not consistent with supporting data showing costs of about $600 million through 2052. DOD's fiscal year 1994 report was more consistent with supporting data, showing estimated completion by 2080 at a total cost of about $773.2 million. Although DOD's report to Congress and service estimates for our case studies were relatively close in total, table 1 shows significant differences for individual locations for fiscal year 1994. Some of the reasons for these cost differences include different estimating methodologies, an input error, and the inclusion of more accurate future cost estimates.
In addition, cleanup expenses not identified in either DOD or service component estimates included:
- $120 million to decontaminate and dispose of the chemical plant at the Newport Army Ammunition Plant;
- $6 million in cleanup costs for uranium-tipped bullets at the Lake City Army Ammunition Plant;
- $4 million paid in 1983 and 1984 for cleanup costs at Air Force Plant 4 before DERA funds were available;
- $836,000 already spent on a cleanup study at the Navy's Allegany Ballistics Laboratory; and
- money paid to the Environmental Protection Agency (EPA) and state regulatory agencies for overseeing the cleanup at several sites (as an example, at the Fridley Naval Industrial Reserve Ordnance Plant, $481,000 was paid to EPA and $106,000 was paid to the state of Minnesota).
DOD's report for fiscal year 1995, dated May 15, 1996, showed that total cleanup cost estimates for GOCO facilities decreased from $3.6 billion to $3.3 billion, but it did not include cleanup costs for our two DLA case studies or, with one exception, any of the 21 DLA facilities reflected in prior DOD reports. According to DOD officials, these facilities were excluded from the latest report because customer surcharges rather than DERA funds paid for cleanup costs. DLA cleanup costs totaled $101 million in DOD's fiscal year 1994 report. We recognize that cleanup estimates for facilities will be preliminary until DOD fully characterizes contaminants, selects a remedy, and finances the remedy. However, most of the cost differences noted in our case studies can be accounted for given the stage of cleanup in each case. Furthermore, excluding environmental cleanup costs from DOD's restoration program report because the funding source is other than DERA can be misleading. For example, the DLA cleanups excluded from DOD's report for fiscal year 1995 are, except for funding source, similar to cleanups still reported for the military services. Also, DOD's report still includes costs for cleanups totaling $624 million in 1995 that were funded by its base realignment and closure account rather than DERA. Finally, the services' stated plans to later obtain cost sharing from other responsible parties require that complete cost data be readily available. To address the inconsistencies in cost-sharing approaches and the potential for disparate treatment of other responsible parties described in this and past reports, we recommend that the Secretary of Defense issue guidance to DOD components to resolve current disparities and to promote future consistent treatment of all parties in cost recovery decisions. So that sufficient data will be available for cost-sharing negotiations and program oversight, we also recommend that the Secretary direct the military services and DLA to:
- identify, to the extent it has not already been done, whether parties other than the government were involved with any contamination, as part of environmental cleanup preliminary assessments at GOCO facilities;
- obtain all relevant data regarding other responsible parties identified, whether or not wrongdoing is an issue;
- gather and maintain the most timely and accurate DOD cost data available in DLA, military service, and other agencies' records; and
- provide consistent estimates, including all cleanup costs, for DOD's environmental reports to Congress, regardless of the source of funds.
In commenting on a draft of this report, DOD stated that it was generally complying with all five of our recommendations under existing practices.
However, as detailed below, DOD has not fully addressed the issues and specific cases discussed in this report, and we continue to believe that DOD needs to take additional actions on each of our recommendations. Regarding the need for DOD guidance on the recovery of cleanup costs, DOD stated that its policy is to comply with the Federal Acquisition Regulation and that the Defense Contract Audit Agency issued audit guidance for field auditors in 1992 on how to interpret the regulation. However, as we stated in this and prior reports, federal acquisition laws, regulations, and policies do not provide specific guidance to decision-makers on how to treat environmental cleanup costs. In the absence of guidance that explicitly addresses the sharing of DOD cleanup costs, the services and DLA have taken different approaches to deciding whether and when to seek contributions from contractors and other responsible parties. We continue to believe that a DOD-wide policy is needed to address these disparities and promote consistent treatment of all parties in the recovery of DOD-incurred cleanup costs. DOD stated that it is already identifying parties involved with contamination and obtaining all relevant data for other responsible parties, in line with our second and third recommendations. However, our case studies indicate that searches for potentially responsible parties were not done and that the services had not obtained all relevant information. DOD's comments did not identify what actions it had taken to resolve such cases or to address the Air Force's concerns about the lack of DOD guidance. Thus, we continue to believe that more should be done in this area. DOD indicated that it did not believe it should gather costs incurred by all non-DOD organizations. We agree and modified our recommendation to focus primarily on DOD costs. Nevertheless, if another federal agency has pertinent information on added DOD cleanup costs, as we found in each case study, efforts should be made to gather and maintain that information. DOD stated that its report to Congress is not intended to represent all expenses associated with other funding sources, with the exception of the Base Closure and Realignment Account. DOD also stated that there is no value added to reconstructing past non-DERA expenses. We agree that it may not be worthwhile to reconstruct minor costs incurred prior to the availability of DERA funds. However, excluding all cleanup expenses of an entire agency such as DLA simply because the money to pay those expenses came from a different federal account results in reports that materially understate federal cleanup expenses. It may also lead to omissions by the military services where they fund cleanups from business operating funds. The use of business operating funds for cleanup is already prevalent in the Navy. Finally, complete cost data are necessary for the military services' stated plans to obtain cost sharing from other responsible parties. DOD's comments are reprinted in their entirety in appendix V. The high cleanup costs, coupled with inconsistent policies and practices for recovering costs from other parties, can lead to adverse budget consequences. Because DOD's comments indicate that it does not plan to take any actions to address the problems set forth in this report, Congress may wish to call upon the Secretary of Defense to issue guidance to address inconsistencies in cost-sharing approaches and to promote future consistent treatment of all parties in cost recovery decisions.
We conducted our work at the Washington, D.C., area headquarters offices of DOD, DLA, and the military services and at selected commands and field installations. The Washington, D.C., area commands included the Naval Air Systems Command, Naval Sea Systems Command, and the Defense Fuel Supply Center. We also visited the Army Environmental Center in Aberdeen, Maryland; the Air Force Acquisition Environmental Management Directorate in Dayton, Ohio; and the Naval Facilities Engineering Command Southern Division in Charleston, South Carolina. At headquarters, command, and field locations, we interviewed DOD, contractor, state agency, and EPA officials. To assess consistency of cost-sharing practices, we compared headquarters policies and field practices at case study locations identified below. To examine cleanup cost estimates, we obtained data on DOD environmental cleanup program status and costs, noted differences among organizations, and examined supporting documents, but did not independently determine actual costs. We used a case study methodology at selected field facilities. We visited nine GOCO facilities to determine the status and cost of cleanup, and the extent of cost sharing for environmental cleanup at the facilities. We selected facilities with larger total cleanup costs, managed by each of the military departments and DLA. We determined whether site specific data identified all known costs and compared the data to military service records and DOD reports. We reviewed cost-sharing practices across the locations visited, but did not independently evaluate liability issues or the merits of cost-sharing decisions in individual cases. The nine facilities were:
- Lake City Army Ammunition Plant, Missouri;
- Newport Army Ammunition Plant, Indiana;
- Twin Cities Army Ammunition Plant, Minnesota;
- Air Force Plant 4, Fort Worth, Texas;
- Air Force Plant 44, Tucson, Arizona;
- Allegany Ballistics Laboratory, West Virginia;
- Naval Industrial Reserve Ordnance Plant, Fridley, Minnesota;
- Defense Fuel Support Point Norwalk, California; and
- Defense Fuel Support Point Ozol, California.
We performed our work from June 1995 through March 1997 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Directors of DLA and the Office of Management and Budget. We will also make copies available to others upon request. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix VI.

We visited three Army ammunition plants—one active plant, two inactive—still owned by the Army. The Lake City Army Ammunition Plant, Independence, Missouri, was active. The Newport Army Ammunition Plant, Newport, Indiana, and the Twin Cities Army Ammunition Plant, Arden Hills, Minnesota, no longer produce ammunition. The Army owns a total of 27 government-owned, contractor-operated (GOCO) plants, of which 24 are ammunition plants. Seven of the 24 are currently active. The Army has no overall policy for sharing costs with other parties and does not plan to pursue current or past GOCO operators to share environmental cleanup costs at the case study facilities.
However, at one plant we visited, the Army negotiated cost-sharing arrangements with contractors who are not considered operators and is seeking reimbursement from the operator's insurance company. According to Army officials, the ammunition plant operators are protected against environmental liability by protective clauses in their contracts, such as the "Responsibility of Contractor - Contingencies" clause, and by an indemnification clause, which was recently added. The Secretary of the Army authorized the indemnification clauses under Public Law 85-804 in a series of memoranda. For the three locations we visited, we found relevant memoranda dated May 1985, November 1990, and November 1992. Army officials stated that the indemnification provision would allow ammunition plant operators to claim recovery of cleanup costs but that such a claim has not been made because the Army has assumed all cleanup costs at its ammunition plants. Army officials said that the Army, as the landowner, should be responsible for cleaning up the property. They stated that it would be inappropriate to hold former contractors liable for the cleanup costs because contamination resulted not from bad faith or willful misconduct but from industrial practices that used to be considered acceptable. Army officials stated that indemnification of ammunition plant contractors was justified by the unusually high risk they encountered in handling explosives and reactive and hazardous materials. Despite the Army's view, a finding of wrongdoing is not a required condition for cost sharing under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). Owners and operators at private facilities have not been relieved of liability on that basis. Although the Army has not achieved cost sharing by its ammunition plant operators, it has pursued other responsible parties. For example, at the Twin Cities facility, the Army is attempting to recover more than $10 million from one GOCO operator's insurance company. The Army did negotiate a settlement with a contractor who was a tenant at this facility. This contractor—like the GOCO operator—produced ammunition for the Army for decades but did so under a "facility contract" that did not indemnify the tenant. Under an agreement, the tenant contractor must pay all cleanup costs associated with its production and a percentage of the cleanup costs for areas in which the source of contamination is unclear. The Twin Cities Army Ammunition Plant is an inactive facility that occupies about 2,370 acres in Arden Hills, Minnesota. Established in 1941, the plant produced ammunition intermittently until 1976. For all but the last of the plant's 55 years, the Federal Cartridge Company was its only operating contractor. Alliant Techsystems, a long-standing tenant at the plant, took over as the GOCO operator in November 1995. Alliant, formerly Honeywell, had been a tenant at the Twin Cities plant since the late 1950s, manufacturing small-caliber ammunition for the Department of Defense (DOD). Also, the 3M Company, as a lessee, conducted commercial production activities on the facility between 1950 and 1993. The production activities at the Twin Cities facility generated hazardous waste that contaminated the soil, structures, and groundwater, including the drinking water for the facility and the city of New Brighton, Minnesota. Soil was contaminated with explosives, metals, polychlorinated biphenyls, and volatile organic compounds.
Plant property occupied by the lessee was contaminated by low-level radioactivity. Groundwater was contaminated with trichloroethylene, and the contamination had migrated off the site. The Twin Cities plant was placed on the Environmental Protection Agency's (EPA) National Priorities List in 1983 as part of the New Brighton/Arden Hills Superfund site, an approximately 36-square-mile site encompassing the plant and the contaminated groundwater. The Superfund site was divided into three main units. Two of the units contain distinct plumes of contaminated groundwater, known respectively as the north plume and the south plume. The third unit consists of contaminated soils and groundwater within the plant's boundary. Production waste from the plant also contaminated three privately owned disposal sites to which the operator sent the waste. According to a contractor official, the company had complied with the standards of the time. Also, between 1959 and 1962, over 1,400 drums of waste from classified munitions were disposed of in Lake Superior, as were 500 tons of 50-caliber bullets in 1945. Records about the classified waste are not available, but Army officials said that the waste had been packed into 55-gallon drums, transported over land under Army escort to Duluth, Minnesota, and dumped into the lake from barges. At the time of our review, the state pollution agency and the Corps of Engineers had not yet decided whether an Army investigation of the 50-caliber bullet disposal was necessary. Investigations at the Twin Cities plant began after the 1981 discovery of contamination in the drinking water supply. Six interim remedial actions and three removal actions have been completed at the facility. As of December 1996, the final remedy to pump and treat groundwater from the south plume is in place, and the final remedy for the north plume has been implemented. The remedy for cleaning up contamination within the boundary of the facility has been proposed and is under evaluation. DOD and Army cleanup cost estimates for fiscal year 1994 ($773.2 million and $810.9 million, respectively) were much closer than in 1993 ($154 million according to DOD, versus about $600 million according to installation data). DOD's May 15, 1996, report for fiscal year 1995 increased the total past and future cleanup cost estimate to $828.2 million. Neither DOD's report nor the Army's estimate included all known cleanup costs for the Twin Cities plant; we identified at least an additional $8.2 million in expenditures. Examples where either Defense Environmental Restoration Account (DERA) funds used for cleanup at the Twin Cities plant were not designated as such or where non-DERA funds were used for cleanup at the plant but not reported include more than $560,000 paid to regulators—$125,000 for EPA investigations at the Lake Superior disposal site and about $435,650 for state regulatory oversight at the plant—and $398,000 expended by the Army Corps of Engineers for work at the Lake Superior site. Expenditures from Army operations funds and judgment funds that were not in DOD's and the Army's estimates include the following:
- As a result of a toxic tort settlement related to contaminated drinking water at the site, the Army reimbursed the Federal Cartridge Company $3.7 million for the company's share of the litigation settlement.
- In the same case, the Army paid its own $1.3 million share out of the Department of Justice Judgment Fund.
- The Army reimbursed Federal Cartridge $1.9 million for disposal-related cleanup costs.
- The U.S. government paid $70,000 on behalf of all other federal potentially responsible parties for cleanup-related expenses at a disposal site in Oak Grove, Minnesota.
- The Army paid an additional $234,292 for attorney time relating to cleanup.
The Army does not plan to pursue Federal Cartridge, the former operator, to share environmental cleanup costs at this facility. However, both Alliant and 3M, which also produced at the plant, are being held liable for contamination associated with their activities and have agreed to share the cleanup costs. Federal Cartridge was responsible for manufacturing and testing ammunition, disposing of production waste, and maintaining the facility. Beginning in the early 1980s, the company was also responsible for performing the preliminary environmental damage assessments and engineering evaluations and analyses. At peak production in 1943, according to Army officials, almost all of the 26,000 employees who worked at the plant were contractor personnel. By 1995, the total had decreased to about 1,000 employees, all but about 19 of whom were contractor personnel. The Army is assuming costs not already covered by the other two private companies, and Federal Cartridge believes it has no liability for cleanup costs. The reasons given by the Army are the Secretary of the Army's decision to grant the contractor indemnification under Public Law 85-804 and contract clauses that address contractor liability. In addition, Federal Cartridge Company officials stated that disposals were not due to any company wrongdoing, either willful or knowing, and were made at state-approved landfills under the review and approval of the Army. Also, they said that the Army did not disapprove of company practices, which were considered state-of-the-art. However, Army officials have participated in pursuing Federal Cartridge's insurance company to recover cleanup costs associated with the company's operations at the plant. The Army asked the Justice Department to help it recover about $10.2 million, plus interest, that it reimbursed Federal Cartridge for cleanup-related costs. Negotiations are underway. Both of the companies that operated on plant property as tenant and lessee are sharing in cleanup costs. Alliant produced ammunition for the Army as a tenant using government facilities, but Alliant's facility contract did not contain indemnification provisions. In 1995, an attorney for Alliant estimated that the company had paid over $10 million since the 1985 apportionment agreement, under which Alliant is to pay the cleanup costs at the south plume and the Army is responsible for costs at the north plume. The cost of cleaning up groundwater where the origin of contamination is unclear will be split between the parties, with the Army paying 80 percent and Alliant 20 percent, as sketched below. The 3M Company produced for the commercial market under a lease with the Army. The company is solely responsible for cleanup of radioactive contamination of property on the site. The company has cleaned up the contaminated buildings and soils, but the Army has not yet examined and approved 3M's cleanup actions.
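The apportionment terms just described lend themselves to a simple illustration. The minimal sketch below is ours, not the Army's: the allocation rule (south plume to Alliant, north plume to the Army, unclear-origin groundwater split 80/20 between the Army and Alliant) is as reported above, while the function name and dollar amounts are hypothetical.

```python
# Illustrative sketch of the Twin Cities apportionment terms reported above.
# The allocation rule is from the report; the dollar figures are hypothetical.
def apportion_twin_cities(south_plume: float, north_plume: float,
                          unclear_origin: float) -> dict[str, float]:
    """Allocate cleanup costs (in dollars) between Alliant and the Army."""
    return {
        "Alliant": south_plume + 0.20 * unclear_origin,  # south plume + 20% of unclear
        "Army": north_plume + 0.80 * unclear_origin,     # north plume + 80% of unclear
    }

# Hypothetical example: $4 million south plume, $5 million north plume,
# and $1 million of unclear origin.
print(apportion_twin_cities(4_000_000, 5_000_000, 1_000_000))
# {'Alliant': 4200000.0, 'Army': 5800000.0}
```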
The Lake City Army Ammunition Plant is the Army's only installation that now manufactures small-caliber ammunition. The plant, which occupies about 4,000 acres in a rural area near Independence, Missouri, began operating in 1941. Remington Arms operated the facility until 1985, when the current contractor, the Olin Corporation, took over. Manufacturing operations at the Lake City plant generated hazardous wastes. Soil has been contaminated with explosives; volatile and semivolatile organic compounds; oil and grease; low-level radioactive materials; and such metals as arsenic, lead, mercury, and zinc. Groundwater was contaminated with dichloroethylene, lead, and vinyl chloride. Because these contaminants exceed levels set by EPA, groundwater from wells on the installation must be treated before it can be consumed. For example, the EPA maximum contaminant level for vinyl chloride is 2 parts per billion, but the drinking water aquifer at the plant contained 8,000 parts per billion—4,000 times the limit. According to test results and studies, contamination has not yet migrated off the site but will do so eventually unless preventive action is taken. Because the site is located in a rural, sparsely populated area, no immediate threat exists to the groundwater of surrounding communities. The Lake City plant was placed on EPA's National Priorities List in 1987. The contaminated areas at the plant are divided into four units. Preliminary assessments and site inspections were conducted in 1979. EPA and the Missouri Department of Natural Resources approved the remedial investigation for one unit in March 1995. Another was completed in May 1995 but awaits EPA and Missouri approval. The Army is not proceeding with remedial investigations for the other two units until it receives comments from EPA and the state of Missouri on the May 1995 investigation report and on a feasibility study submitted in June 1995 for the first unit. The proposed corrective actions mainly involve groundwater treatment and soil excavation. Both DOD and Army estimates increased from fiscal year 1993 to 1994. The DOD estimate increased from $52 million to $339.2 million, while the Army estimate increased from $24.8 million to $168.1 million. Army officials attributed the increase to the inclusion of long-term cleanup costs beyond 2001; earlier estimates considered only a 7-year budget cycle. DOD's estimate was more than double what Army officials at the plant reported to us for the same time frame. Lake City officials believed their estimate was accurate, and they did not know why DOD's estimate was so much higher. According to a DOD official, it might have been due to a data entry error. The difference was generally resolved with DOD's May 15, 1996, report for fiscal year 1995, which updated the figure to $139.4 million. Lake City officials stated that it is difficult to accurately project the cost of cleanup until options have been selected and approved by EPA and the state regulatory agencies. We found about $22.9 million in costs that were not included in either DOD or Lake City estimates. Remediation may continue beyond the estimated completion date of 2024, increasing costs by $16.8 million: the feasibility study for one operable unit stated that the contaminated water should be pumped, treated, and monitored for at least 50 years, or until 2048, and the Army's estimated cost for such remedial action was about $700,000 a year, including $500,000 for pumping and treating the water and $200,000 for monitoring. The estimates also excluded an estimated $6 million to clean up low-level radioactive contamination caused by ammunition made from depleted uranium; that cost was excluded because the cleanup will be conducted under the direction of the Nuclear Regulatory Commission. Finally, the state was paid $91,000 for oversight costs.
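These Lake City figures reconcile arithmetically. The short sketch below is our own check, using only the amounts stated in this report; the totals round to the $16.8 million and $22.9 million cited above.

```python
# Our arithmetic check of the Lake City figures cited above (all amounts in dollars).
annual_pump_and_treat = 500_000 + 200_000     # $700,000 a year per the Army estimate

# Remediation running to 2048 rather than the estimated 2024 adds 24 years.
extra_years = 2048 - 2024
extended_remediation = extra_years * annual_pump_and_treat  # 24 * 700,000 = $16.8 million

excluded_costs = {
    "extended pump-and-treat (2024-2048)": extended_remediation,
    "depleted-uranium cleanup (NRC-directed)": 6_000_000,
    "state oversight": 91_000,
}
print(f"${sum(excluded_costs.values()):,}")   # $22,891,000 -- about the $22.9 million cited
```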
The use of a residential rather than an industrial cleanup standard could also increase the cost of cleaning one area by about $23.6 million, from $5 million to $28.6 million. The cleanup standard for an industrial site assumes human exposure of 40 working hours per week, whereas a residential standard assumes continuous human exposure of 168 hours per week. The Army estimates it will cost $5 million to remediate the contamination at its Area 18 Operable Unit to the industrial standard. However, EPA and the state of Missouri believe that the residential cleanup standard should be used. The Army does not plan to pursue cost sharing by current or former operators of the Lake City plant. Olin has been the operator since 1985, and Remington operated the plant for more than 40 years. No other private parties, such as lessees, operated at the facility. Army officials said they do not plan to pursue cost sharing with Olin because of the Secretary of the Army's decision to indemnify plant operators under Public Law 85-804. Likewise, they applied this decision to Remington, Lake City's prior contractor, relieving it of liability. The Newport Chemical Facility, formerly the Newport Army Ammunition Plant, occupies about 7,000 acres in a sparsely populated rural area near Newport, Indiana. The plant, which has been inactive since 1975, currently serves as a storage facility for a nerve agent the Army plans to incinerate as part of its chemical material program. The Newport plant was established in 1941; from then until 1974, several contractors, including E.I. du Pont, FMC Corporation, Liberty Powder Corporation, and Uniroyal, Inc., produced explosives such as trinitrotoluene (TNT) and chemical agents. The current operator for the storage function is Mason & Hanger. Manufacturing operations at the Newport plant generated various hazardous wastes. Soil, groundwater, and surface water were contaminated with explosives, solvents, heavy metals, oils, and grease. Groundwater contaminated with carbon tetrachloride and trichloroethylene has not yet migrated off the site, but EPA and Army officials are concerned that it may. If contaminated groundwater reaches the plant's boundaries, it could threaten the safety of the surrounding area's drinking water. Preliminary investigations were completed in 1986. The Army identified 16 sites, 12 of which it believed required some remedial action or additional study. The Army classified the other four sites as requiring no further action, but EPA disagreed and is requiring additional testing and monitoring activities for them. The Army removed underground petroleum storage tanks and currently plans to remove other contaminants. Investigations and studies are continuing. DOD's estimate of the cleanup costs for the Newport plant was higher than the Army's. DOD's report for fiscal year 1994 put the total cost at about $55.5 million, as compared to an Army estimate of $41.5 million. Officials could not reconcile the difference but said part of it could be explained by DOD's estimated completion in 2010, versus the Army estimate of 2006. DOD's report for fiscal year 1995 increased the estimate to about $68 million, with completed cleanup still estimated for 2010. A cost not reflected in either DOD or Army data was about $120 million for a chemical plant cleanup that was excluded because that effort will be funded by the Chemical Munitions Destruction Defense Account, not DERA. Army officials stated that costs cannot be accurately estimated until more is known about the sites.
Until the contamination is known and the remediation methods are selected, the costs of remediation options can vary significantly. For example, the Army's cost estimate assumed that the service will incinerate contaminated soils, but Army officials said that soils may be cleaned up biologically through composting at about half the cost of incineration. Army officials do not plan to pursue cost sharing by the current or any past operators of the Newport plant. They said this is because of the Secretary of the Army's decision to indemnify plant operators under Public Law 85-804. We visited two active Navy GOCO manufacturing facilities: the Allegany Ballistics Laboratory, Mineral County, West Virginia, and the Naval Industrial Reserve Ordnance Plant, Fridley, Minnesota. Both facilities have been in operation since the early 1940s. The Allegany facility was operated by Hercules, Inc., until Alliant Techsystems purchased the Hercules division operating the facility and took over operations in 1995. The Fridley facility has also seen changes in ownership: Northern Pump Company, through a subsidiary, operated the facility from 1942 until FMC purchased the subsidiary in 1964. Since 1989, the Navy has had a policy stating that the government and current and former contractors share the liability and responsibility for cleaning up GOCO facilities. The current contractor is to pay all cleanup costs associated with its operation of the facility unless the operating contract contains provisions to the contrary. According to a Navy official, the Navy has the right to seek reimbursement from prior contractors for the costs it incurred for cleaning up contamination resulting from their activities. Navy officials stated that GOCO operational decisions, including those about disposal, were left to its contractors and that the Navy had little presence at its GOCOs. Contractors operated the facilities under a facilities-use contract to provide goods and services for the Navy without direct Navy management of operations. According to the Navy's cost-sharing policy, if further study and remediation are recommended after initial cleanup research, the Navy command is required to begin discussions immediately with the GOCO contractor regarding responsibility for and participation in the cleanup effort. Participation is also to be discussed prior to cleanup, including any removal or interim actions. According to Navy legal representatives, the policy provides contractors an opportunity to participate in the cleanup process as a means of reducing litigation risk—that is, a contractor that participates in the cleanup process is less likely to argue that cleanup costs were excessive or unnecessary. If the contractor declines to participate, all cleanup costs are to be identified for possible future recovery from the contractor. Despite its 1989 policy, the Navy has not initiated timely requests for contractor participation in the cleanup: it did not send a letter requesting contractor participation until 1994 at one of the two facilities we visited and has not begun the required negotiations with the contractor at the second. At both facilities we visited, some of the contamination related to production for the Navy at contractor-owned property adjacent to the government-owned sites. In one case, the contamination was on the contractor property, and in the other, it had been transferred to the Navy property. Navy officials said the Navy will likely clean up its facilities and then decide whether to seek a share of the costs from the operators.
They provided a number of explanations for not pursuing cost sharing more actively: (1) operators who help pay for the cleanup may later get reimbursed for the expenditures; (2) a divisive liability issue might drive a wedge into an otherwise productive relationship between the Navy and its contractors; (3) cost-sharing negotiations could slow the cleanup; and (4) cost recovery is easier after the cleanup is done because all costs, contamination, and responsible parties will have been identified, and the costs can then be allocated to the responsible parties based on their contributions. Since 1945, the Allegany Ballistics Laboratory has researched, developed, produced, and tested solid propellant rocket motors on about 1,600 acres in Mineral County, West Virginia, about 10 miles southwest of Cumberland, Maryland. The laboratory has been operated by Hercules, Inc., for all but 2 of its 54 years of operation. George Washington University, under contract with the Army, operated the laboratory from 1943 until 1945, when Hercules, Inc., took over operations under a Navy contract. In 1995, the laboratory's current operating contractor, Alliant Techsystems, purchased the division of Hercules that had been operating the facility. Hercules also began operating commercial businesses on and adjacent to the laboratory in 1967, when it purchased 56 acres adjoining the laboratory and built a propellant production facility. In addition to rocket development, Hercules began operating a commercial automobile testing business at the GOCO facility in 1973. According to a Navy study, no written agreement exists between the Navy and Hercules regarding the use of laboratory property for the disposal of waste generated by the adjacent Hercules-owned facility. Manufacturing operations at the laboratory, as well as disposal of contaminated waste produced at the nearby commercial plant, have generated hazardous waste. This waste contaminated soil and groundwater with trichloroethylene, explosives, and volatile and semi-volatile organic chemicals, and the laboratory was placed on the EPA National Priorities List in 1994. Navy officials do not believe the contractor's on-site automobile testing business contributed to the contamination. However, some of the contamination at the laboratory stemmed from the burning of propellant-contaminated waste from the adjacent contractor-owned production facility. Multiple studies and investigations have been performed, starting with environmental studies initiated in fiscal year 1983 that identified 11 sites and a later study in fiscal year 1986 that recommended further study at 8 sites. A subsequent assessment in fiscal year 1993 identified an additional 105 sites and recommended further action at only 30 of them. As of September 1994, the Navy reported that remedial actions should be completed by fiscal year 1998. DOD reported in March 1995 that cleanup-related operations were expected to continue to fiscal year 2010. Navy officials later stated they expect the study phase to be completed in fiscal year 2003, remedial actions to be completed by fiscal year 2010, and long-term operations to be completed in 2025. According to the officials, limited DERA funding and the unavailability of field data have delayed cleanup efforts. The Navy's cleanup cost estimates for the laboratory increased from about $18.7 million in fiscal year 1993 to $27.8 million in 1994 and $43.5 million in 1995.
DOD's estimates were about $21.2 million, $30.7 million, and $24.4 million for the respective years. Navy officials attributed the increases to an extension of the cleanup time frames and a change in the estimating methodology used. The Navy began to use a projection model in July 1994 to project future cleanup costs based on factors such as contamination type and degree of contamination. The Navy attributed the differences between the Navy and DOD estimates for fiscal year 1995 mainly to the different data used. For example, DOD's 1995 reports excluded unfunded Allegany Ballistics Laboratory requirements included by the Navy for fiscal year 1998 and beyond. In addition, the Navy estimate increased because additional investigations revealed more extensive contamination. Although DOD and Navy sources agreed on expenditures to date, we found about $987,000 in other costs, which would increase the $1.3 million reported for 1994 by 76 percent. Expenditures not reported in the above sources for the Allegany Ballistics Laboratory were (1) $836,000 that was paid through the Naval Sea Systems Command Operations and Maintenance account, as directed by congressional appropriations language, for a remedial investigation; (2) $60,000 for an initial assessment study funded by the Naval Facilities Engineering Command; (3) $45,460 provided by the U.S. Army Corps of Engineers in DERA funds to the state of West Virginia for regulatory oversight and technical assistance; and (4) $45,285 paid to EPA through the Superfund for oversight. Also, costs beyond 1994 for EPA oversight are expected to exceed $667,000. According to Navy officials, the contamination at the laboratory resulted from the contractor's operation of both the laboratory and the adjacent contractor-owned facility. The Navy sent a letter on February 22, 1994, asking that Hercules, the facility operator for more than 50 years, participate in financing the laboratory cleanup. Hercules declined to participate, saying that the Navy had assumed all responsibility for the cleanup. Hercules stated that it would also bill the Navy for cleanup-related costs incurred in managing the restoration contractor because it considers such costs to be above and beyond its normal operating costs. Navy officials agreed that their 1994 letter to Hercules was not timely but said the Navy will continue to clean up the facility and then determine whether to pursue a cost-sharing arrangement with Hercules. They said their decision to pursue Hercules will be based on such factors as evidence, litigation risk, and the level of independence of the contractor. Further, Navy officials stated that the Navy has never had a significant presence at the laboratory, leaving the contractor free to make operational decisions, including those involving disposal. In the 1960s, about 40 government employees worked on site with 3,200 contractor personnel; in the 1990s, about 4 government staff members worked with 500 contractor personnel. The Naval Industrial Reserve Ordnance Plant, Fridley, occupies about 83 acres in the city of Fridley, Minnesota, within the Minneapolis-St. Paul metropolitan area. Since 1941, the plant has produced gun mounts, torpedo tubes, and missile-launching systems. Through changes in ownership, the same company has operated the plant for more than 54 years. Northern Ordnance, Inc., formerly a subsidiary of Northern Pump Company, operated the facility from 1942 to 1964.
At that time, FMC Corporation purchased the company and continued operations until 1994, when United Defense Limited Partnership, a subsidiary of FMC, took over the plant's operations. Manufacturing at Fridley generated hazardous waste that contaminated soil and groundwater with petroleum, oil, and other lubricants and with such volatile organic chemicals as trichloroethane. Contamination has resulted from a leaking sewer system under one of the plant's production buildings. The plant was placed on EPA's National Priorities List in 1989. Contamination was also discovered at off-site locations, including the operating contractor's private facility next to Fridley and three municipal landfills. From the 1940s through 1969, the contractor disposed of chemicals and other hazardous waste materials on 18 acres it owned south of the Fridley facility. In addition, FMC disposed of foundry sand at landfills in Andover, East Bethel, and Oak Grove, Minnesota, and it was subsequently named as a potentially responsible party under CERCLA. Chemicals now considered to be carcinogens were reportedly detected in the foundry sand, but FMC stated that the chemicals were absorbed by the sand after its disposal at the landfill. The Fridley site was divided into three units for investigation and cleanup: groundwater, soils around the building, and soils under the building. A 1990 record of decision for the first unit called for initially pumping and treating contaminated groundwater and discharging it into a sanitary sewer. Later, a permanent groundwater extraction system would treat groundwater for discharge to the Mississippi River. The final remedy for the second unit is being developed. It involves containing contaminated soils and buried drums of waste and later removing the contamination. For the third unit, the remedial investigation begun in September 1996 will serve as the basis for further studies and actions. Total cost estimates for Fridley increased from fiscal year 1993 to 1995. DOD's estimate increased from $13 million in fiscal year 1993 to about $37.9 million in 1994 and $49 million in 1995. The Navy's estimate increased from about $17 million in 1993 to $30.7 million in 1994 and $52 million in 1995. Navy officials attributed the increase from 1993 to 1994 to changes in estimates of future cleanup activities, completion dates, and related costs. Also, the Navy used a projection model in July 1994 to estimate future cleanup costs based on such factors as the type and degree of contamination. For 1995, Navy officials attributed the large increase to additional investigations that revealed more extensive contamination needing cleanup. Navy officials indicated that the latest difference between the DOD and Navy estimates resulted from a reevaluation of the cleanup program between the time the Navy and DOD estimates were prepared. We found additional costs of about $4 million not reported by either DOD or the Navy. Neither included the following:
- Contractors were paid $3.1 million for off-site cleanup. (The Navy reimbursed FMC $1.9 million that FMC had paid to clean up its private facility next to Fridley. The Navy also reimbursed FMC about $1.3 million for costs incurred to clean up three municipal landfills where it had disposed of waste from the Navy-owned Fridley sites. The reimbursements total $3.1 million, with rounding. According to a DOD official, the state of Minnesota may reimburse some of the money to FMC and thus to the Navy.)
- EPA was paid $481,000 through the Superfund for oversight and technical assistance.
- Approximately $106,000 was paid by the Army Corps of Engineers to the state of Minnesota for regulatory oversight and technical assistance.
- The Navy paid $269,000 for cleanup before DERA funds were available.
- A study funded by the Naval Facilities Engineering Command cost $60,000.
In addition, costs beyond 1994 for EPA oversight are expected to exceed $1.78 million. DOD's report for fiscal year 1994 did not show any projected cleanup costs for 1995 and 1996. This was corrected in the 1995 report, which showed a total reported cost of about $8 million. Navy officials said the Navy will clean up the Fridley facility and then determine whether to pursue cost sharing with FMC. According to Navy officials, the Navy has never sent a letter to FMC requesting financial participation in the cleanup, but it did ask the contractor to review and comment on the Navy's new cleanup policy in September 1989. In its October 1989 response, FMC disagreed with the Navy's policy to "require current GOCO contractors to pay for any and all cleanup costs associated with their operation of Navy facilities." According to the FMC response, the nature of the company's relationship with the Navy and related contractual obligations does not justify its paying for cleaning up the hazardous waste sites associated with its operations. FMC stated that under its contract, it is required to perform only normal maintenance on the facility: "Remediation of hazardous waste sites at the facility would clearly fall in the category of maintenance over and above normal maintenance that would either be performed by the Navy or by FMC at Navy expense." However, according to a Navy official, the contractor was free to make operational decisions at the facility, including those involving disposal. He stated that the Navy never had a significant presence at Fridley. For example, in the 1970s, about 70 or 80 government employees worked on site with about 2,000 contractor personnel, and in the 1990s, about 60 government employees worked with 1,500 contractor personnel. As noted above, the Navy reimbursed FMC $1.9 million for costs to clean up the contractor's facility adjacent to Fridley. Following a contracting officer's final decision to deny FMC its requested reimbursement of $2.2 million, FMC appealed to the Armed Services Board of Contract Appeals. According to a Navy legal official, after extensive discussion, the decision to pay FMC was based on litigation-related risk and cost. The reimbursement was reduced to $1.9 million because FMC had recovered $275,000 through an action against Northern Pump Company, the former parent company of the subsidiary that FMC purchased in 1964. FMC filed a claim with its insurance company to recover some of the private facility's cleanup costs. In addition to the previously noted $1.3 million Navy reimbursement to FMC for the company's cleanup costs at the three municipal landfills, FMC has requested another $1.3 million for these facilities. A DOD official indicated that part of these past costs may be recovered because the state of Minnesota is reimbursing companies involved in settlements to pay for cleaning up the landfills. If FMC receives such a payment, DOD is to be reimbursed its share. We visited two active Air Force manufacturing facilities: Air Force Plant 4 in Fort Worth, Texas, and Plant 44 near Tucson, Arizona.
The two plants are among the four that the Air Force plans to retain as it completes its divestiture, which has already reduced Air Force GOCO plants from a post-World War II high of over 100 to the current nine. Cleanup at the nine remaining Air Force GOCOs is expected to exceed $245 million. The Air Force Deputy General Counsel issued guidance in December 1995 that deals with cost-sharing arrangements with other potentially responsible parties, including plant operators. The guidance states that there is substantial legal rationale for negotiating shared responsibility for environmental remediation costs, based on the facts of the situation, especially where the contractor may have liability insurance. The guidance recognizes that CERCLA "contemplates that potentially responsible parties, including both the owner and the operator, are responsible and will share the costs of environmental remediation." It states that "there should be neither an assumption that the government is responsible for and will pay 100 percent of a company's environmental remediation costs, nor an assumption that the government would not pay for any of these costs under other contracts or continuing liability under the GOCO contract." According to an Air Force memorandum, the Air Force now begins cost-sharing negotiations by proposing equal sharing of costs between the Air Force and plant operators unless evidence shows that the government or the operator had a greater responsibility or other responsible parties were identified. The memorandum noted that equal sharing is an appropriate starting place for negotiations because the Air Force has never exercised day-to-day control over the work of GOCO plant operators and thus has had little or no ability to control contractors' compliance with environmental laws and regulations. The Air Force recently completed cost-sharing negotiations with a GOCO operator: Thiokol, the former operator of Plant 78 in Utah, has agreed to share equally with the Air Force the costs of cleaning up contamination at the plant. According to Air Force officials, the decision to pursue cost sharing at other locations will ultimately depend on whether the service identifies other responsible parties at each plant. Air Force officials stated that the Air Force's cost recovery efforts have been hindered by indemnifications of other DOD contractors and other factors. Budget cuts have delayed searches for other responsible parties, and the Air Force does not have the financial management systems needed to track all environmental cleanup costs for recovery purposes. Contractor officials at the two plants we visited believe they are not liable for environmental cleanup costs, citing various contract provisions. They also stated that contractor reimbursements by the government for environmental cleanup costs are not prohibited by law or regulation. According to Air Force officials at the two sites visited, the Air Force intends to pay for cleanup and then recover costs from other responsible parties. Air Force Plant 4, Fort Worth, Texas, began operations in 1942, when Consolidated Aircraft manufactured B-24 bombers. General Dynamics operated the plant from 1953 until 1993, when Lockheed acquired General Dynamics' Fort Worth operations. These Lockheed operations now produce F-16 fighter jets, spare parts, radar units, and missile components. Manufacturing at Plant 4 generated hazardous waste, including waste oils, fuels, paint residues, solvents, heavy metals, and process chemicals.
Groundwater and soil were contaminated, primarily with trichloroethylene, chromium, and petroleum byproducts. Four major plumes of groundwater contamination originate at the plant and extend offsite, including two plumes that are contaminating the drinking water aquifer that serves as a municipal water source for the city of White Settlement. In addition, the contaminated drinking water aquifer is near a creek that borders the plant. This creek discharges into the Lake Worth Reservoir, which is the primary drinking water source for Fort Worth. Plant 4 was placed on EPA's National Priorities List in August 1990. Site investigations began in 1984, and the Air Force has begun six remedial actions since 1992, all of which are ongoing. These actions consist primarily of groundwater pump-and-treat, extraction of vapors from soil, and excavation and disposal of contaminated soil. Based on a November 1994 Air Force Materiel Command review of the cleanup program at Plant 4, the Air Force canceled its plans to build a $25-million groundwater treatment system because monitoring indicated that the contaminants in the groundwater are slowly biodegrading. According to the Air Force remedial project manager for Plant 4, remedial actions will be taken only for sites that present an immediate risk, such as contaminated soils or areas where contaminated groundwater is affecting drinking water. The remedial project manager expects regulatory approval of a record of decision, documenting the final plan for cleaning up the site, in 1997. According to Air Force field estimates, cleanup at Air Force Plant 4 will cost $79.6 million, which is over $16 million more than the nearly $63 million reported in DOD's fiscal year 1994 and 1995 reports to Congress. Most of the difference between the two estimates related to future costs. The field estimates were prepared by the Aeronautical Systems Center in Dayton, Ohio, which is responsible for managing Air Force GOCO plants and the associated environmental cleanup activities. DOD's estimate was based on Air Force headquarters information from an automated cost-estimating program that considers, among other things, historical information from similar sites where cleanup has been completed. Regardless of which estimate is more accurate, both excluded some cleanup costs, although the total excluded is unknown. According to Air Force officials, these included such expenses as those incurred prior to 1984, costs claimed through overhead, projects paid for with compliance funds, and reimbursements to state regulatory agencies for oversight. For example, the field estimate excluded nearly $4 million that was used for preliminary assessments, site investigations, and interim remedial actions in 1983 and 1984; DERA funds were not available prior to 1984. A Center official said that costs can be estimated only roughly until a record of decision has been signed, confirming the cleanup remedy decision. For example, DOD's fiscal year 1993 estimate of $113 million was reduced in 1994 to $63 million partly because of the previously cited decision to cancel a major groundwater treatment facility. The facility became unneeded when the Air Force found that the hydrogeologic conditions at the affected site were conducive to natural biodegradation. The Air Force has paid all the costs of the plant's cleanup to date. A decision about whether to pursue recovery of any of those costs depends on the Air Force's search for responsible parties, which will be conducted in fiscal year 1997.
General Dynamics and Lockheed officials believe that existing and former contracts obligate the Air Force to pay for all environmental cleanup costs. Lockheed officials believe that cleanup costs incurred by contractors are normal costs of doing business and thus generally allowable, as long as they are reasonable, allocable, and meet other contract provisions. According to General Dynamics, the agreement between General Dynamics and Lockheed for the sale of the Fort Worth Division set forth how the parties would allocate the environmental liability if costs were not reimbursed by the Air Force. Contractor officials noted that this agreement did not constitute an admission of liability.

Air Force Plant 44, in Tucson, Arizona, has been operated by Hughes Missile Systems Company since its construction in 1951. Hughes currently produces electronic and tactical missile systems at the plant. Manufacturing at Plant 44 generated hazardous waste that contaminated soil and groundwater. Contaminants included trichloroethylene as well as chromium and other metals. The Tucson International Airport area, contiguous to Plant 44, was placed on the National Priorities List in 1983, and Plant 44 is a unit within that site because it is one of four source areas that contributed to a large groundwater contamination plume.

Site investigations began in 1981, when the Air Force initiated a groundwater monitoring program. Based on a 1986 record of decision, a groundwater remediation program began with a pump-and-treat system and numerous extraction and recharge wells. According to Air Force and contractor officials, the contaminated plume has since been reduced by nearly 70 percent and has broken into several smaller plumes, but contamination still exceeds the levels EPA allows for drinking water. The Air Force submitted a separate Plant 44 feasibility study to EPA in January 1995 and is developing several cleanup strategies, including a remedy to accelerate the soil cleanup.

At the time of our review, Air Force field estimates indicated total cleanup at Plant 44 would cost about $90.9 million by 2002, which is higher than either the $61.3 million reported in DOD’s fiscal year 1994 report to Congress or the $73.6 million in DOD’s subsequent 1995 report. According to Air Force officials, the database used to prepare the DOD estimate in both years was missing nearly $19 million in historical DERA costs (see the reconciliation sketch below). Air Force headquarters officials believed that field data are more accurate for historical costs because the records of actual obligations reside in the field. Air Force headquarters officials told us that projected costs differ because headquarters used an automated cost-estimating system. Headquarters officials believe their projections, which were lower than the field’s in both the 1994 and 1995 reports, will prove to be more realistic. According to the Plant 44 remedial project manager, his estimates are more accurate because they are based on contracted studies and historical cost figures for operating a groundwater treatment plant. Historical cost estimates from DOD and the field excluded costs funded by sources other than DERA, such as costs incurred prior to the account’s establishment in 1984, costs claimed by contractors through overhead charges, more than $50,000 paid to state regulators for oversight, and cleanup costs paid out of compliance funds.
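Most of the gap between the Plant 44 field and headquarters estimates is accounted for by the missing historical DERA costs. The reconciliation below is a rough, illustrative check using only the figures quoted above (dollars in millions); it is not an official DOD or Air Force computation.

    # Plant 44: reconciling the field estimate with DOD's FY 1995 estimate.
    # Figures are from the report text; dollars in millions.
    field_estimate = 90.9        # Air Force field estimate through 2002
    dod_fy1995 = 73.6            # DOD's fiscal year 1995 report to Congress
    missing_dera_history = 19.0  # historical DERA costs absent from DOD's database

    gap = field_estimate - dod_fy1995
    adjusted_dod = dod_fy1995 + missing_dera_history
    print(f"Field vs. DOD FY 1995 gap:     ${gap:.1f}M")
    print(f"DOD FY 1995 + missing history: ${adjusted_dod:.1f}M")
    print(f"Remaining difference:          ${field_estimate - adjusted_dod:.1f}M")
    # The adjusted DOD figure slightly exceeds the field estimate, so the
    # small remaining difference reflects the lower headquarters projections.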
As an example of these excluded costs, the plant has spent over $3 million in compliance funds on cleanup projects and may similarly use another $3 million that is currently obligated to compliance projects. In accordance with the December 1995 Air Force guidance for cost sharing at its GOCO plants, Air Force officials plan to search for responsible parties in the future at Plant 44, depending on the availability of DERA funds.

Hughes officials disclaim responsibility for sharing the cleanup costs, saying the Air Force is contractually obligated to pay for all historical environmental cleanup costs. We reported in July 1994 that a 1987 memorandum from the former Air Force Systems Command said that Hughes was indemnified from responsibility for past groundwater contamination. Our November 1994 report noted that Air Force officials did not believe that the memorandum indemnified Hughes. According to an Air Force attorney, Air Force officials will not make a formal decision about Hughes’ potential liability until cost recovery becomes an issue. Hughes entered into a new lease agreement with the Air Force that makes Hughes liable for all environmental claims resulting from releases that arise from acts or omissions occurring on or after the effective date of the lease. Hughes and the Air Force are each to be equally liable for claims resulting from unknown conditions after the lease’s effective date, up to a dollar ceiling for Hughes. According to an Air Force attorney, the amount of that ceiling is proprietary information.

We visited two Defense Fuel Support Points managed by DLA at Norwalk, near Los Angeles, California, and Ozol, near Oakland, California. These fuel support points, two of 25 worldwide, are operated by contractors for DLA’s Defense Fuel Supply Center. The center purchases bulk refined petroleum products, coal, natural gas, and synthetic fuels for the military services and federal civilian agencies around the world. The Defense Fuel Supply Center’s policy and practice have been to recover most cleanup costs for past contamination through a fuel surcharge assessed to its customers, rather than with DERA funds. This surcharge, according to a center official, is about 1 cent per barrel (see the illustrative sketch below). We found no evidence that the center has recovered environmental cleanup costs from its former operators. Current operators are to be held responsible for a fuel spill if they are negligent in attending to a leak at the facility.

The center does not have a written policy that directs the investigation of cost-sharing opportunities with potentially responsible parties such as former owners, lessees, or neighboring property owners. A complicating factor for DLA’s cost sharing in fuel-related cleanups is that CERCLA excludes certain petroleum products from the definition of hazardous substances. In such cases, joint and several liability under CERCLA may not apply, and DLA may need to either negotiate with other responsible parties or bring legal action against them to recover contamination-related damages at its facilities. In discussing this issue, a center official stated that the center has considered developing a cost-sharing policy to encourage cost recovery and consistency in cost-sharing approaches. According to center officials, the center has an unwritten policy to pursue cost recovery. In addition, they believe that the existing general guidance on property damage should have the same effect, if followed.
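The 1-cent-per-barrel surcharge spreads past cleanup costs across fuel customers rather than drawing on DERA. A minimal sketch of the arithmetic follows; the report does not state the center's annual throughput, so the volume below is a hypothetical placeholder, and the recovery horizon is purely illustrative.

    # Illustrative recovery horizon for the Defense Fuel Supply Center's
    # cleanup surcharge. The surcharge rate and cleanup cost come from the
    # report; annual_barrels is an ASSUMED figure for illustration only.
    surcharge_per_barrel = 0.01    # dollars per barrel, per a center official
    annual_barrels = 100_000_000   # hypothetical annual volume, all customers
    cleanup_cost = 16_500_000      # Norwalk on- and off-site estimate, dollars

    annual_recovery = surcharge_per_barrel * annual_barrels
    print(f"Annual surcharge revenue: ${annual_recovery:,.0f}")
    print(f"Years to recover a Norwalk-scale cleanup: "
          f"{cleanup_cost / annual_recovery:.1f}")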
DLA’s Norwalk facility is a 50-acre fuel storage depot in Los Angeles County, about 20 miles southeast of the city of Los Angeles. From 1923 until 1951, the Norwalk site was owned by a number of private oil companies. In 1951, the site was purchased by the Air Force. DLA has operated the facility since 1968. Tenco Services, Inc., has been the operating contractor of the facility since 1992. Santa Fe Pacific Pipeline leases about 2 acres of land at the facility and has operated a fuel pump station there for over 25 years. Contamination exists both on and off the site in the form of oil-contaminated soils and underground fuel plumes resulting from fuel leaks. Three contamination plumes have been identified on site; one stemmed largely from the lessee’s activities. A fourth plume is off site and resulted from a 200,000-gallon leak from a center pipeline under an intersection in the nearby town of Tustin. From 1991 through 1994, several assessments were performed at the facility, and monitoring wells and soil borings were installed and drilled. In 1992 and 1993, a total of about 3,300 gallons of liquid hydrocarbons were removed by a recovery system that was installed for the Santa Fe plume within the southern portion of the facility. Another project removed 4,713 gallons of liquid hydrocarbons from seven off-site wells adjacent to the site during 1992 and 1993. Delays have slowed investigations at the off-site location, and damage has not yet been fully characterized. According to Norwalk officials, gaining access to the surrounding properties to install test wells has been the major obstacle. According to Center officials, the Norwalk facility’s on- and off-site cleanup will cost about $16.5 million, about half for each portion, and will be completed in 2010. The estimate was submitted to DOD and was accurately reflected in DOD’s annual reports to Congress for fiscal year 1994, but was excluded from its 1995 report. A factor that could affect DLA costs includes private party leasing of part of the facility. The lessee was expected to contribute $7.5 million toward the facility’s cleanup cost. Additional costs that could arise include four claims totaling about $1.6 million that nearby property owners have filed against the center. The claims allege that contamination from the site has reduced the owners’ property values or prevented them from developing or selling their properties. Center officials have not included any amount for claims in their estimate because the claims have not been decided. The center also prepared a worst-case estimate for Norwalk, with total costs of about $34.7 million. According to officials, the higher estimate reflects not a more expensive cleanup remedy, but potential increases in the cost of testing, monitoring, operations and maintenance, system installation, pump replacement, and other such activities. Completion would still be expected in 2010. The center is not attempting to recover any cleanup costs from present or former contractors at the site. An investigation performed at this site identified Santa Fe Pipeline, the lessee, as a potentially responsible party for this site. The center and Santa Fe are currently negotiating cost sharing for cleanup, and center officials believe that the company will fund about $7.5 million in cleanup costs. Center officials have not identified any possible cost recovery options for Norwalk’s off-site cleanup at the Tustin intersection, a cleanup that is expected to cost between $8 million and $13 million. 
Officials of the operating contractor believe that a third party may have damaged the center’s pipeline by digging in the intersection to install a separate pipeline. A Defense Fuel Region West official believes that DLA center officials could have been more aggressive in attempting to identify the responsible party when the leak was first discovered. Center officials stated that little, if any, evidence was gathered to prove that another party damaged the pipeline.

The Ozol facility is a fuel storage depot near the town of Martinez, California, about 25 miles northeast of Oakland. The facility was constructed in 1959 by the Holley Corporation and leased to the federal government until the Air Force purchased it in 1980. DLA has managed the facility since 1980, and Tenco Services, Inc., has operated it since 1990. Aviation gasoline and jet fuel are present in soil and groundwater around and beneath the storage tanks, apparently from leaks in the tanks and pipes. Four distinct groundwater fuel plumes have been identified. In 1985, a pilot recovery system was installed to remove fuel and its byproducts southwest of the lower tank area. This recovery system consisted of a collection trench/recovery well, an air stripper, and a recovered fuel holding tank. In addition, a small, low-volume, passive oil/water separator was installed to remove fuel north of the upper tank field. However, both of these systems have been taken out of use pending selection of a final remedy.

According to center officials, cleanup at the Ozol facility will cost about $6.4 million. This estimate was accurately reflected in DOD’s annual report to Congress for fiscal year 1994 but was excluded from the 1995 report. Officials expect the cleanup to be completed in 2002. The center’s worst-case estimate totals about $37 million, with cleanup completed in 2017. The difference in treatment costs would arise if active pump-and-treat and vapor-removal systems were required instead of the current plan, which is to allow contaminated soils and groundwater to biodegrade naturally. The center is not attempting to recover any cleanup costs from present or former contractors at the site because center officials do not believe that contractor action caused the contamination. According to a DLA legal official, DLA is not pursuing cost recovery from the former owner of the site because it believes the contamination occurred after the property’s transfer in 1980.
Pursuant to a congressional request, GAO examined the Department of Defense's (DOD) policies and practices regarding cleanup of environmental contamination at government-owned, contractor-operated (GOCO) plants, as a followup to its previous reports that showed inconsistent policies and practices on cost sharing. GAO reviewed nine higher-cost case studies at the Defense Logistics Agency (DLA) and the military services to: (1) assess the consistency of cost-sharing practices across DOD; and (2) compare the service cleanup estimates against DOD's. GAO noted that: (1) the services' policies and practices for having contractors share cleanup costs still vary widely; (2) notwithstanding GAO's recommendations to do so, DOD has not given the services adequate guidance for making decisions on whether and when to seek recovery of environmental cleanup costs incurred by DOD from contractors and other parties at GOCO facilities; (3) the Army authorized indemnifying its operating contractors from cleanup costs at ammunition plants; (4) Navy policy requires cost-recovery efforts, but the Navy has not initiated timely requests for cost sharing or followed up; (5) the Air Force is beginning to seek participation in cleanup costs from its operating contractors; (6) regarding cleanup at GOCO facilities GAO visited, DOD's fiscal year (FY) 1994 report to Congress included cleanup costs that were closer to the military services' supporting data than DOD's reported FY 1993 estimates; (7) DOD's estimates for cleaning up the 78 GOCO facilities increased from $1.4 billion in FY 1993 to $3.6 billion in 1994, but decreased somewhat to $3.3 billion in 1995; (8) although DOD and the services have addressed GAO's recommendations to improve cost information, their estimates of past and projected costs still differ, and not all costs were included; (9) for example, the 1995 estimate decreased in part because DOD excluded $19.1 million in unfunded Navy cleanup requirements that should have been reported, and DLA cleanup costs totaling $101 million in FY 1994 that would be funded by customer surcharges; (10) GAO also found many additional expenses that were not included in either DOD or service cost estimates; (11) because Superfund holds parties liable for the billions of dollars needed to remediate past contamination regardless of wrongdoing, it is important that DLA and the services deal with potentially responsible parties on the basis of consistent policy and accurate data; (12) however, the lack of DOD guidance on cost sharing has permitted inconsistencies in cost-sharing approaches and the potential for some parties to be held responsible for cleanup costs while others in similar situations are not; and (13) if cost-sharing agreements are reached, omissions in historical information and cost data may inhibit the recovery of all appropriate costs.
This section provides information on the characteristics of RDD and IND attacks, major cities considered at high risk of terrorist attack, core capabilities for all hazards preparedness, and response planning and associated federal guidance in the national preparedness system.

A radiological attack is defined as an event or series of events leading to the deliberate release, or potential release into the environment, of radioactive materials in sufficient quantity to require consideration of protective actions. Such an act would probably be executed with no advance warning. The typical means of dispersing radioactive material in an RDD is through a conventional explosion. The possible consequences of an RDD incident vary widely depending on the type and size of the device and the extent of dispersal. According to FEMA officials, the most likely RDD attack would affect a small area and would not result in acutely harmful radiation doses, but it could have latent effects that increase exposed individuals’ risk of cancer. In contrast, an IND attack would produce a nuclear explosion from fissile material, which releases extreme heat and powerful shock waves and disperses radiation that would be lethal for a significant distance. It also produces radioactive fallout, which would deposit radioactive material over a large area. If fission is not achieved, the effects of the explosion may resemble those of an RDD. A 2011 Congressional Research Service report states that the use of an RDD is more likely than an IND because the radioactive material needed to construct an RDD is more accessible and an IND would be more difficult for a terrorist to make. In both cases, early response within the first hours includes initial actions to protect public health and welfare.

In 2003, DHS established an Urban Areas Security Initiative program to allocate homeland security grants to enhance and sustain the capacity to prevent, protect against, mitigate, respond to, and recover from acts of terrorism in high-density urban areas, particularly the urban centers. The program identifies these high-density urban areas by their major city. For example, the Chicago area includes 3 states, 14 counties, and 10 principal cities. Figure 1 shows the 31 major cities in the Urban Areas Security Initiative program in fiscal year 2012 within the 10 FEMA regions.

In the National Preparedness Goal, DHS identified core capabilities needed for each of the five national preparedness mission areas. These core capabilities are considered necessary for an all hazards, capability-based approach to preparedness planning across all levels of government, although no single level of government needs to possess all of them. The five mission areas have in common three core capabilities—planning, public information and warning, and operational coordination—in addition to other capabilities specific to each mission area. For the response mission area, there are 11 additional core capabilities, for a total of 14. In compiling the list of core response capabilities, DHS drew on the capabilities that would be needed to respond to a large earthquake, a major hurricane, and a weapon of mass destruction attack. Table 1 describes the activities for each of the 14 core capabilities in the response mission area that DHS considers necessary to save lives, protect property, and meet basic human needs after a catastrophic incident, such as an RDD or IND attack.
Under the national preparedness system, FEMA has issued guidance to help planners at all levels of government develop and maintain viable, all hazards emergency operations plans. This guidance describes how to develop, maintain, and implement emergency operations plans. While the basic emergency operations plan is oriented around an all hazards approach, FEMA guidance states that special policies may be necessary to respond to catastrophic incidents, such as an RDD or IND attack. According to FEMA guidance, local governments have the discretion to address these attacks in specific plans that are annexed to a city’s emergency operations plan, and the inclusion of these annexes will vary based on a jurisdiction’s assessment of the risks it faces. DHS guidance establishing the national preparedness system recognizes that since local governments will focus their planning efforts on the more likely risks, federal planning must complement these planning efforts for low-probability, high-consequence risks, such as a terrorist attack using an RDD or IND. In 2008, DHS issued preliminary guidance for RDD and IND response planning to federal, state, and local governments through a Federal Register announcement; additional federal guidance on responding to an IND attack followed in 2010. Some professional organizations have also published guidance covering measures that state and local governments should consider in responding to RDD and IND attacks. Figure 2 illustrates the conceptual response planning framework, including possible nuclear or radiological attack annexes as supplements to all hazards operational or emergency operations plans supporting national preparedness.

Many major city emergency managers, although not all, reported in response to our questionnaire that their city had assessed the risks of RDD and IND attacks and had ranked those risks lower than the risks of other hazards their city faces. The results of our questionnaire also show that fewer than half of the major cities that responded had developed specific RDD and IND response plans. Most of the major cities that reported having RDD and IND response plans also reported having conducted exercises to validate those plans.

We asked emergency managers to refer to their city’s most recently completed Hazard Identification and Vulnerability Assessment and report whether they assessed the risk of RDD or IND attacks and, if so, where those risks ranked relative to the other hazards assessed by their city, such as hurricanes, tornadoes, and flooding. All 27 cities responded to our question regarding their assessment of the risks of RDD and IND attacks. Three major cities reported that they had not completed a Hazard Identification and Vulnerability Assessment or a similar assessment, and 6 reported that, while they had a recent assessment, it did not include either RDD or IND attacks. Of the remaining 18 cities, 7 combined RDD and IND attacks into a single risk in their assessments, 9 assessed the risk of RDD and IND attacks separately, and 2 assessed the risk of an RDD attack but did not assess the risk of an IND attack.
Of the 11 cities that assessed the risk of an RDD attack separately, 7 ranked the risk as lower than most or all other hazards their city faces. Of the 9 cities that separately assessed the risk of an IND attack, 7 ranked the risk as lower than most or all other hazards their city faces. Most cities that conducted separate risk assessments for both RDD and IND attacks ranked the risk of an RDD attack higher than the risk of an IND attack. Table 2 shows the approach taken by the major cities responding to our questionnaire for assessing the risks of RDD and IND attacks, as well as the percentage of cities for each approach that ranked these risks lower than most or all other hazards they face.

According to the responses to our questionnaire, fewer than half of the major cities have response plans that specifically address RDD and IND attacks, although some emergency managers indicated that their city had these plans in development. Of the 27 major cities that responded to our questionnaire, the emergency managers of 11 (41 percent) reported that their city had completed RDD response plans, and those of 8 (30 percent) reported completed IND response plans (these counts are tallied in the sketch below). Some emergency managers for cities that did not have specific RDD and IND response plans reported that they would rely on other plans in the event of such an attack, including their city’s emergency operations plan or hazard management plan. Table 3 identifies the extent to which major cities have hazard-specific RDD or IND response plans.

The questionnaire results regarding the number of cities with specific response plans for RDD and IND attacks are generally consistent with prior analyses conducted by FEMA. In 2010, FEMA conducted a national review of the contents of state and urban area emergency operations plans. FEMA found that more than 80 percent of urban areas reported that their emergency operations plans were well suited to meet the challenges presented during large-scale or catastrophic incidents; however, fewer than half expressed confidence that specific RDD and IND response plans annexed to their emergency operations plans were adequate to manage such attacks. Forty percent of the urban areas had confidence in their RDD response plans, with 10 percent providing no response. Thirty percent said they had confidence in their IND response plans, with 20 percent providing no response.

Most emergency managers responding to our questionnaire who reported having specific RDD or IND response plans also reported having conducted exercises, consistent with federal guidance, to validate those plans. According to FEMA, a response plan should not be considered complete until exercises are conducted to validate it. Of the 11 cities that have specific RDD response plans, the emergency managers of 9 reported that their city had participated in RDD exercises from 2010 to 2012. Of the 8 cities that have specific IND response plans, the emergency managers of 5 reported that their city had participated in IND exercises over this same time period. These results are comparable to FEMA’s 2010 national review of emergency operations plans, which found that plans were frequently exercised.
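As a quick check of the percentages reported above, the minimal tally below uses only counts stated in the text; the computation is a verification, not new data.

    # Verify the reported shares of cities with hazard-specific response plans.
    CITIES_RESPONDING = 27
    completed_plans = {"RDD response plans": 11, "IND response plans": 8}

    for plan, count in completed_plans.items():
        share = 100 * count / CITIES_RESPONDING
        print(f"{plan}: {count} of {CITIES_RESPONDING} cities ({share:.0f} percent)")
    # Output: RDD 41 percent, IND 30 percent, matching the figures above.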
In that review, 95 percent of all states and urban areas (including the major cities in our questionnaire) had conducted exercises using their basic plans, an increase from the previous review in 2006. The response planning annexes subject to the most exercises included those involving the response to the release of hazardous materials, which can include the dispersal of radioactivity from RDD and IND attacks.

Major city emergency managers responding to our questionnaire varied widely in their perception of their cities’ abilities to respond within the first 24 hours (early response) to an RDD or IND attack. DHS guidance applicable to major cities on the capabilities needed for early response is limited for an RDD attack; more such guidance exists for the early response to an IND attack. According to FEMA officials, the agency is considering developing additional guidance on nuclear and radiological incidents to be annexed to the forthcoming FIOPs for the response and recovery mission areas. Such guidance may help interested cities prepare specific response plans to supplement their all hazards emergency operations plans.

Our analysis of the questionnaire responses from major city emergency managers showed a wide variation in their perceptions of their cities’ abilities to respond within the first 24 hours to the RDD or IND attack depicted in the National Planning Scenarios, but most perceived that their city was more able to conduct the early response to an RDD attack than to an IND attack. To gather this information, we obtained the emergency managers’ self-assessments of their cities’ abilities for early response to the national planning scenarios for RDD and IND attacks, but we did not ask them to assess their level of ability for each of the 14 federal core response capabilities. We also asked them to consider mutual aid from other jurisdictions in estimating their early response abilities, but not to consider assistance from federal sources. For example, 7 of 27 cities were perceived by their emergency managers as being able to conduct, without federal assistance, all of the activities needed for early response to an RDD attack—such as treating casualties—while 2 cities were perceived as being able to conduct all of the activities needed for an IND attack. Moreover, all cities were perceived by their emergency managers as being able to conduct at least a few early response activities for an RDD attack, while 10 cities were perceived as not being able to conduct any early response activities for an IND attack. Overall, more emergency managers perceived that their city was able to conduct some, almost all, or all of the early response activities needed for an RDD attack (22 of 27 cities) than for an IND attack (7 of 27 cities). Ten major cities reported not having any ability to conduct the early response after an IND attack, even considering assistance from the surrounding jurisdictions and their states, which would suggest a high expectation for federal assistance during early response. For RDD, emergency managers from 20 major cities reported perceiving that their city was not able to conduct all of the necessary early response activities, which may also suggest some expectation for federal assistance. These counts are summarized in the sketch below and in figure 3.
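The tabulation below is consistent with the counts quoted above; note that the two middle categories are inferred by subtraction from the stated totals, not read from figure 3, so it is an illustrative reconstruction only.

    # Perceived early response ability among the 27 responding cities
    # (mutual aid included, federal assistance excluded). The "all",
    # "some or better", and "none" counts are from the report; the
    # remaining categories are inferred by subtraction.
    TOTAL = 27

    rdd_all, rdd_some_or_better, rdd_none = 7, 22, 0  # every city could do at least a few
    ind_all, ind_some_or_better, ind_none = 2, 7, 10

    for label, all_, some_plus, none in (
            ("RDD", rdd_all, rdd_some_or_better, rdd_none),
            ("IND", ind_all, ind_some_or_better, ind_none)):
        few = TOTAL - some_plus - none
        print(f"{label}: all={all_}, some/almost all={some_plus - all_}, "
              f"few={few}, none={none}")
    # RDD: all=7, some/almost all=15, few=5,  none=0
    # IND: all=2, some/almost all=5,  few=10, none=10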
Figure 3 shows the distribution of major cities among five categories of early response activities following an RDD or IND attack, based on emergency manager perceptions of their city’s early response abilities, including mutual aid from other jurisdictions but not federal assistance. The wide variation among emergency manager perceptions of their cities’ abilities to conduct the early response to RDD and IND attacks is not necessarily related to whether a city had specific response plans. Our analysis found that 10 of the 17 major cities perceived by their emergency managers as being able to conduct all or almost all of the necessary response activities for an RDD attack did not have specific RDD response plans. For IND attacks, three of the four major cities perceived by their emergency managers as being able to conduct all or almost all of the necessary response activities did not have specific IND response plans.

Regarding the wide differences in emergency managers’ perceptions of their cities’ abilities to respond, FEMA officials told us that some cities will tend to overestimate the risks of an RDD attack, due to their lack of understanding about dispersed radioactive material, and underestimate their actual abilities to conduct responses across the federal core response capabilities. They told us that cities in states with nuclear power plants are likely to have a greater understanding of the possible effects of a radiological attack and thus might be able to assess the risks and their cities’ abilities to respond better than other cities. These cities would have access to state technical and resource assistance developed and exercised to respond to a radiological dispersal incident at a nuclear power plant, which would have some characteristics of the dispersal of radioactive material by an RDD. Moreover, FEMA officials told us that cities closer to federal offices and facilities tend to have more interaction with FEMA and NNSA subject matter experts and are likely to have a greater understanding of the nature of an RDD attack. In regard to IND attacks, FEMA officials told us they would expect emergency managers to report that such an attack would overwhelm their city’s response resources and that their city would need federal assistance across most federal core response capabilities.

DHS has provided limited guidance on the early response capabilities needed by cities for a large RDD attack based on the planning assumptions contained in the National Planning Scenarios, but more such guidance exists for the IND attack those scenarios depict. DHS has identified the core capabilities needed to respond to any catastrophic incident but generally not the specific capabilities needed by cities for early response to these attacks. DHS guidance contained in an annex to the 2008 National Response Framework states that an RDD or IND attack directed at a major city can have consequences that overwhelm the capabilities of state and local governments and may also seriously challenge existing federal response capabilities. In regard to RDD response, this DHS guidance states that major cities should be able to respond to small radiological releases with only occasional federal assistance, but it does not address the large RDD attack depicted in the National Planning Scenarios.
According to FEMA and NNSA officials, additional federal guidance may not be necessary because they expect major cities to be able to respond to the more likely smaller-scale RDD attack—as they would to a hazardous materials spill—rather than the large RDD attack. If needed, the federal response to a hazardous materials release is described in an emergency support function covering oil and hazardous materials releases that is annexed to the National Response Framework. DHS has also issued guidance on protective actions that should be taken at various phases of response, including early response to the dispersal of radioactive materials, such as in an RDD attack. However, the only detailed planning assumptions in current federal guidance for an RDD attack are those in the National Planning Scenarios, and those apply to a large RDD attack. DHS has not provided guidance on the early response capabilities needed by major cities for such an attack. According to NNSA officials, cities are likely to reach out for federal support in the case of either a large or a small-scale RDD attack, due to the rarity of the event and the high profile of any radiological emergency.

The federal government has issued more guidance pertaining to early response to an IND attack, largely based on the National Planning Scenario. In 2009, DHS issued an interim concept of operations plan for the federal response to the IND attack. This federal operations plan states that the federal priority in the first 24 hours is to assist in saving lives and reducing casualties, while advising those in the incident area to shelter in the nearest structure and listen for instructions from authorities. This federal operations plan also directs the states and local governments to delineate control zones, coordinate evacuations, make shelter-in-place decisions, issue protective action recommendations, initiate decontamination procedures, and use the National Guard to assist with environmental monitoring, but it provides limited information on the capabilities needed to complete these actions. In 2010, a federal interagency task force issued planning guidance to all levels of government that expanded on the 2008 DHS planning guidance by addressing gaps in IND response, expanding the discussion of needed capabilities, and examining other IND scenarios beyond the one identified in the National Planning Scenario. This 2010 guidance presents general background information that builds a foundation for specific planning recommendations on response to an IND attack during the first 24 to 72 hours, prior to the arrival of significant federal resources. The guidance states that other recommendations would be forthcoming, such as for establishing critical communications among first responders. It recognizes that response planning must be done on a city-specific basis using city-specific impact assessments. However, it also points out that response to an IND will largely be provided from neighboring jurisdictions, which would require advance planning to establish mutual aid and response protocols. Notwithstanding the specific planning recommendations, the 2010 planning guidance does not detail the early response capabilities needed by major cities for an IND attack in relation to other sources of assistance.
Without greater awareness of and additional federal guidance on the capabilities needed by cities for early response to both RDD and IND attacks, cities may not have the information they need to adequately prepare for and respond to them. Any gaps in response capabilities could lead to complications that result in greater loss of life and economic impacts. Figure 4 provides a simple illustration of the capability requirements for increasing levels of incident effects, with an IND attack likely to be the highest level of incident effect.

FEMA is considering the need to develop a nuclear and radiological annex, as depicted in figure 2, to help guide federal response activities and possibly assist in the development of specific response plans for RDD and IND attacks as supplements to city emergency operations plans. This federal nuclear and radiological annex would be attached to the forthcoming FIOPs—currently under review for approval—for the all hazards planning framework for the response and recovery mission areas. FEMA officials told us that a nuclear and radiological annex may be needed to supplement these FIOPs because their all hazards orientation would not address several unique requirements and concepts of operations specifically tailored to the needs of nuclear and radiological incidents. The need for such an annex is also supported by a 2012 DHS report that found the response and recovery needs after a radiological attack differ from those of traditional all hazards incidents due to the need for decontamination activities, heightened public anxiety, long-term risk management, and substantial disruption to citizens’ lives and the economy. FEMA officials said that if they decide to develop a nuclear and radiological annex, it could help guide adjustments to FEMA regional operational plans. They also told us that these adjustments to the regional operational plans may help encourage major cities in FEMA regions to develop annexes to their all hazards emergency operations plans covering specific RDD and IND response plans.

FEMA has not determined what it might include in the nuclear and radiological annex or how to address RDD and IND response planning. FEMA officials told us that this annex is expected to address RDD and IND attacks, as well as a broader spectrum of radiological dispersal incidents, such as nuclear power plant accidents. According to FEMA guidance, separate hazards can be grouped under a more general category, such as terrorist acts, but FEMA recognizes that grouping hazards with a wide range of consequences under a single category—as might be the case with RDD and IND attacks—can create problems that affect subsequent analysis. FEMA officials provided information comparing the characteristics of RDD and IND attacks, as shown in table 4; one of these characteristics is the magnitude of the attacks. FEMA officials told us that if they decided to develop the nuclear and radiological annex, they would also consider the need to clarify the planning assumptions for these incidents, particularly the RDD attack scenario. An additional FEMA consideration in developing the nuclear and radiological annex to the FIOPs is the information recently gained from the agency’s participation in a multigovernmental initiative to develop an IND regional operations plan for Chicago, which is intended to guide development of other regionally based IND operations plans.
For example, FEMA found that the development of this IND regional operations plan provided information on needed early response capabilities, coordination of stakeholder groups, the type and timing of federal assistance, and the level of effort to complete the plan. The IND planning team determined that the most feasible course of action to save the greatest number of lives during early response involved concentrating on a limited number of activities around public information and warning, operational coordination and communications, on-scene security and protection, situational assessment, and shelter-in-place and evacuation. These activities are covered by 7 of the 14 federal core response capabilities in the national preparedness goal. In addition, IND planning team members told us that the planning effort gave them a greater appreciation of the communication and coordination activities needed across stakeholder groups to respond to an IND attack. The planning effort involved more than 300 local, state, and federal emergency management offices and private entities. Moreover, the IND planning team was able to develop a detailed spreadsheet containing the type and timing of assistance that might be available and needed—at the three response phases—for an IND attack on Chicago.

Developing the plan took time and substantial funding: the IND planning process began in 2010 and had cost about $7.6 million by 2012, when the plan was completed. This cost includes the project work of the city, the state of Illinois, neighboring states, and federal agencies that contributed to the development of the overall plan. As a result of the IND planning effort in Chicago, FEMA officials told us that they plan to use the information gained to assist other major cities seeking to develop similar operations plans with regional partners. FEMA officials told us that they also plan to undertake this planning initiative in Boston, the District of Columbia, and Houston during fiscal year 2013, with planning initiatives in Los Angeles, New York, and Philadelphia to follow. In addition, FEMA officials told us that they plan to look for geographic and infrastructure similarities, such as common building structures and transportation systems, in each region to expedite the planning process and reduce planning costs for other cities in a region. FEMA’s Response Planning Division has allocated about $3.8 million for IND planning activities for fiscal year 2013. FEMA officials also told us they thought an IND response plan would be sufficient to address most of the response needs after an RDD attack as well.

Emergency managers of major cities responding to our questionnaire reported varying levels of need for federal support in early response to RDD and IND attacks, in the form of technical and resource assistance, procedures and information for early response activities, and preparedness funding. Emergency managers identified a number of areas for federal technical and resource assistance, but we found limitations in the federal guidance applicable to major cities on the type and timing of this assistance. Emergency managers of major cities also reported the need for federal government research that could improve procedures and information for their early response to RDD and IND attacks.
DHS has supported working groups of subject matter experts to help mitigate shortcomings in response capabilities for IND attacks, which may have applications for improving RDD response capabilities. In addition, emergency managers reported that a decrease in federal funding would affect their abilities to conduct early response to RDD and IND attacks.

Most emergency managers from major cities responding to our questionnaire reported that they need federal technical and resource assistance to support their early response to RDD and IND attacks, but federal guidance on the type and timing of this assistance is not found in a single document and may not be well understood by emergency managers. Nineteen of 27 emergency managers perceived a need for federal technical and resource assistance for early response to an RDD attack, and 21 perceived a need for this assistance in early response to an IND attack. Our analysis of questionnaire responses determined that, of the 14 core response capabilities, the one emergency managers most often cited as needing federal technical and resource assistance for both RDD and IND attacks (11 of 27 cities each) was situational assessment. Situational assessment provides decision makers with technical information such as the nature and extent of the hazard, its cascading effects, and the status of the response. For RDD attacks, after situational assessment, the emergency managers’ next most frequently cited federal assistance needs were public health and medical services (8 of 27 cities), operational coordination (5 of 27 cities), and on-scene security and protection (5 of 27 cities). For IND attacks, after situational assessment, the next most frequently cited needs were on-scene security and protection (5 of 27 cities) and public health and medical services (5 of 27 cities). These counts are tallied in the sketch below. We also obtained several responses from emergency managers regarding actions, such as planning, that the federal government should take to help sustain and improve early response capabilities. For example, one emergency manager commented that integrated RDD and IND plans of local, state, and federal government roles and responsibilities are nonexistent. Another emergency manager stated that the federal government should provide a model RDD and IND response plan and templates to assist local jurisdictions’ efforts.

The type and timing of federal assistance to major cities during the early response to an RDD or IND attack may not be well understood by all major city emergency managers; although some guidance is available, it is spread across different documents. For example, in 2008, DHS issued guidance in the National Response Framework on federal agency responsibilities for responding to incidents involving the release of nuclear, radiological, and hazardous materials, and it introduced the concept of phases of response in planning guidance for RDD and IND attacks. However, these and other guidance documents do not contain complete information on the type and timing of federal technical and resource assistance by response phase. For RDD attacks, DHS has not provided specific operational guidance on the type and timing of federal assistance that might be made available to cities for early response, although some information is available in an emergency support function annex to the National Response Framework.
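The tally below ranks the most frequently cited assistance needs using only the counts quoted above; the ranking logic is illustrative, and capabilities cited by fewer cities are omitted because the report does not enumerate them.

    # Core capabilities most often cited as needing federal technical and
    # resource assistance (counts of cities, out of 27, from the report).
    rdd_needs = {
        "situational assessment": 11,
        "public health and medical services": 8,
        "operational coordination": 5,
        "on-scene security and protection": 5,
    }
    ind_needs = {
        "situational assessment": 11,
        "on-scene security and protection": 5,
        "public health and medical services": 5,
    }

    for attack, needs in (("RDD", rdd_needs), ("IND", ind_needs)):
        ranked = sorted(needs.items(), key=lambda item: item[1], reverse=True)
        summary = "; ".join(f"{name} ({count} of 27)" for name, count in ranked)
        print(f"{attack}: {summary}")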
For IND attacks, in contrast, DHS identified the technical and resource activities that federal agencies could provide in a 2009 federal interim concept of operations plan and, more recently, in the 2012 IND regional operations plan for Chicago. In addition, federal agencies have described on their websites the type and timing of the federal assistance that might be available. However, FEMA officials told us that the type and timing of the federal response would depend on the proximity of the city to federal offices. Confusion over the type and timing of assistance, such as federal assistance in the case of an IND attack, could produce a disjointed and untimely early response to the attack that might increase its consequences. Based on information from a variety of sources, some of which may not be readily available to major cities, we developed an illustration of the federal agencies most likely to assist these cities during early response with activities associated with the core capabilities they have indicated they can support after an IND attack. While other federal agencies are involved through their emergency support functions under the National Response Framework, a senior FEMA official told us that the four federal agencies we identified in figure 5 would be the most involved during the first 24 hours after an IND attack. Figure 5 illustrates the federal technical and resource support for the core capabilities necessary for major city early response to an IND attack.

Most emergency managers responding to our questionnaire indicated that their cities perceive a need for federal government research that could improve procedures or information for their early response to RDD and IND attacks. Using DHS guidance, we developed a list of 10 topic areas for federal government research initiatives that could improve procedures or information used by cities during the first 24 hours after the detonation of an RDD or IND. We asked emergency managers for their opinions on how much impact, if any, each topic area might have on improving their city’s capability for early response to an RDD or IND attack. For example, emergency managers from two-thirds of the major cities (18 of 27) identified communicating a sheltering-in-place strategy to the public and communicating potential impacts of radiation exposure to the public as the topic areas having the highest impact. Figure 6 shows emergency managers’ responses identifying the topic areas their city considers important for improving procedures or information necessary for early response to RDD and IND attacks.

We compared the emergency managers’ responses on the impact that improved procedures and information might have on their city’s early response to RDD and IND attacks with the research initiatives being considered by six FEMA IND focus working groups to determine whether they align. FEMA has established these working groups of subject matter experts to mitigate shortcomings in response capabilities for an IND attack, such as by clarifying responsibilities and coordinating efforts among government levels and federal agencies. We found that the areas addressed by the working groups through their initiatives are generally the same as the topic areas emergency managers reported as having a high impact on procedures and information needed for early response to RDD and IND attacks.
FEMA does not have current plans to identify or attempt to fill potential gaps in capabilities for early response specifically to an RDD attack. However, IND focus area working group experts and a senior FEMA official told us that their efforts to fill gaps in IND capabilities and all hazards plans would have application for other catastrophic hazards, including RDD attacks. For example, they told us that RDD and IND attacks share some common attributes, such as (1) the release of radiological materials, (2) the need for decontamination and radiation treatment, and (3) prioritization of response resources and personnel based on ethical, philosophical, legal, and practical decision tools. In addition, both types of attacks require communication of consistent information about radiation effects on general health outcomes and protective measures. Further, NNSA officials told us they also have several programs that might apply to RDD as well as IND planning and capability enhancements.

Almost all emergency managers responding to our questionnaire (24 of 27 cities for RDD and 23 of 27 cities for IND) indicated that their city needs federal funding to maintain its current early response capabilities. According to the 2008 National Response Framework, response capabilities are developed within the national preparedness system through effective planning, coordinating, training, equipping, and exercising activities. These activities are essential elements of an integrated, capability-based approach to preparedness. Emergency managers reported that a decrease in federal funding would affect the degree to which each of these activities builds the capabilities needed for early response to an RDD or IND attack. Our analysis of questionnaire results indicated that about a third of the 27 cities identified equipping, training, and planning as the activities whose contributions to needed capabilities would be most affected by a decrease in federal preparedness funding, with fewer cities indicating that coordination and exercising would be affected.

Federal funding to support preparedness against terrorist attacks and other catastrophic incidents such as RDD and IND attacks currently comes from seven DHS grant programs. In fiscal year 2013, DHS allocated more than $1.5 billion for these seven grant programs, but officials in charge of these programs were unable to determine how much of the funding was used by major cities to improve early RDD and IND response capabilities. Two of the DHS grant programs that have been most relevant to response preparation for an RDD or IND attack are the Homeland Security Grant Program and the temporary (fiscal years 2008 to 2011) Regional Catastrophic Planning Grant Program. In fiscal year 2013, DHS allocated more than $968 million to the Homeland Security Grant Program, and more than half of that (roughly $560 million) was allocated to Urban Areas Security Initiative grants, a portion of which goes to law enforcement terrorism prevention activities; the sketch below puts these allocations in proportion. The Regional Catastrophic Planning Grant Program awarded $14 million in grants in fiscal year 2011, the last year it made awards, to support regional planning efforts to address catastrophic incidents.
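The relative sizes of these fiscal year 2013 allocations follow directly from the figures above; the shares below are rounded, illustrative arithmetic, not budget data beyond what the report states.

    # Fiscal year 2013 DHS preparedness grant allocations cited above, dollars.
    total_seven_programs = 1_500_000_000  # "more than $1.5 billion", seven programs
    hsgp = 968_000_000                    # Homeland Security Grant Program ("more than")
    uasi = 560_000_000                    # Urban Areas Security Initiative ("roughly")

    print(f"HSGP share of the seven programs: {hsgp / total_seven_programs:.0%}")
    print(f"UASI share of HSGP:               {uasi / hsgp:.0%}")
    # Roughly 65 percent and 58 percent -- consistent with "more than half"
    # of HSGP funds going to Urban Areas Security Initiative grants.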
Regional Catastrophic Planning Grant funds were used by New York and New Jersey, for example, to develop RDD and IND response plans and were also combined with other federal funding to support development of the IND regional operations plan for Chicago. According to members of the IND planning team for Chicago, the Regional Catastrophic Planning Grant Program provided the funding to bring together the many stakeholder groups that would be involved in responding to an IND attack, and, without continued funding, it would be difficult to maintain the same level of collaboration. Appendix III provides a detailed breakdown of the seven federal grant programs for fiscal year 2013.

DHS has recognized that the early response to catastrophic incidents such as an RDD or IND attack on a major city is critical and must come first from the city and surrounding jurisdictions. While cities are assumed to be preferred targets for an RDD or IND attack, many emergency managers indicated in response to our questionnaire that their cities ranked the risk of these attacks as lower than other hazards their cities face, and fewer than half of the cities have specific response plans for such attacks. City emergency managers rely on federal guidance to prepare all hazards emergency operations plans and also specific response plans for hazards of concern that can be annexed to these emergency operations plans at the discretion of the city. However, we found limitations in the federal planning guidance applicable to the early response capabilities needed by cities for an RDD attack of the size depicted in the National Planning Scenarios. More federal planning guidance applicable to major cities has been developed for IND response, primarily based on the event depicted in the National Planning Scenarios, but this guidance does not detail the early response capabilities needed by major cities in relation to other sources of assistance.

Emergency managers’ perceptions of their cities’ abilities to conduct the activities needed for early response to the type of RDD attack described in the National Planning Scenarios—with assistance from surrounding jurisdictions but not the federal government—varied widely, from being able to conduct all early response activities to being able to conduct only a few. Less variation was evident for perceived early response abilities for an IND attack, considering this same source of assistance, with many cities indicating that such an attack would overwhelm their response abilities. Most cities indicated the need for federal technical and resource assistance—among other areas of federal support—for early response to RDD and IND attacks, but we found that complete guidance on the type and timing of this assistance is not readily available in a single document and is not well understood by all major city emergency managers. Any confusion over the type and timing of federal assistance could produce a disjointed and untimely early response to an attack that might increase its consequences. Without greater awareness of existing federal guidance and continued actions to close gaps in the guidance applicable to cities’ early response to RDD and IND attacks, some cities may not have the information they need to adequately prepare for and respond to them. Lack of adequate response planning could lead to complications that result in greater loss of life and economic impacts.
To provide assistance to major cities in planning for early response to RDD and IND attacks, we recommend that the Secretary of Homeland Security direct the Administrator for the Federal Emergency Management Agency to promote greater awareness of existing federal guidance and develop additional guidance where appropriate to clarify the capabilities needed by cities for these attacks, including the planning assumptions for an RDD attack and the type and timing of federal assistance for early response. We provided a draft of this report to DHS and to DOE through NNSA for review and comment. DHS did not concur with our recommendation and provided written comments, which are reproduced in appendix IV. In addition, in an e-mail received August 27, 2013, the Director, Audit Coordination and Internal Affairs Office of Management and Budget (NA-MB-1.1) for NNSA stated that as the recommendations in the report are directed to DHS and FEMA, NNSA would not be preparing a formal response. DHS and NNSA provided technical comments, which we incorporated as appropriate. In the comment letter, DHS states that FEMA program officials and subject matter experts are concerned that our survey may have resulted in the receipt of skewed data and information that affected our analysis and conclusions. For example, the letter stated that some respondents may have believed that an IND/RDD event would or could be fully handled at the local level and therefore provided responses partial toward taking on an inordinate level of responsibility. They also stated that our recommendation runs contrary to the survey results, which they said illustrate a trend of grouping RDD and IND attacks for analysis and planning. We disagree. Our questionnaire explicitly asked city emergency managers to consider response assistance from surrounding jurisdictions in their assessment of response abilities and to exclude only federal assistance. In this way, we were able to isolate the perceived need for federal support, which we found for technical and resource assistance, improved procedures and information, and funding. Further, emergency managers did not provide trend information through their questionnaire responses on assessing RDD and IND risks, and cities more often separated the assessment of RDD and IND risks than combined them as DHS indicated in its comment letter. In addition, we took a number of steps to develop the questionnaire and to identify the best source for a response. For example, we conducted extensive pretesting and obtained comments on a draft questionnaire from officials at FEMA's National Preparedness Assessment Division. We also determined that city emergency managers were in the best position to provide a city-wide perspective on this issue, but we allowed them to seek advice from other city officials as necessary. In its comment letter, DHS stated that neither the department nor FEMA believes that our recommendation (that FEMA develop additional guidance in a single document to clarify the capabilities needed by cities for these attacks, including the planning assumptions for an RDD attack and the type and timing of federal assistance for early response) takes sufficient account of advances made to the preparedness system. In addition, the comment letter states that FEMA program officials and subject matter experts believe the recommendation does not align with our survey results or with all-hazard risk management for worst case catastrophic scenarios.
It also states that FEMA has concerns with the report's characterization of the nation's ability to respond to a nuclear and/or radiological attack. Furthermore, the letter states that additional RDD response guidance in a single document would be counterproductive to the existing planning and guidance structure for IND and all hazards incidents. As we note in the report, FEMA is already considering development of a nuclear and radiological incident annex to the FIOPs for the response and recovery mission areas based on the recognition that the all hazards approach may be insufficient to cover the unique response needs for nuclear and radiological incidents. While our recommendation did not specify how FEMA should provide this additional guidance, we have added language to the recommendation to clarify that we are not recommending a single guidance document to cover only RDD response. In addition, to address the DHS/FEMA concern that city emergency managers may not be fully informed about available guidance, we added language to the recommendation for FEMA to promote greater awareness of existing federal guidance. More generally, DHS's comment letter states that FEMA officials do not believe we provided adequate context for the National Preparedness System as defined by Presidential Policy Directive 8, which may cause confusion for readers not familiar with how the directive has been implemented. In addition, the comment letter states that the operational frameworks, structures, and ongoing efforts that have been developed in support of the directive's comprehensive approach to national response are not fully outlined in the report; specifically, the letter states that the Nuclear and Radiological Incident Annex to the National Response Framework and the Emergency Support Functions, as the nation's coordinating structures, are not accurately portrayed, and it cites figure 5 as an inaccurate and misleading depiction of the federal response for an IND/RDD event. While the purpose of this report was not to conduct a detailed assessment of federal guidance or implementation of the directive, we added additional information about FEMA's leadership role in coordinating the federal response to nuclear and radiological incidents within the context of the National Response Framework and its efforts to develop IND response planning guidance. We also added that the illustration in figure 5 of federal agency support for core response capabilities that might be available to cities during the first 24 hours after an IND attack does not include all federal agencies' activities but only those of the four agencies confirmed by a senior FEMA official as being most present during this time period. We are sending copies of this report to the Secretary of Homeland Security and the Secretary of Energy, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix V.
In our review, we examined major cities' (1) assessment of the risks of RDD and IND attacks and the extent to which they have developed plans for responding to them, (2) perceptions of their abilities to respond to RDD and IND attacks in the first 24 hours (early response), and (3) perceptions of their need for federal support in the early response to RDD and IND attacks. To address these questions, we sent a questionnaire to major city emergency managers, and we conducted interviews with outside experts and federal, state, and local officials. To gather information from major U.S. cities relevant to all three of our objectives, we developed a questionnaire for the directors of emergency management for each of the 31 major cities that were in the Urban Areas Security Initiative (UASI) program in fiscal year 2012. We chose the major cities within each UASI region because in our document review, as well as in interviews with the Federal Emergency Management Agency (FEMA), UASI regions were identified as higher risk jurisdictions for terrorist acts, including those using RDDs or INDs. Because FEMA guidance states that local jurisdictions should plan and develop capabilities to respond to incidents based on risk, these jurisdictions need to develop plans and prepare to respond to RDD and IND attacks. Each of the 31 UASI locations covers a large metropolitan area that includes many local governments. For example, the Chicago UASI includes 3 states, 14 counties, and 10 principal cities. We did not send the questionnaire through the UASI structure itself because the number of jurisdictions involved could have compromised both the reliability of the answers and the consistency of the process each UASI used to complete the questionnaire. FEMA officials told us that the largest metropolitan area within each UASI constitutes the area at the highest risk for attack in each jurisdiction. In addition, we selected city emergency managers to receive the questionnaire because they were in the best position to provide a city-wide perspective on the level of preparedness to respond to RDD and IND attacks. Therefore, we chose to send questionnaires only to the emergency managers of these large metropolitan areas. The emergency management offices in Atlanta and Newark did not respond to our contact attempts, so we sent questionnaires to the 29 cities that did respond. In developing our questionnaire, we drafted questions that addressed all three of the report objectives and had them reviewed both internally and by staff of FEMA's National Preparedness Assessment Division. We conducted seven cognitive pretests with emergency management officials and first responders from major cities selected for their geographic location and population size in order to minimize errors that might occur from respondents interpreting our questions differently than we intended. During these pretests, we also interviewed these emergency management officials and first responders to gain additional context regarding their city's preparedness for responding to either an RDD or IND attack. The questionnaire was implemented as a self-administered Microsoft Word form e-mailed to respondents. We sent e-mail notifications to emergency managers beginning on December 11, 2012. We then sent the questionnaire and a cover e-mail to officials on December 12, 2012, and asked them to fill in the questionnaire form and e-mail it back to us within 3 weeks.
To encourage emergency managers to complete the questionnaire, we sent e-mail reminders and a replacement questionnaire to nonrespondents approximately 1 week after, and again 3 weeks after, the initial questionnaire was distributed. We also made follow-up phone calls to nonrespondents from January 24, 2013, to February 8, 2013. We closed the questionnaire on February 20, 2013. We received 27 completed questionnaires for an overall response rate of 87 percent; Phoenix and San Antonio did not return the questionnaire. Because we attempted to collect data from each of the UASI major cities rather than a sample of major cities, there was no sampling error. However, the practical difficulties of conducting any questionnaire may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses were processed and analyzed, or the types of people who do not respond can influence the accuracy of the questionnaire results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors and help ensure the accuracy of the answers that were obtained. For example, a GAO social science questionnaire specialist designed the questionnaire, in collaboration with GAO staff with subject matter expertise. The draft questionnaire was pretested to ensure that questions were relevant, clearly stated, and easy to comprehend. The questionnaire was also reviewed by external experts and a second GAO questionnaire specialist. Data were electronically extracted from the Microsoft Word form questionnaires into a comma-delimited file that was then imported into a statistical program for analyses. No manual data entry was performed, thereby removing an additional potential source of error. We examined the questionnaire results and performed computer analyses to identify inconsistencies and other indications of error and addressed such issues as were necessary. Additionally, we contacted respondents to clarify ambiguous responses when necessary. Quantitative data analyses and the compilation of open-ended responses were conducted by the first GAO questionnaire specialist using statistical software and working directly with GAO staff with subject matter expertise. An independent GAO data analyst checked the statistical computer programs for accuracy. Responses to closed-ended (e.g., Yes/No) questions were summarized as standard descriptive statistics. Responses to open-ended (i.e., narrative) questions were analyzed through content analysis. In conducting the content analysis, one GAO analyst reviewed each open-ended response from each emergency manager to identify recurring themes. Using the identified themes, the analyst then developed categories for coding the responses. A second GAO analyst reviewed the responses from each emergency manager and reviewed the first analyst's themes and categories to reach concurrence on the themes and categories. Each of the two GAO analysts then independently reviewed the answers to all open-ended questions and placed them into one or more of the categories. The analysts then compared their coding to identify any disagreements and reached agreement on all items through discussion.
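To illustrate the closed-ended tabulation and the two-coder agreement check described above, the following minimal Python sketch reads a hypothetical comma-delimited extract and reports simple counts. It is a sketch only: the file name (responses.csv), the column names, and the percent-agreement statistic are illustrative assumptions; the actual analysis was performed with a statistical program and the coding process described above.

    import csv
    from collections import Counter

    # Hypothetical comma-delimited extract of the Microsoft Word form data,
    # one row per responding city; file and column names are assumptions.
    with open("responses.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # Closed-ended (Yes/No) items summarized as standard descriptive statistics.
    counts = Counter(row["rdd_plan_complete"] for row in rows)
    total = sum(counts.values())
    for answer in ("Yes", "No"):
        n = counts.get(answer, 0)
        print(f"RDD plan complete, {answer}: {n} of {total} ({100 * n / total:.0f}%)")

    # Simple percent agreement between the two independent coders of one
    # open-ended item; in practice, disagreements were resolved by discussion.
    agree = sum(1 for row in rows
                if row["coder1_category"] == row["coder2_category"])
    print(f"Coder agreement: {agree} of {len(rows)} ({100 * agree / len(rows):.0f}%)")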
For the analysis of the open-ended responses on the city's ability to respond to either an RDD or IND attack, we developed six categories based on the number of early response activities a city stated it could provide. Specifically, we reviewed the responses looking for whether the city would be overwhelmed, the number of specific activities the city stated it would conduct, and significant challenges it would face after the attack. To provide important context regarding current federal activities that relate to our second and third objectives on RDD and IND response planning and federal response capabilities, we traveled to Chicago to meet with federal, regional, state, and city planners who had participated in the interagency IND regional planning effort. In addition, we met with Department of Homeland Security and FEMA officials to learn how they may use insights gained from the interagency IND regional planning effort in Chicago for use in other major cities and for developing a potential nuclear and radiological annex to the draft federal interagency operational plans for the response and recovery mission areas. To obtain additional information for our third objective on the need and availability of federal early response support, we interviewed officials involved with federal research initiatives to close gaps in response capabilities, as well as those who oversaw planning funds and federal technical and resource assistance activities for RDD and IND attacks. Specifically, we interviewed FEMA officials responsible for emergency management interagency working groups, response planning, and grants; National Nuclear Security Administration emergency management response operations officials; Department of Energy national laboratory officials; and subject matter experts. We also reviewed relevant federal guidance documents. We conducted this performance audit from June 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Fiscal Year 2013 DHS Preparedness Grant Programs

Homeland Security Grant Program (HSGP), over $968 million in fiscal year 2013: For states and urban areas to prevent, protect against, mitigate, respond to, and recover from acts of terrorism and other threats. HSGP includes three components:
- State Homeland Security Program (SHSP), over $354 million: To support the implementation of state homeland security strategies to build and strengthen preparedness capabilities at all levels.
- Urban Areas Security Initiative (UASI), over $558 million: To enhance regional preparedness and capabilities in 31 high-threat, high-density areas.
- Operation Stonegarden (OPSG), $55 million: To enhance cooperation and coordination among federal, state, territorial, tribal, and local law enforcement agencies to jointly enhance security along the United States land and water borders.

Emergency Management Performance Grant (EMPG) Program: To assist state and local governments in enhancing and sustaining all hazards emergency management capabilities.

Tribal Homeland Security Grant Program: To implement preparedness initiatives to help strengthen the nation against risk associated with hazards including terrorist attacks.
Nonprofit Security Grant Program: To support physical security enhancements for nonprofit organizations determined to be at high risk of a terrorist attack and located within one of the FY 2012 UASI-eligible urban areas.

Intercity Passenger Rail (Amtrak) Program: To protect critical surface transportation infrastructure and the traveling public from terrorism and increase Amtrak rail system resilience.

Port Security Grant Program: To protect critical port infrastructure from terrorism, enhance maritime domain awareness, and strengthen risk management capabilities to protect against improvised explosive devices and other nonconventional weapons.

Transit Security Grant Program: To protect critical surface transportation and the traveling public from acts of terrorism and to increase the resilience of transit infrastructure.

In addition to the individual named above, Ned Woodward, Assistant Director; Cheryl Arvidson; Thomas Laetz; Eli Lewine; David Lysy; Mike Silver; Kathy Smith; and Kiki Theodoropoulos made key contributions to this report.
A terrorist attack in a major city using an RDD or an IND could result not only in loss of life but also in enormous psychological and economic impacts. Major cities are assumed to be preferred targets of such attacks, and local governments, along with their states, have primary responsibilities for early response (within the first 24 hours), with assistance from federal sources, as necessary, coming later. A disjointed or untimely response could increase the impact and undermine public confidence in federal, state, and local governments' ability to respond to an attack. GAO was asked to review issues related to response preparedness for RDD and IND attacks. This report examines major cities' (1) assessment of RDD and IND risks and development of response plans, (2) perceptions of their abilities to respond within the first 24 hours, and (3) perceptions of the need for federal support in early response to these attacks. GAO primarily relied on questionnaire responses from emergency managers of 27 of the 31 major cities that the Department of Homeland Security considers to be at high risk for terrorist attack, the review of pertinent federal guidance, and interviews with FEMA officials and others. Many, although not all, emergency managers from the 27 major cities responding to GAO's questionnaire reported that their city had assessed the risks of a terrorist attack using a radiological dispersal device (RDD) or improvised nuclear device (IND) and had ranked the risk of these attacks as lower than the risk of other hazards they face. Also, 11 of the 27 reported that they had completed RDD response plans, and 8 of the 27 reported that they had completed IND response plans. Some emergency managers for cities without specific RDD and IND response plans reported that they would rely on their city's all hazards emergency operations plan or hazard management plan if attacked. Most cities that had RDD and IND response plans reported conducting exercises to validate the plans based on federal guidance. Major cities varied widely in perceptions of their abilities to respond within the first 24 hours of RDD and IND attacks (early response). For example, all 27 cities were perceived by their emergency managers as being able to conduct at least a few of the early response activities after an RDD attack, such as treating casualties, with assistance from other jurisdictions but not federal assistance. Ten of those cities were perceived as not being able to conduct any of the response activities for an IND attack without federal assistance. GAO analysis found that these perceptions were not necessarily related to a city having RDD and IND response plans but rather to cities' understanding of nuclear and radiological incidents and the capabilities needed for response, according to information obtained from Federal Emergency Management Agency (FEMA) officials. GAO found limited federal planning guidance related to the early response capabilities needed by cities for the large RDD attack depicted in the national planning scenarios. Federal guidance may not be needed, according to FEMA officials, because they expect cities to address a more likely but smaller RDD attack--as they would a hazardous materials spill--with limited federal assistance. More federal planning guidance applicable to cities has been developed for IND response, but this guidance does not detail the early response capabilities needed by cities in relation to other sources of assistance.
Without greater awareness of and additional federal guidance on the capabilities needed by cities for early response to these attacks, cities may not have the information they need to adequately prepare for and respond to them. This could lead to complications that result in greater loss of life and economic impacts. Most emergency managers reported perceived needs for federal technical and resource assistance to support their cities' early response to RDD (19 of 27 cities) and IND (21 of 27 cities) attacks. However, GAO found that federal guidance on the type and timing of such assistance is not readily available or understood by all emergency managers. This condition could lead to a disjointed and untimely response that might increase the consequences of either kind of attack. Emergency managers also reported a need for improved procedures and more information that FEMA is addressing. In addition, most emergency managers reported their city needed federal funding to maintain current capabilities to respond to RDD and IND attacks. According to DHS guidance, response capabilities are developed through planning, training, equipping, and exercising, which are essential elements of an integrated, capability-based approach to preparedness. GAO recommends that FEMA develop guidance to clarify the early response capabilities needed by cities for RDD and IND attacks. FEMA did not concur with this recommendation. GAO believes that gaps in early response abilities warrant federal attention and has clarified its recommendation.
Information systems can be complex undertakings, consisting of a multitude of equipment, software products, and service providers. Each of these components may rely on one or more supply chains. Obtaining a full understanding of the sources of a given information system can also be extremely complex. According to the Software Engineering Institute, the identity of each product or service provider may not be visible to others in the supply chain. Typically, an acquirer, such as a federal agency, will only know about the participants directly connected to it in the supply chain. In addition, the complexity of corporate structures, in which a parent company (or its subsidiaries) may own or control companies that conduct business under different names in multiple countries, presents additional challenges to fully understanding the sources of an information system. As a result, the acquirer will have little visibility into the supply chains of its suppliers. Federal procurement law and policies promote the acquisition of commercial products when they meet the government's needs. Commercial providers of IT use a global supply chain to design, develop, manufacture, and distribute hardware and software products throughout the world. Many of the manufacturing inputs required for those products—whether physical materials or knowledge—are acquired from various sources around the globe. Figure 1 depicts the potential countries of origin of common suppliers of various components within a commercially available laptop computer. The Federal Information Security Management Act of 2002 (FISMA) establishes federal agency information security program requirements that support the effectiveness of information security controls over information resources that support federal operations and assets. Its framework creates a cycle of risk management activities necessary for an effective security program, and it assigns responsibilities to the National Institute of Standards and Technology (NIST) for providing standards and guidelines on information security. In its August 2009 revision of Special Publication (SP) 800-53 (Revision 3), which provides recommended security controls for federal agencies and organizations, NIST included for the first time a security control for supply chain protection (SA-12). SA-12 identified several specific measures organizations could use to provide additional supply chain protections, such as conducting due diligence reviews of suppliers; using trusted shipping and warehousing; and employing independent analysis and penetration testing of IT systems, components, and products. In addition, SP 800-53, Revision 3, includes a security control for system and service acquisition policies and procedures (SA-1). Thus, for systems where both controls are selected, agencies should develop, disseminate, and review acquisition policy and implementing procedures that help protect against supply chain threats throughout the system development life cycle. Further, in March 2011, NIST published SP 800-39, an approach to organizationwide management of information security risk, which states that organizations should monitor risk on an ongoing basis as part of a comprehensive risk management program. Reliance on a global supply chain introduces multiple risks to federal information systems and underscores the importance of threat assessments and risk mitigation. Supply chain threats are present at various phases of a system's development life cycle.
Key threats that could create an unacceptable risk to federal agencies include the following: installation of hardware or software containing malicious logic, which is hardware, firmware, or software that is intentionally included or inserted in a system for a harmful purpose; installation of counterfeit hardware or software, which is hardware or software containing non-genuine component parts or code; failure or disruption in the production or distribution of critical products resulting from manmade or natural causes; reliance on a malicious or unqualified service provider for the performance of technical services; and installation of hardware or software that contains unintentional vulnerabilities, such as defects in code that can be exploited. Such threats can have a range of impacts, including allowing attackers to take control of systems and read, modify, or delete sensitive information; decreasing the reliability of IT equipment; decreasing the availability of material needed to develop systems; or allowing remote attackers to cause a denial of service, among other things. Threat actors can introduce these threats into federal information systems by exploiting vulnerabilities that could exist at multiple points in the global supply chain. In addition, supply chain vulnerabilities can include weaknesses in agency acquisition or security procedures, controls, or implementation related to an information system. Examples of types of vulnerabilities that could be exploited include acquisition of IT products or parts from sources other than the original manufacturer or authorized reseller, such as independent distributors, brokers, or the gray market; application of untested updates and software patches to information systems; acquisition of equipment, software, or services from suppliers without understanding their past performance or corporate structure; and use of delivery or storage mechanisms that are not secure. If a threat actor exploits an existing vulnerability, it could lead to the loss of the confidentiality, integrity, or availability of the system and associated information. Although the four agencies in our review—the Departments of Energy, Homeland Security (DHS), Justice, and Defense—have acknowledged the risks presented by supply chain vulnerabilities, they varied in the extent to which they have addressed these risks by (1) defining supply chain protection measures for department information systems, (2) developing implementing procedures for these measures, and (3) establishing capabilities for monitoring compliance with and the effectiveness of such measures. Three of the four departments have made limited progress in addressing supply chain risk: In May 2011, the Department of Energy revised its information security program, which requires Energy components to implement provisions based on NIST and Committee on National Security Systems guidance. However, the department was unable to provide details on implementation progress, milestones for completion, or how supply chain protection measures would be defined. Because it had not defined these measures or associated implementing procedures, the department was also not in a position to monitor compliance or effectiveness. Although its information security guidance mentions the NIST control related to supply chain protection, DHS has not defined the supply chain protection measures that system owners should employ.
The department's information security policy manager stated that it was in the process of developing policy that would address supply chain protection but did not provide details on when it would be completed. In addition, in the absence of such a policy, DHS was not in a position to develop implementation procedures or to monitor compliance or effectiveness. The Department of Justice has defined specific security measures for protecting against supply chain threats through the use of provisions in vendor contracts and agreements. Officials identified (1) a citizenship and residency requirement and (2) a national security risk questionnaire as two provisions that address supply chain risk. However, Justice has not developed procedures for ensuring the effective implementation of these protection measures or a mechanism for verifying compliance with and the effectiveness of these measures. By contrast, the Department of Defense has made more progress. Specifically, the department's supply chain risk management efforts began in 2003 and include a policy requiring supply chain risk to be addressed early and across a system's entire life cycle and calling for an incremental implementation of supply chain risk management through a series of pilot projects; a requirement that every acquisition program submit and update a "program protection plan" that is to, among other things, help manage risks from supply chain exploits or design vulnerabilities; procedures for implementing supply chain protection measures, such as an implementation guide describing 32 specific measures for enhancing supply chain protection and procedures for program protection plans identifying ways in which programs should manage supply chain risk; and a monitoring mechanism to determine the status and effectiveness of supply chain protection pilot projects, as well as monitoring compliance with and effectiveness of program protection policies and procedures for several acquisition programs. In addition, the four national security-related agencies participate in interagency efforts to address supply chain security, including participation in the Comprehensive National Cybersecurity Initiative, development of technical and policy tools, and collaboration with the intelligence community. In support of the cybersecurity initiative, Defense and DHS jointly lead an interagency initiative on supply chain risk management to address issues of globalization affecting the federal government's IT. Also, DHS has developed a comprehensive portfolio of technical and policy-based product offerings for federal civilian departments and agencies, including technical assessment capabilities, acquisition support, and incident response capabilities. Further, the four national security-related departments participate in an Office of the National Counterintelligence Executive-led initiative to (1) develop a common methodology for conducting threat assessments on entities that do business with the national security community and (2) request from agencies and centrally store copies of threat assessments for future use by components of the national security community. To assist the three national security-related agencies in better addressing IT supply chain-related security risks for their departmental information systems, we made several recommendations to the Secretaries of Energy and Homeland Security and the Attorney General.
Specifically, we recommended that Energy develop and document departmental policy that defines which security measures should be employed to protect against supply chain threats; develop, document, and disseminate procedures to implement the supply chain protection security measures defined in departmental policy; and develop and implement a monitoring capability to verify compliance with, and assess the effectiveness of, supply chain protection measures. In commenting on our report, Energy stated that it concurred with the spirit of our recommendations. Energy also expressed concern that the recommendations are not fully aligned with the administration's initiatives and stated that it believes policies and standards to address IT supply chain risk management must be coordinated at the national level, not independently through individual agencies. We agree that national or federal policies and standards should be coordinated and promulgated at the national or federal level. However, we also believe, as intended by our recommendations, that federal departments are responsible for developing departmental policies and procedures that are consistent and aligned with federal guidance. Our recommendations to Energy are based on and consistent with federal guidance on supply chain risk management. In addition, we recommended that DHS develop and document departmental policy that defines which security measures should be employed to protect against supply chain threats; develop, document, and disseminate procedures to implement the supply chain protection security measures defined in departmental policy; and develop and implement a monitoring capability to verify compliance with, and assess the effectiveness of, supply chain protection measures. In commenting on a draft of our report, DHS concurred with our recommendations and described steps the department is taking to address them, including developing departmental policy to define supply chain protection measures, examining risk management procedures, and exploring options for verifying compliance with and effectiveness of its supply chain protection measures. We also recommended that Justice develop, document, and disseminate procedures to implement the supply chain protection security measures defined in departmental policy; and develop and implement a monitoring capability to verify compliance with, and assess the effectiveness of, supply chain protection measures. Justice concurred with the recommendations. In summary, the global IT supply chain introduces a myriad of security vulnerabilities to federal information systems that, if exploited, could threaten the confidentiality, integrity, and availability of federal information systems. Thus, the potential exists for serious adverse impact on an agency's operations, assets, and employees. These risks highlight the importance of national security-related agencies fully addressing supply chain security by defining measures and implementation procedures for supply chain protection and monitoring compliance with and the effectiveness of these measures. Until these agencies develop comprehensive policies, procedures, and monitoring capabilities, increased risk exists that they will be vulnerable to IT supply chain threats. Chairman Stearns, Ranking Member DeGette, and Members of the Subcommittee, this completes my statement. I would be happy to answer any questions you have at this time. If you have any questions regarding this statement, please contact Gregory C.
Wilshusen at (202) 512-6244 or [email protected]. Other key contributors to this statement include Michael W. Gilmore (Assistant Director), Bradley W. Becker, Kush K. Malhotra, and Lee McCracken.
Information technology (IT) systems and the products and services that support them are essential to the operations of the federal government. These products and services are delivered through a complex global supply chain, and the exploitation of vulnerabilities in the IT supply chain is an emerging threat. Federal law requires establishment of information security programs, and implementing standards and guidelines provide for managing supply chain risk. GAO was asked to testify on its recently issued report that, among other things, identified key risks associated with the supply chains used by federal agencies to procure IT equipment, software, and services, and assessed the extent to which four national security-related agencies have addressed such risks. In producing that report, GAO analyzed federal acquisition and information security laws, regulations, standards, and guidelines; examined departmental policies and procedures; and interviewed officials from four national security-related departments, the intelligence community, and nonfederal entities. Reliance on a global supply chain introduces multiple risks to federal information systems and underscores the importance of threat assessments and mitigation. Supply chain threats are present at various phases of a system's development life cycle and could create an unacceptable risk to federal agencies. Key supply chain-related threats include installation of intentionally harmful hardware or software (i.e., containing "malicious logic"); installation of counterfeit hardware or software; failure or disruption in the production or distribution of critical products; reliance on malicious or unqualified service providers for the performance of technical services; and installation of hardware or software containing unintentional vulnerabilities, such as defective code. These threats can have a range of impacts, including allowing attackers to take control of systems or decreasing the availability of critical materials needed to develop systems. These threats can be introduced by exploiting vulnerabilities that could exist at multiple points in the supply chain. Examples of such vulnerabilities include acquisition of products or parts from unauthorized distributors; application of untested updates and software patches; acquisition of equipment, software, or services from suppliers without knowledge of their past performance or corporate structure; and use of insecure delivery or storage mechanisms. These vulnerabilities could be exploited by malicious actors, leading to the loss of the confidentiality, integrity, or availability of federal systems and the information they contain. The four national security-related agencies in GAO's review—the Departments of Energy, Homeland Security, Justice, and Defense—varied in the extent to which they have addressed supply chain risks. Specifically, Energy and Homeland Security had not yet defined supply chain protection measures for department information systems and thus were not in a position to develop implementing procedures and monitoring capabilities. Justice has defined supply chain protection measures but has not developed implementation procedures or monitoring capabilities. Until these agencies develop comprehensive policies, procedures, and monitoring capabilities, increased risk exists that they will be vulnerable to IT supply chain threats.
By contrast, the Department of Defense has made greater progress: it has defined supply chain protection measures and implementing procedures and initiated efforts to monitor compliance and effectiveness. In addition, various interagency efforts are under way to address supply chain risks affecting federal IT. In its report, GAO recommended that the Departments of Energy, Homeland Security, and Justice take steps, as needed, to develop and document policies, procedures, and monitoring capabilities that address IT supply chain risk. In commenting on a draft of the report, the departments generally concurred with the recommendations.
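The departmental comparison above rests on a three-part yardstick: whether a department has (1) defined supply chain protection measures in policy, (2) developed implementing procedures, and (3) established a capability for monitoring compliance and effectiveness. The short Python sketch below encodes that yardstick using the statuses reported above; the data structure, scoring, and output format are illustrative assumptions, not a GAO or departmental tool.

    # Status per the findings above: (measures defined, procedures developed,
    # monitoring capability established).
    DEPARTMENTS = {
        "Energy": (False, False, False),
        "Homeland Security": (False, False, False),
        "Justice": (True, False, False),
        "Defense": (True, True, True),
    }

    ELEMENTS = ("defined protection measures", "implementing procedures",
                "monitoring capability")

    for dept, status in DEPARTMENTS.items():
        gaps = [name for name, done in zip(ELEMENTS, status) if not done]
        if gaps:
            print(f"{dept}: gaps remain in {', '.join(gaps)}")
        else:
            print(f"{dept}: all three elements in place")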
Homeland Security Presidential Directive 7, issued in December 2003, designates the Secretary of Homeland Security as the principal federal official responsible for leading, integrating, and coordinating the overall national effort to protect the nation's critical infrastructure and key resources. Homeland Security Presidential Directive 7 also requires all federal departments and agencies to identify, prioritize, and coordinate the protection of critical infrastructure and key resources from terrorist attacks. ASD(HD&ASA), within the Office of the Under Secretary of Defense for Policy, serves as the principal civilian advisor and the Chairman of the Joint Chiefs of Staff serves as the principal military advisor to the Secretary of Defense on critical infrastructure protection. The Transportation Defense Sector is made up of a worldwide network of DOD and non-DOD surface, sea, and air assets that the U.S. military relies on to move personnel and equipment. Currently, the Transportation Defense Sector consists of 300 critical air bases, seaports, and commercial airports worldwide, owned by DOD, other U.S. governmental organizations, private companies, and foreign governments. According to TRANSCOM officials, the Transportation Defense Sector is highly resilient because of significant redundancy among the various modes of transportation, particularly as it relates to surface transportation. For example, the size and capabilities of the U.S. rail and highway networks afford the ability to reroute shipments via alternate roads and rail lines in the event of disruptions, a key reason why surface transportation assets were not identified as critical. In addition to DCIP, DOD has established other complementary programs that help assure critical assets, including the Antiterrorism Program and the Defense Continuity Program. The Antiterrorism Program is intended to establish protection standards for DOD assets against terrorist attacks. The Defense Continuity Program is intended to ensure that DOD mission-essential functions continue under all circumstances, such as a man-made or natural disaster. DCIP supports a risk-management process that seeks to ensure defense critical infrastructure availability. The risk-management process comprises a risk assessment component that identifies critical assets and infrastructure interdependencies that support DOD missions. Applicable follow-on threat and vulnerability assessments are then conducted on those assets to complete the risk assessment. The risk response component ensures that limited resources are optimally allocated toward those assets deemed most important to overall mission success for DOD, and for which it has been determined that the identified level of risk is unacceptable (a simplified illustration of this logic appears in the sketch below). Several DOD organizations have key roles in helping assure the availability of DOD's transportation critical assets. The military services, defense agencies, and the combatant commands are responsible, in coordination with the sector lead agents, for identifying and assessing critical assets. The military departments, in their role as executive agent for the combatant commands, provide funding and resources for combatant command critical infrastructure programs. DOD Directive 3020.40 also states that sector lead agents are responsible for collaborating with other defense sector lead agents and DOD DCIP stakeholders to identify cross-sector interdependencies.
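Conceptually, the DCIP risk-management process described above combines criticality, threat, and vulnerability into a risk judgment and then directs limited resources to the assets facing unacceptable risk. The Python sketch below is a simplified, hypothetical illustration of that logic only; the multiplicative scoring, the 1-to-5 scales, the threshold, and the asset names are assumptions, not DOD's actual methodology.

    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        criticality: int    # importance to the mission (assumed 1-5 scale)
        threat: int         # assessed threat/hazard level (assumed 1-5 scale)
        vulnerability: int  # assessed vulnerability (assumed 1-5 scale)

        @property
        def risk(self) -> int:
            # A common simplification: risk as the product of the three factors.
            return self.criticality * self.threat * self.vulnerability

    # Hypothetical assets identified below the installation level (e.g., a
    # runway rather than an entire air base), consistent with the draft
    # identification manual's intent.
    assets = [
        Asset("runway", criticality=5, threat=3, vulnerability=4),
        Asset("fuel depot", criticality=4, threat=3, vulnerability=5),
        Asset("navigation aids", criticality=4, threat=2, vulnerability=2),
    ]

    RISK_TOLERANCE = 40  # assumed threshold for "unacceptable" risk

    # Risk response: rank assets and flag those whose risk exceeds tolerance.
    for asset in sorted(assets, key=lambda a: a.risk, reverse=True):
        action = "respond" if asset.risk >= RISK_TOLERANCE else "accept/monitor"
        print(f"{asset.name}: risk={asset.risk} -> {action}")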
According to ASD(HD&ASA) officials, TRANSCOM's methodology for identifying, prioritizing, and assessing its critical transportation assets is inconsistent with the intent of DOD's DCIP guidance and with the approach adopted by some of the other combatant commands and military services. TRANSCOM officials stated in May 2008 that they now plan to leverage the draft DOD Critical Asset Identification Process manual to reevaluate the command's currently identified critical transportation assets; however, a timeline to complete this reevaluation has not yet been established. Further, until recently, TRANSCOM relied on its vulnerability assessments to identify critical transportation assets, an action that also conflicted with established DOD guidance and practice. While TRANSCOM officials stated that they will discontinue the use of vulnerability assessments for identification purposes, they were unable to provide any documentation to ASD(HD&ASA) or us to confirm this decision officially. Moreover, its memorandum of understanding with the Joint Staff to participate as transportation subject matter experts on Joint Staff DCIP vulnerability assessments is still in draft. At the time of our review, TRANSCOM had identified 300 Tier 1 and Tier 2 critical transportation assets linked to its global mobility mission. TRANSCOM officials told us that they identified larger systems of assets—categorized as air bases, seaports, and commercial airports—based on their interpretation of the definition of an asset as outlined in DOD Directive 3020.40. TRANSCOM officials explained that these types of installations are part of its worldwide Defense Transportation System that is necessary to carry out TRANSCOM's missions. This broad list of assets has been submitted to the Joint Staff for inclusion in DOD's overall draft critical asset list. Because of TRANSCOM's interpretation of the guidance, its critical asset list lacks the specificity of the critical asset lists prepared by some of the other combatant commands and military services. Moreover, according to ASD(HD&ASA) officials, TRANSCOM's decision to identify entire installations was inconsistent with the intent of DCIP guidance. While TRANSCOM is not the only combatant command or military service to identify an entire installation as critical, it is the only organization that has done so for its entire list. DOD guidance requires combatant commands to first identify their missions, the critical assets that support those missions, and the threats and hazards to those critical assets, and then assess the vulnerability of the critical assets to the threats and hazards identified (see fig. 2). TRANSCOM skips steps two and three listed in figure 2 and instead has been using Transportation Infrastructure Vulnerability Assessments to identify specific critical assets. According to TRANSCOM officials, the identification of threats and hazards to critical assets (step 3) is incorporated in the conduct of vulnerability assessments (step 4), since Transportation Infrastructure Vulnerability Assessments specifically address vulnerability to all threats and hazards. ASD(HD&ASA) officials stated that when they began developing an overall DOD critical asset list, they told the combatant commands and military services that stopping the identification process for critical assets at the installation level is insufficient for the purposes of DCIP.
As a result of continued submission of entire installations as critical assets, ASD(HD&ASA) published the Strategy for Defense Critical Infrastructure in March 2008 to reiterate the need for greater specificity in critical asset identification. Further, ASD(HD&ASA) is developing the DOD Critical Asset Identification Process manual, which, though still in draft, notes that stopping the asset identification process at the system level (e.g., an air base, seaport, or commercial airport) does not meet the needs of DCIP and that rarely is an entire system essential to mission success. For example, it is insufficient to identify an air base as a critical asset; rather, more specific assets, such as a runway, should be identified as appropriate. Figure 3 illustrates the DCIP critical asset identification process and where TRANSCOM's previous efforts have stopped. TRANSCOM officials stated that because the DOD Critical Asset Identification Process manual was still in draft, they had initially chosen not to implement its contents until its formal publication. According to TRANSCOM officials, beginning in May 2008, TRANSCOM began to develop coordination methods to facilitate the use of the criteria in the draft DOD Critical Asset Identification Process manual for the identification and validation of assets prior to submitting them to the Joint Staff. TRANSCOM has recognized that producing a meaningful critical transportation asset list will require time; however, a timeline for completing this process has not yet been established. Complicating the process of identifying and prioritizing critical assets has been TRANSCOM's use of Transportation Infrastructure Vulnerability Assessments. Contrary to DCIP guidance, TRANSCOM has been using these vulnerability assessments, rather than the process outlined in that guidance, to identify specific critical assets. As a result, TRANSCOM officials could not tell us what specific transportation assets at a given site were critical, stating that, in the absence of a Transportation Infrastructure Vulnerability Assessment, it could be assumed, though not with certainty, that what was identified as critical at one location might be critical at another. For example, if a Transportation Infrastructure Vulnerability Assessment identified specific assets (such as a runway, navigation aids, or a fuel depot) at an air base as critical, it could be reasonably assumed that the same assets would probably be critical at other air bases. However, while TRANSCOM officials have stated that they will discontinue the use of vulnerability assessments for identification purposes, they were unable to provide any documentation to ASD(HD&ASA) or us to confirm this decision officially. Additionally, TRANSCOM's memorandum of understanding with the Joint Staff to serve as transportation subject matter experts for the enhanced DCIP module to the Joint Staff's Integrated Vulnerability Assessment when transportation assets are assessed remains in draft. At the behest of ASD(HD&ASA) in 2006, the Joint Staff began the process of creating a list of Tier 1 critical assets based on assets nominated and submitted by DOD organizations, including the combatant commands and the military services, using DCIP-approved criteria. The Joint Staff's list has gone through several iterations, and a subset of Tier 1 critical assets, known as Defense Critical Assets, will be selected by ASD(HD&ASA).
These Defense Critical Assets are of such extraordinary importance to DOD operations in peace, crisis, and war that their incapacitation or destruction would have a very serious, debilitating effect on the ability of DOD to fulfill its missions. TRANSCOM has not yet established a timeline to reevaluate critical transportation assets using the approved DCIP methodology. Until this reevaluation is completed, ASD(HD&ASA)'s ability to formulate a comprehensive Defense Critical Asset list that includes transportation assets and effectively targets spending for risk reduction efforts will be impeded. Figure 4 illustrates the types of specific critical transportation assets that TRANSCOM could identify below the installation (air base, seaport, and commercial airport) level. TRANSCOM plans to reevaluate its critical asset list using the DCIP-approved criteria, which is expected to result in a "significant reduction" of critical transportation assets. Although DOD established DCIP to help assure the availability of mission-critical infrastructure—including transportation assets—installation personnel were often unfamiliar with DCIP and unaware of the critical role specific transportation assets play in TRANSCOM's missions. This lack of awareness contributed to a singular focus on protecting personnel rather than on mission-critical assets. Installation officials responsible for critical transportation assets at the 22 sites we visited were often unaware of asset criticality because they were unfamiliar with DCIP; thus, DCIP's impact at these installations was negligible. While some efforts have been made to coordinate with both DOD and non-DOD entities, including the private sector, state and local governments, and foreign governments, to assure the availability of critical transportation assets at home and abroad, these coordination efforts have been conducted despite a lack of service-specific DCIP implementation guidance. According to officials at 17 of the 22 installations we visited, efforts at installations have mostly focused on protecting people through such actions as antiterrorism protection rather than on specific mission-critical transportation assets. At 18 of the 22 installations we visited, we found numerous complementary programs, such as the Antiterrorism and Chemical, Biological, Radiological, Nuclear, and high-yield Explosive Programs, and continuity of operations and emergency management planning. Officials responsible for assuring the availability of critical transportation assets at 20 of the 22 installations we visited told us that they had not heard of DCIP prior to our visit because (1) there is an absence of service-specific guidance that explains how to implement DCIP and (2) installation commanders rotate frequently (typically every 2 years), which can limit leadership continuity over DCIP at the installation level. Officials at 16 of the 22 installations we visited told us that they would have more vigorously advocated for resources to fund protection of critical assets had they been aware of an asset's criticality to TRANSCOM's mission. Without service-specific guidance to ensure that mission-critical assets are being protected, installations rely on other complementary programs in lieu of the all-hazards approach that DCIP requires.
Nearly all of the installations (18 of 22) we visited had coordinated with both DOD and non-DOD entities, including the private sector, state and local governments, and foreign governments, to help assure the availability of critical transportation assets at home and abroad. However, these coordination efforts have been performed independently of DCIP and, therefore, focus on protecting people and not on assuring the availability of mission-critical transportation assets. DOD DCIP guidance requires the combatant commands to coordinate with one another and with the military services and sector lead agents to identify and assess critical assets. At 21 of the 22 sites we visited, installation officials had taken steps to coordinate such efforts with DOD organizations on the installation and/or with the private sector, state and local communities, or with host nation officials. For example, at one air base we visited in Europe, installation officials conducted joint security patrols with host nation military officials and trained jointly with military and civilian firefighting personnel. Further, at 10 DOD installations we visited in the Pacific region, installation officials routinely coordinated with state, local, and foreign governments on emergency management planning or scenarios, such as typhoons and earthquakes. Such coordination efforts, however, do not directly assure the availability of specific critical assets in the wake of a natural or man-made disaster. To mitigate public works disruptions, personnel at 18 of the 22 installations we visited were coordinating with DOD organizations on the installation, as well as local, state, or host nation officials. Specifically, these installations had developed resiliency in supporting public works infrastructure, such as fuel and electric power sources, so that critical transportation assets remained operational in the event of an installation-wide disruption. For example, 18 of these installations have developed backup or alternative capabilities to mitigate the loss of electricity and fuel. For 17 of the 22 critical transportation assets we visited, installation personnel were coordinating with DOD tenant organizations on the installation and with host governments to maintain and sustain public works support for assets located on the facility. Most of the installations we visited (17 of 22) had emergency management plans and continuity of operations plans that accounted for the loss or degradation of supporting public works infrastructure located on or within the installation, although none of the plans specifically identified the critical transportation assets as high-priority assets vis-à-vis the installation's other assets. We also found that installation personnel at 18 of the 22 locations we visited frequently tested and maintained backup fuel and electric power sources and often included them in their emergency management planning exercises. Seventeen of these installations had developed prioritized facilities lists to determine which facilities or assets would receive priority for power restoration when power to the installation was interrupted. DOD has allocated approximately $283.3 million for critical asset assurance through DCIP from fiscal years 2004 to 2008. DCIP guidance requires combatant commands and sector lead agents to provide adequate resources to implement their DCIP responsibilities.
TRANSCOM has received approximately $8.6 million over this period to carry out its DCIP responsibilities, both as a combatant command and as the sector lead agent for the Transportation Defense Sector. In addition to these funds, critical transportation assets have also benefited indirectly from other DOD programs, such as the Antiterrorism Program, and from funding from foreign governments in countries where the United States maintains a military presence. Of the $8.6 million in total DCIP funding TRANSCOM has received from fiscal years 2004 to 2008, approximately $5.7 million has been used to carry out its combatant command responsibilities and approximately $2.9 million to implement its Transportation Defense Sector responsibilities. TRANSCOM, which receives its DCIP funding through the Air Force as its executive agent, has requested DCIP funding for fiscal years 2009 to 2013 totaling $9.4 million for its combatant command responsibilities and $4.1 million for its defense sector responsibilities. Although the Air Force has not established a dedicated DCIP funding account for itself, according to TRANSCOM officials, the Air Force has budgeted DCIP funding for TRANSCOM to perform its combatant command and defense sector responsibilities. Figure 5 depicts TRANSCOM's allocated and planned DCIP funding for its combatant command and defense sector responsibilities from fiscal years 2004 to 2013.

The assurance of critical transportation assets also benefits indirectly from other DOD sources, such as the Antiterrorism Program and the Combating Terrorism Readiness Initiative Fund. Among other things, the Antiterrorism Program provides a source of funding for installations to remediate vulnerabilities to transportation assets. Typically, remediation actions, such as improved security at entry control points or the hardening of a building to withstand an explosive blast, are taken to counter a perceived terrorist threat and do not explicitly consider other threats and hazards. Nonetheless, critical assets located within the installation or within a hardened building benefit as a result of these other efforts. Further, the Combating Terrorism Readiness Initiative Fund provides another mechanism to fund antiterrorism measures, which tangentially affects the assurance of critical transportation assets. In addition to other DOD programs, foreign countries that host the U.S. military fund initiatives that indirectly help assure critical transportation assets. For example, U.S. embassy officials estimate that one country we visited in U.S. Central Command's area of responsibility provides over $1 billion annually, and one country we visited in U.S. Pacific Command's area of responsibility contributes about $4.1 billion annually, in support of the U.S. military presence. In both instances, a portion of the funding contributed by these countries is used to safeguard installations containing critical transportation assets.

Until now, TRANSCOM's practice of designating entire air bases, seaports, and commercial airports as critical transportation assets has been inconsistent with DCIP guidance and with the approach adopted by some of the other combatant commands and military services to identify specific mission-critical assets. Recently, however, TRANSCOM decided to discontinue its current critical asset identification process in favor of the draft critical asset identification methodology.
TRANSCOM's decision will necessitate reevaluating the approximately 300 installations on its existing critical asset list—an undertaking that could potentially delay ASD(HD&ASA)'s issuance of the department's approved Defense Critical Asset List. Consequently, it is important for TRANSCOM to establish a timeline and key dates for the reevaluation process so that ASD(HD&ASA) can account for transportation assets in future iterations of the Defense Critical Asset List. Once this process is completed, ASD(HD&ASA) should have greater visibility over the full complement of mission-critical infrastructure and be better positioned to effectively remediate vulnerabilities to its most critical assets. While TRANSCOM officials have stated that they will discontinue the practice of using Transportation Infrastructure Vulnerability Assessments to identify specific critical transportation assets on the installations, they were not able to provide ASD(HD&ASA) or us with any documentation to confirm this decision officially. Lastly, until TRANSCOM finalizes its memorandum of understanding with the Joint Staff, it will not be able to define the roles and responsibilities of the transportation subject matter experts who would participate in Joint Staff vulnerability assessments with a DCIP module.

Although OSD issued department-wide guidance on critical infrastructure in 2005, knowledge of the program at the installation level—where critical transportation assets are located—is minimal because the military services have not yet developed their own implementation guidance. This lack of awareness has led installation officials to rely on other, more established programs to protect critical assets. While programs such as DCIP and the Antiterrorism Program do share some precepts, there are significant differences in the types of threats and hazards each program focuses on and in their emphasis on protection, resilience, and restoration of operations and assets. Until the military services issue guidance that installation personnel can use to implement local critical infrastructure programs, mission-critical assets may incur unintended risk.

We are making the following four recommendations to help assure the availability of critical assets in the Transportation Defense Sector. To enable decision makers within DOD to more effectively prioritize and target limited resources to reduce critical asset vulnerabilities, and to allow ASD(HD&ASA) to formulate a complete and accurate list of Defense Critical Assets, we recommend that the Secretary of Defense, through ASD(HD&ASA) and the Chairman of the Joint Chiefs of Staff, direct the Commander of TRANSCOM to take the following three actions:

• Fully implement the criteria, methodology, and process in the draft DOD Critical Asset Identification Process manual to reevaluate and update the identification of all critical transportation assets, and develop a timeline for doing so.

• Discontinue the use of Transportation Infrastructure Vulnerability Assessments as its primary tool for identifying its critical assets.

• Finalize its memorandum of understanding with the Joint Staff to enable TRANSCOM transportation subject matter experts to participate in the DCIP module of a Joint Staff vulnerability assessment.

To facilitate DCIP implementation at the installation level, we recommend that the Secretary of Defense direct the secretaries of the military departments to develop and implement service-specific guidance based on published DOD DCIP guidance.
In written comments on a draft of this report, which included three draft recommendations, DOD partially concurred with our recommendations. Also, TRANSCOM and U.S. Central Command provided us with technical comments, which we incorporated in the report where appropriate. DOD's comments are reprinted in appendix II.

In its written comments, DOD stated that it partially concurred with our recommendation that TRANSCOM fully implement the criteria, methodology, and processes outlined in the draft DOD Critical Asset Identification Process manual to reevaluate and update the identification of all critical transportation assets, and develop a timeline for doing so. DOD agreed with the recommendation and noted that TRANSCOM has already initiated implementation of the current draft manual as a means to reevaluate the identification of critical transportation assets. DOD stated that, consequently, TRANSCOM does not require additional ASD(HD&ASA) direction to do so. However, while TRANSCOM officials agreed during our review to begin reevaluating their critical assets using the established criteria in the draft manual, our recommendation also calls for TRANSCOM to develop a timeline for completing this action. DOD acknowledged in its written comments that while the draft manual provides a process for critical asset identification, it does not yet provide timelines for the various milestones. DOD's comments stated that ASD(HD&ASA) will work with the various components to establish timelines, but DOD estimated that the manual will require approximately 1 year to complete and will require timely cooperation and participation by numerous stakeholders. We believe that establishing these timelines is essential so that TRANSCOM can reevaluate and update the identification of all critical transportation assets in a timely manner.

DOD partially concurred with our draft recommendation that TRANSCOM finalize the memorandum of understanding with the Joint Staff to discontinue the use of Transportation Infrastructure Vulnerability Assessments as its primary tool for identifying its critical assets. In its written comments, DOD noted that this recommendation contained two separate issues: (1) the discontinuation of the Transportation Infrastructure Vulnerability Assessments as a means to identify critical assets and (2) the finalization of a memorandum of understanding between TRANSCOM and the Joint Staff. DOD noted in its written comments that the purpose of the memorandum of understanding is to define the roles and responsibilities of transportation subject matter experts who augment the enhanced DCIP module, rather than to discontinue the use of the Transportation Infrastructure Vulnerability Assessments. In response to DOD's comments, and to reflect this distinction, we split this draft recommendation into two recommendations rather than one. DOD also stated that no additional direction on ASD(HD&ASA)'s part is required because TRANSCOM has already taken steps to address both of these issues. As noted in our report, however, TRANSCOM officials were unable to provide ASD(HD&ASA) or us with any documentation to confirm that they have discontinued the use of the Transportation Infrastructure Vulnerability Assessments. TRANSCOM's discontinuation of the Transportation Infrastructure Vulnerability Assessments as a means of identifying critical transportation assets and its adoption of the manual's methodology are both key to TRANSCOM's ability to provide DOD with an accurate list of critical transportation assets.
Further, while we recognize that TRANSCOM has taken steps to coordinate with the Joint Staff to define its roles and responsibilities for the DCIP module of the Joint Staff Integrated Vulnerability Assessment, the memorandum of understanding remains in draft. Timely completion of the draft memorandum of understanding is important so that TRANSCOM's expertise can be adequately leveraged in future vulnerability assessments of critical transportation infrastructure. Therefore, we believe this recommendation remains valid.

Finally, DOD partially concurred with our recommendation to develop and implement service-specific guidance based on published DOD DCIP guidance. In its written response, DOD stated that the Army has already developed and is implementing service-specific guidance, and it noted that the military departments prefer to wait for the official publication of the draft DOD Critical Asset Identification Process manual before implementing service-specific guidance. We acknowledge the Army's efforts and recognize that other military services may prefer to wait until the manual is published before they implement service-specific guidance. However, our recommendation is based on the entire body of DOD's DCIP guidance—not just the draft DOD Critical Asset Identification Process manual, which is focused primarily on the identification of critical assets and will take at least another year to complete. In our view, service-specific DCIP guidance should be issued promptly based on DOD Directive 3020.40 and DOD Instruction 3020.45, which have been finalized at the OSD level. In the absence of timely service-specific DCIP guidance, installation personnel will continue to rely primarily on antiterrorism plans instead of on an all-hazards approach to remediate, mitigate, or otherwise reduce the vulnerabilities to critical transportation infrastructure.

As agreed with your offices, we are sending copies of this report to the Chairmen and Ranking Members of the Senate and House Committees on Appropriations, the Senate and House Committees on Armed Services, and other interested congressional parties. We also are sending copies of this report to the Secretary of Defense; the Secretary of Homeland Security; the Secretary of State; the Chairman of the Joint Chiefs of Staff; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Combatant Commanders of the functional and geographic combatant commands; the Commander, U.S. Army Corps of Engineers; and the Director, Office of Management and Budget. We will also make copies available to others upon request.

If you or your staff have questions concerning this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
To conduct our review of the Department of Defense's (DOD) efforts to assure the availability of critical assets in the Transportation Defense Sector, we obtained relevant documentation and interviewed officials from the following DOD organizations:

Office of the Secretary of Defense
• Under Secretary of Defense (Comptroller)/Chief Financial Officer
• Assistant Secretary of Defense for Homeland Defense and Americas' Security Affairs (ASD[HD&ASA])
• Joint Staff, Directorate for Operations, Antiterrorism and Homeland Defense
• Defense Threat Reduction Agency, Combat Support Assessments
• Department of the Army, Asymmetric Warfare Office, Critical
• Office of the Chief Information Officer
• Mission Assurance Division, Naval Surface Warfare Center, Dahlgren Division, Dahlgren, Virginia
• Department of the Air Force, Air, Space and Information Operations, Plans, and Requirements, Homeland Defense Division
• Headquarters, U.S. Marine Corps, Security Division, Critical
• Headquarters, U.S. Central Command, Critical Infrastructure Program Office, MacDill Air Force Base, Florida
• Headquarters, U.S. European Command, Critical Infrastructure Protection Program Office, Patch Barracks, Germany
• Headquarters, U.S. Pacific Command, Antiterrorism and Critical Infrastructure Division, Camp H.M. Smith, Hawaii
• U.S. Forces Japan
• Headquarters, U.S. Transportation Command (TRANSCOM), Critical Infrastructure Program, Scott Air Force Base, Illinois
• Headquarters, Air Mobility Command, Homeland Defense Branch, Scott Air Force Base, Illinois
• Headquarters, Military Sealift Command, Force Protection Office
• Headquarters, Surface Deployment and Distribution Command, Scott Air Force Base, Illinois
• Headquarters, Transportation Engineering Agency, Scott Air Force Base, Illinois

Defense Infrastructure Sector Lead Agents
• Headquarters, U.S. Transportation Command, Critical Infrastructure Program, Scott Air Force Base, Illinois
• Headquarters, U.S. Army Corps of Engineers, Directorate of Military Programs

In addition, we visited selected critical assets in the continental United States, Hawaii, the U.S. Territory of Guam, Germany, Greece, Kuwait and another country in U.S. Central Command's area of responsibility, and Japan.

We also met with officials from the Department of Homeland Security, Infrastructure Information Collection Division, to discuss the extent to which DOD was coordinating with the Department of Homeland Security on the protection of non-DOD-owned defense critical assets in the Transportation and Public Works Defense Sectors. Further, to become more familiar with additional work being conducted on defense critical infrastructure, we met in Arlington, Virginia, with officials from the George Mason University School of Law's Critical Infrastructure Protection Program and, in Washington, D.C., with the Congressional Research Service (Resources, Science, and Industry Division).

We drew a nonprobability sample of critical transportation assets located in the United States and abroad, using several critical asset lists developed by the Joint Staff, each of the four military services, and TRANSCOM. The assets we selected for review were initially drawn from the Joint Staff's list of Tier 1 critical transportation assets; however, that list includes only 4 Tier 1 critical transportation assets worldwide. To increase the size of our sample, we used TRANSCOM's Tier 1 and Tier 2 critical asset lists, which together total 300 critical assets. Further, we analyzed critical asset lists from each of the four military services for overlap with TRANSCOM's critical asset list.
From this, we selected 22 assets for review, providing geographic dispersion across two countries in each geographic region (Europe, the Middle East, and the Pacific). We also selected assets from each military service and assets representative of the three principal types identified by TRANSCOM—air bases, seaports, and commercial airports. Our cases for review included two of the four Tier 1 critical transportation assets. The specific assets we reviewed, their locations, and the missions that they support are omitted from this appendix because that information is classified. Figure 6 shows the methodology we used to select the critical transportation assets for review. Table 1 shows a breakout of the critical transportation assets selected, by geographic combatant command. Because the Joint Staff list of Tier 1 critical assets does not include critical assets from the Public Works Defense Sector, for the purposes of this report we treat public works assets as supporting infrastructure. For the critical transportation assets that we selected, we also spoke with the asset owners and operators about their reliance on the public works assets that support the critical assets.

To evaluate TRANSCOM's efforts to identify and assess its critical transportation assets, we reviewed documentation and guidance and met with officials from ASD(HD&ASA), the Joint Staff, the military services, and TRANSCOM. We analyzed critical asset identification criteria and guidance and compared the guidance with current asset identification efforts. In addition, we spoke with DOD installation and U.S. embassy personnel to discuss their involvement with various DOD critical asset data calls and other efforts they participated in to identify critical assets. We reviewed TRANSCOM's Transportation Infrastructure Vulnerability Assessments for the assets we selected for review to determine whether specific critical transportation assets below the installation level were identified. We also attempted to match the critical assets identified through TRANSCOM's vulnerability assessments with assets listed on TRANSCOM's critical asset list.

To determine the extent to which DOD installation personnel have taken actions to help assure the availability of critical transportation assets, both within and independent of DCIP, we reviewed DOD guidance on risk management and other complementary programs. In addition, we reviewed and analyzed installation emergency management plans and continuity of operations plans to determine how, if at all, critical assets were incorporated. We also interviewed combatant command, subcomponent, and installation personnel responsible for assuring the availability of critical transportation assets to ascertain the adequacy of guidance, assessments, inspections, funding, and other processes intended to enhance asset availability. Finally, we assessed the supporting public works infrastructure for the 22 assets we selected for review to determine its impact on the availability of the critical assets.

To determine how DOD is funding critical transportation asset assurance, we reviewed and analyzed DCIP funding data, and we interviewed officials from the Office of the Under Secretary of Defense (Comptroller)/Chief Financial Officer. Additionally, we interviewed officials from ASD(HD&ASA) and TRANSCOM to verify that the funding data were comprehensive and reflected DCIP funding from all sources. Further, we interviewed installation officials; personnel from U.S. Forces Japan, U.S.
European Command, U.S. Central Command, and U.S. Pacific Command; and U.S. embassy officials in Kuwait, in another country in U.S. Central Command's area of responsibility, and in Japan regarding other sources of funding. These sources include other complementary programs and host nation contributions that indirectly contribute to the assurance of critical transportation assets. We found the data provided by DOD to be sufficiently reliable for representing the nature and extent of DCIP funding.

We conducted this performance audit from May 2007 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Mark A. Pross, Assistant Director; Jon K. Bateman; Gina M. Flacco; James P. Krustapentus; Kate S. Lenane; Danielle Pakdaman; Terry L. Richardson; Marc J. Schwartz; John S. Townes; Cheryl A. Weissman; and Alex M. Winograd made key contributions to this report.

Defense Critical Infrastructure: Additional Air Force Actions Needed at Creech Air Force Base to Ensure Protection and Continuity of UAS Operations. GAO-08-469RNI. Washington, D.C.: April 23, 2008 (For Official Use Only).

Defense Critical Infrastructure: DOD's Risk Analysis of Its Critical Infrastructure Omits Highly Sensitive Assets. GAO-08-373R. Washington, D.C.: April 2, 2008.

Defense Infrastructure: Management Actions Needed to Ensure Effectiveness of DOD's Risk Management Approach for the Defense Industrial Base. GAO-07-1077. Washington, D.C.: August 31, 2007.

Defense Infrastructure: Actions Needed to Guide DOD's Efforts to Identify, Prioritize, and Assess Its Critical Infrastructure. GAO-07-461. Washington, D.C.: May 24, 2007.
The Department of Defense (DOD) established the Defense Critical Infrastructure Program (DCIP) to assure the availability of mission-critical infrastructure, including the surface, sea, and air transportation assets needed to carry out its missions. GAO was asked to evaluate (1) the extent to which the U.S. Transportation Command (TRANSCOM) has identified, prioritized, and assessed critical transportation assets; (2) the extent to which DOD installation personnel have taken actions to help assure the availability of critical transportation assets, both within and independent of DCIP; and (3) how DOD is funding critical transportation asset assurance. GAO examined a nonprojectable sample of 22 critical transportation assets, reviewed relevant DOD guidance and documents, and interviewed cognizant officials.

TRANSCOM has taken some actions to identify, prioritize, and assess its critical transportation assets, but, according to officials from the Office of the Assistant Secretary of Defense for Homeland Defense and Americas' Security Affairs (ASD[HD&ASA]), its methodology for doing so has, until recently, been inconsistent with the intent of DOD's various DCIP guidance and with the approach adopted by some of the other combatant commands and military services. TRANSCOM considers entire installations—military air bases, seaports, and commercial airports—as critical assets, rather than identifying assets with greater specificity, such as individual runways, navigation aids, and fuel storage facilities. This methodology diminishes the reliability of the critical transportation asset list, a condition that impedes DOD's ability to prioritize its critical assets departmentwide and to effectively target spending on risk-reduction efforts. Further, TRANSCOM was using its vulnerability assessments to identify specific critical transportation assets on the installations, a practice that conflicts with DOD DCIP guidance, which directs that vulnerability assessments not be used to identify critical assets. Though TRANSCOM officials stated that they now plan to discontinue this practice, they were unable to provide ASD(HD&ASA) or GAO with any documentation confirming that this decision had been made officially. Further, TRANSCOM's memorandum of understanding with the Joint Staff to participate as transportation subject matter experts on the Joint Staff's vulnerability assessments with a DCIP module is still in draft. In May 2008, TRANSCOM officials told GAO that they now plan to use the draft DCIP critical asset identification process to reevaluate TRANSCOM's 300 identified critical transportation assets; however, a timeline for completing this reevaluation has not yet been determined.

DOD installation personnel at the 22 sites GAO visited have taken actions to help assure the availability of critical transportation assets; however, these actions have routinely occurred independent of DCIP. Consequently, they do not consider the full spectrum of threats and hazards, and they tend to focus on preventing mass personnel casualties rather than on critical asset assurance. DCIP's impact at the installations where the assets are located was negligible because of the lack of service-specific guidance. This gap in guidance hinders installation personnel's ability to make informed risk management decisions based on asset criticality. Coordination efforts between installation personnel and non-DOD owners of critical transportation assets and supporting public works infrastructure were substantial but have focused on the protection of people rather than on asset assurance.
DOD has allocated approximately $283 million for DCIP from fiscal years 2004 to 2008, including $8.6 million to TRANSCOM for its combatant command and defense sector responsibilities. Critical infrastructure assurance efforts have also been funded through other, complementary DOD programs, such as the Antiterrorism Program, and through foreign government contributions. Although existing DCIP funding does not cover the remediation of asset vulnerabilities, remediation has been funded from these other sources.
Helium is an inert element that occurs naturally in gaseous form and has a variety of uses (see table 1). Helium's many uses arise from its unique physical and chemical characteristics. For example, helium has the lowest melting and boiling points of any element and, as the second lightest element, gaseous helium is much lighter than air. Certain natural gas fields contain a relatively large amount of naturally occurring helium, which can be recovered as a secondary product. The helium is separated from the natural gas and stored in a concentrated form that is referred to as crude helium because it has yet to go through the final refining process. The federal government has a reserve of crude helium that is stored in the ground in an area of a natural gas field that has a naturally occurring underground structural dome near Amarillo, Texas. In addition to the federal government's reserve of crude helium, private companies that are connected to BLM's pipeline and pay a storage fee are also able to store and retrieve their own private crude helium reserves from the same storage area.

The federal government has been extensively involved in the production, storage, and use of helium since the early part of the twentieth century. The federal government and private sector cooperatively produced helium before 1925, specifically for military uses. The Helium Act of 1925, as amended, assigned responsibility for producing helium for federal users to Interior's Bureau of Mines. From 1937 until 1960, the Bureau of Mines was the sole producer of helium. The act provided that funds from helium sales be used to finance the program by establishing a revolving fund known as the helium production fund. Such revolving funds are used to finance a cycle of business-type operations by charging for the sale of products and then using the proceeds to finance their spending. In the federal budget, this fund is referred to as the Helium Fund, and it is used to account for the program's revenues and expenses.

The Helium Act Amendments of 1960 stipulated that the price of federal helium cover all of the helium program's costs, including interest on the program's debt. The 1960 act required the Secretary of the Interior to determine a value for net capital and retained earnings, establish this value as debt in the Helium Fund, and add subsequent program borrowings to that debt. The program's borrowings were authorized by subsequent appropriations acts and recorded as outlays in the federal budget in the years in which they were expended. In addition, the interest was added to the debt in the Helium Fund. However, this interest is simply a paper transaction, not a government outlay. The Bureau of Mines determined that the value of the program's net capital and retained earnings was about $40 million in 1960. Subsequent borrowings from the U.S. Treasury totaling about $252 million were used to purchase helium for storage. By September 30, 1991, the debt had grown to about $1.3 billion, of which more than $1 billion consisted of interest, because the interest accrued faster than the program could repay the debt.

The Helium Privatization Act of 1996 significantly changed the objectives and functions of Interior's helium program. For example, the 1996 act made the following key changes:

• Interior was required to close all government-owned refined helium production facilities and to terminate the marketing of refined helium within 18 months of enactment (50 U.S.C.
§ 167b(b),(c));

• the helium program's debt was frozen as of October 1, 1995 (50 U.S.C. § 167d(c));

• Interior was required to offer for sale all but 600 million cubic feet of the crude helium in storage on a straight-line basis—a method analogous to straight-line depreciation, which spreads out the cost of an asset equally over its lifetime—by January 1, 2015 (50 U.S.C. § 167f(a)(1));

• Interior was required to set sale prices to cover the crude helium reserve's operating costs and to produce an amount sufficient to repay the program's debt. The price at which Interior sells crude helium was required to be equal to or greater than a formula price that incorporates the amount of debt to be repaid divided by the volume of crude helium remaining in storage, with a consumer price index adjustment (50 U.S.C. §§ 167d(c), 167f(a)(3)); this floor is restated in equation form below. Furthermore, when the debt is fully paid off, the revolving Helium Fund is to be terminated (50 U.S.C. § 167d(e)(2)(B));

• Interior was allowed to maintain its role in the helium storage business (50 U.S.C. § 167b(a)); and

• the act established a modified "in-kind" program to meet federal needs for helium. Rather than purchasing refined helium directly from Interior, federal agencies were required to purchase their major helium requirements from persons who have entered into enforceable contracts to purchase an equivalent amount of crude helium from Interior (50 U.S.C. § 167d(a)).

As directed by Congress, the National Academies' National Research Council reviewed the helium program and released a report in 2000 that evaluated the changes made in the program, the effects of these changes on the program, and several scenarios for managing the federal government's reserve of helium in the future. In response to subsequent changes in the price and availability of helium, the National Research Council convened a committee in 2008 to determine whether the current implementation of the helium program was having an adverse effect on U.S. scientific, technical, biomedical, and national security users of helium. The committee reported on these effects in early 2010 and concluded that the current implementation of the program had adversely affected critical users of helium and was not in the best interest of U.S. taxpayers or the country.

Since our reports in the early 1990s, the Helium Privatization Act of 1996 has brought considerable changes to the helium program and addressed or altered our prior concerns. In October 1992, we reported on various aspects of the federal helium program, including the helium debt, pricing, and alternatives for meeting federal helium needs, and we recommended that Congress cancel the helium program's debt. As of September 1991, the debt had grown to about $1.3 billion, over $1 billion of which was interest that had accrued on the original debt principal of about $290 million. At that time, the deadline for paying off the debt was 1995. We reported that the only way to pay off the debt by that deadline would be to charge federal agencies with major requirements for helium over $3,000 per thousand cubic feet of helium, compared with the price at that time of $55. We recommended that Congress cancel the debt in the Helium Fund because it was no longer realistic to expect the debt to be repaid by the statutory deadline of 1995, and because canceling the debt would not adversely affect the federal budget, as the debt consisted of outlays that had already been appropriated and interest that was a paper transaction.
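The act's price floor and straight-line sell-down requirements cited above are stated in narrative form. The following is a minimal sketch of one plausible reading, not the statutory text: the symbols are our own shorthand, with D denoting the remaining debt to be repaid, V(t) the volume of crude helium remaining in storage, and the CPI ratio standing in for the act's consumer price index adjustment relative to its base period; the exact computation is governed by 50 U.S.C. §§ 167d(c) and 167f(a)(3).

\[
P_{\text{sale}}(t) \;\ge\; \frac{D}{V(t)} \times \frac{\mathrm{CPI}(t)}{\mathrm{CPI}_{\text{base}}}
\]

Under the same reading, the straight-line requirement implies offering roughly equal volumes each year, where V_{1996} denotes the volume in storage at enactment:

\[
Q_{\text{offered per year}} \;\approx\; \frac{V_{1996} - 600 \text{ million cubic feet}}{\text{years remaining until January 1, 2015}}
\]

One consequence of the floor, which this statement returns to below, is that once the debt is fully repaid (D = 0), the debt-based component of the minimum price vanishes, leaving only the requirement that prices cover the reserve's operating costs.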
The 1996 act did not cancel the debt, as we had recommended, but because the act effectively froze the debt at $1.37 billion and interest no longer accrued, BLM has been able to pay off a large portion of its debt. As of the end of fiscal year 2012, BLM had $44 million in debt remaining, which, according to BLM officials, it expects to pay off this year (see fig. 1).

The helium debt was also a factor in setting the price of federal helium. In 1992, GAO recognized that if the helium debt was cancelled, Congress might wish to propose a new pricing scheme. The 1996 act did not cancel the debt, as we had recommended, but it did require a specific method for pricing crude helium. The initial minimum BLM selling price for crude helium after the act was passed was almost double the price for private crude helium at that time. However, after BLM started to sell its crude helium according to the method specified in the act, the market price for crude and refined helium began to change. According to the National Research Council, the private sector began using the BLM crude price as a benchmark for establishing its own prices; as a result, privately sourced crude helium prices increased and now meet or exceed BLM's price. Increases in the price of crude helium have also led to increases in the price of refined helium (see fig. 2). Refined helium prices more than tripled from 2000 through 2012, in line with these price and demand trends.

In 1992, GAO recommended that Congress reassess the conservation objectives of the helium program and consider other alternatives to meet federal helium needs. As part of the resetting of the helium program's objectives, the 1996 act established a revised approach for meeting federal needs for helium. In 1998, BLM began using in-kind sales to federal agencies. The in-kind regulations established procedures for BLM to sell crude helium to authorized helium supply companies and required federal agency buyers to purchase helium from these approved suppliers. In-kind sales to federal agencies have fluctuated, primarily due to the National Aeronautics and Space Administration's unique requirement for large volumes of helium on a sporadic basis. Total federal in-kind sales for fiscal year 2012 were 160.67 million cubic feet (see fig. 3).

As we testified in 2010, changes in helium prices, production, and demand have generated concerns about the future availability of helium for the federal government and other critical purposes. The Helium Privatization Act of 1996 does not provide a specific direction for the helium program past 2015. As a result of these factors, in 2010 we identified three areas of uncertainty about the program's direction after 2015. The same three areas are even more urgent today because 3 years have passed since our 2010 testimony, and BLM's schedule for paying off the program's debt has accelerated. Specifically, the three urgent issues are as follows:

How will the helium program be funded after 2013? If the helium program's debt is paid off this year, as expected, and the revolving Helium Fund is terminated, it is not clear how the operations of the helium program will be paid for. Currently, the helium program does not receive any appropriated funds for its operations. The revenues generated by the program go into the Helium Fund, and the program has access to those funds to pay for its day-to-day operations. It is uncertain at this point how the helium program's operations will be funded after 2013.
BLM is still evaluating possible options, but it may have to undertake an orderly shutdown of the helium reserve unless the revolving fund is retained or appropriated funds are made available for crude helium sales and the operations of the reserve. When we last testified on this issue, the estimated payoff date was 5 years away, in 2015, and it was more closely aligned with the 1996 act's requirement to sell down the helium reserve by January 1, 2015. The debt payoff schedule has accelerated primarily because of improved sales of the crude helium offered for sale. As a result, BLM's helium program will be without a funding mechanism for its continued operation from the time the debt is paid off until 2015. Furthermore, because of some years of slow sales, BLM estimates that it will need to continue helium sales from the reserve until sometime between 2018 and 2020 to reach the 1996 act's requirement to draw the reserve down to 600 million cubic feet.

At what price should BLM sell its crude helium? Since the Helium Privatization Act of 1996 was passed, BLM has set the price for federal crude helium at the minimum price required by the act. However, because the federal crude helium reserve provides a major supply of crude helium, we expect that BLM's prices will continue to affect private industry market prices for crude and refined helium. When BLM first set its price after the 1996 act, its price was estimated to be significantly higher than the market price, but now the reverse is true—BLM's price for crude helium is estimated to be at or below the market price for refined helium. The 1996 act, like the Helium Act Amendments of 1960 before it, tied the price to the program's operating expenses and debt. If the debt is paid off in 2013, as projected, the debt will no longer be a factor in setting helium prices. BLM officials told us that the 1996 act sets a minimum selling price and that the Secretary of the Interior has the discretion to set a higher price. In response to a recommendation in the National Research Council's 2010 report, BLM implemented a new two-tiered pricing system beginning in fiscal year 2011. Under the new pricing system, in-kind sales involving federal agencies continue to be based on the minimum selling price set in the 1996 act, while sales to nongovernmental entities are charged a higher price based on debt repayment and cost recovery factors. The new pricing system, however, is still not a market-based pricing system. In November 2012, Interior's Office of Inspector General recommended that BLM implement a new helium pricing process by the end of 2013 to ensure a fair return on the sale of helium.

How should the helium remaining in storage after 2015 be used? The Helium Privatization Act of 1996 required BLM to offer for sale substantially all of the helium in storage by January 1, 2015. While the required amounts have been offered for sale, only 79 percent of the amounts offered have actually been sold (see table 2). BLM will therefore likely still have significantly more crude helium in storage than the 600 million cubic feet contemplated by the 1996 act. As of September 30, 2012, there were 11.44 billion cubic feet of conservation helium in storage. According to the 2010 report by the National Academies' National Research Council, the United States could become a net importer of helium within the next 7 to 12 years, and the principal new sources of helium will be in the Middle East and Russia.
Given these circumstances, the National Academies' report suggested that Congress may want to reevaluate how the domestic crude helium reserve is used or conserved. It is uncertain at this point how the helium still remaining in storage after January 1, 2015, will be used.

In conclusion, Mr. Chairman, there have been a number of changes in the market for helium since Congress passed the Helium Privatization Act of 1996. As the deadline for the required actions to be taken under this act approaches, Congress may need to address some unresolved issues, such as how the helium program will operate once the Helium Fund expires at the end of this year, how to set the price for the helium owned by the federal government, and how to use the remaining helium in storage.

Chairman Hastings, Ranking Member Markey, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time.

For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition, Jeff Malcolm (Assistant Director), Carol Bray, Leslie Pollock, and Jeanette Soares made key contributions to this testimony.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government has been extensively involved in the production, storage, and use of helium since the early part of the twentieth century. The federal helium program is currently managed by the Department of the Interior's BLM. During the 1960s and early 1970s, Interior purchased about 34 billion cubic feet of crude helium for conservation purposes and to meet federal helium needs, such as those of the space program and scientific research. Crude helium is a gas consisting of 50 to 85 percent helium. While some of this helium was used to meet federal needs, most of it was retained in storage. The funds used to purchase this helium became a debt owed by the program. BLM now sells crude helium from the reserve, and the proceeds go into the revolving Helium Fund, which is used to finance the program and pay off the program's debt.

GAO reported on the management of the helium program in the 1990s (GAO/RCED-92-44 and GAO/RCED-93-1). Since GAO's reviews of the program in the 1990s, key changes have affected the program, and a 2010 report by the National Academies' National Research Council concluded that it is time to reassess the program. GAO testified on the helium program in May 2010 (GAO-10-700T). This testimony updates GAO's May 2010 testimony and discusses (1) how the Helium Privatization Act of 1996 addressed issues raised by GAO in the 1990s and (2) three urgent issues facing the helium program in the near future. GAO is not making any recommendations in this testimony.

Since GAO's reports in the early 1990s, the Helium Privatization Act of 1996 has brought considerable changes to the helium program and addressed or altered GAO's prior concerns. In 1992, GAO reported on various aspects of the federal helium program, including the helium debt, pricing, and alternatives for meeting federal helium needs.

Helium debt. In 1992, GAO recommended that Congress cancel the helium program's debt, since doing so would not adversely affect the federal budget: the debt consisted of outlays that had already been appropriated and interest that was a paper transaction. As of September 1991, this debt had grown to about $1.3 billion, over $1 billion of which was interest that had accrued on the original debt principal of about $290 million. The 1996 act did not cancel the debt as GAO had recommended, but it did freeze the growth of the program's debt and, as a result, the debt should be paid off this year.

Helium pricing. The helium debt was also a factor in setting the price of federal helium. In 1992, GAO recognized that, if the helium debt was cancelled, Congress might need to propose a new pricing scheme. The 1996 act requires a specific method for pricing helium. This requirement, along with other changes in the supply of and demand for helium, has resulted in the Bureau of Land Management's (BLM) price being at or below the market price.

Alternatives for meeting federal helium needs. In 1992, GAO recommended that Congress reassess the conservation objectives of the helium program and consider other alternatives to meet federal helium needs. In resetting the program's objectives, the 1996 act directed Interior to stop refining helium and established a modified in-kind approach for meeting federal helium needs: agencies must purchase helium from refiners that then purchase an equivalent amount of crude helium from BLM.

Changes in the helium market have generated concerns about the future availability of helium for federal and other needs.
The Helium Privatization Act of 1996 did not provide a specific direction for the federal helium program past 2015. Three urgent issues facing the program are as follows:

How will the helium program be funded after 2013? If the helium program's debt is paid off this year, as expected, the revolving Helium Fund will be terminated as required by the 1996 act. When GAO last testified on this issue, the estimated payoff date was 5 years away, in 2015. The schedule has accelerated primarily because of improved crude helium sales.

At what price should BLM sell its helium? In the past, the debt has been a factor in the price, and the price has been above the market price. After 2013, the debt will be paid off, and the current price is at or below market.

How should the helium owned by the federal government be used? BLM's effort to sell off the excess helium in storage will not be completed by January 1, 2015, as required by the 1996 act. As of September 30, 2012, there were 11.44 billion cubic feet of conservation helium in storage. After BLM is finished drawing down the reserve, some believe that the United States could become a net importer of helium.
OPAP was established by the Secretary of State following the August 1998 bombings of U.S. embassies in Nairobi, Kenya, and Dar es Salaam, Tanzania. The panel was formed to consider the future of U.S. overseas representation, to appraise its condition, and to develop practical recommendations on how best to organize and manage embassies and consulates. Citing weaknesses in security, infrastructure, technology, human capital, and management, OPAP concluded that the U.S. overseas presence was "perilously close to the point of system failure." OPAP made recommendations in eight areas, including getting the size and location of the U.S. overseas presence right. A key OPAP theme stressed that a rightsizing process should consider the relationship between embassy size and security. Specifically, OPAP recommended that rightsizing be used to reduce the number of people at risk overseas. OPAP made five additional recommendations regarding the size and location of overseas posts:

• Rightsize the U.S. overseas presence; reduce the size of some posts, close others, reallocate staff and resources, and establish new posts where needed to enhance the American presence where the bilateral relationship has become more important.

• Form a new Interagency Overseas Presence Committee—a permanent committee to regularly adjust U.S. presence to U.S. goals and interests.

• Adopt explicit criteria to guide size and location decisions.

• Support the concept of small posts.

• Encourage ambassadors to initiate rightsizing.

OPAP also recommended that some administrative services be performed at regional centers or in the United States—actions that would lessen the need for administrative staff at some posts, thereby reducing security vulnerabilities.

In February 2000, President Clinton directed the Secretary of State to lead an interagency effort to implement OPAP's recommendations. In a March 2000 report to the Congress, the Department of State said that the interagency committee planned to complete pilot studies by June 2000 to assess staffing levels, to recommend necessary changes at the study posts, and to develop decision criteria applicable to subsequent rightsizing reviews to be conducted at all overseas posts over a 5-year period. State anticipated that reviews at half the posts (about 130 posts) would be completed within 2 years.

In early 2000, State organized an interagency rightsizing committee representing key agencies, including the Departments of Agriculture, Commerce, Defense, Transportation, Energy, Justice, the Treasury, and State; the intelligence community; and the U.S. Agency for International Development (USAID). Pilot studies were conducted at six embassies—Amman, Jordan; Bangkok, Thailand; Mexico City, Mexico; New Delhi, India; Paris, France; and Tbilisi, Georgia, from March to May 2000. Teams with representatives from State, the intelligence community, Defense, Justice, USAID, and the Treasury visited all six posts; officials from other agencies made some of the trips. These embassies were selected because of the complexity of their missions and because they represented broad geographical and agency coverage. The Department of State told us that the interagency teams did not have written guidelines. Moreover, according to agency representatives who participated in the studies, the teams did not systematically assess staffing at the pilot posts.
According to the former interagency committee leader, the teams attempted to use the criteria that OPAP suggested for making staffing decisions but found that the criteria were too broad to guide determinations on specific post size. Prior to travel, the teams reviewed each embassy's Mission Performance Plan describing its objectives and priorities. In addition, the Department of State directed the teams to draft, as a discussion guide, a list of general questions linking staffing to the goals and objectives laid out in each embassy's Mission Performance Plan. At each embassy, the teams received a briefing from the ambassador and then concentrated on interviewing key agency representatives to obtain information and opinions on agencies' staffing levels and workload.

The teams spent a few days at each post. For example, a team was in Tbilisi for 2 days, in Paris for about 3 days, and in Mexico City for 5 days. Some team members and representatives of the interagency rightsizing committee told us that 2 to 5 days at an embassy was too little time to permit detailed analysis of workload or to fully explore alternative ways of conducting business, such as regionalizing operations or outsourcing administrative functions. This is partly attributable to the size and complexity of embassy operations at the posts visited. Four of the embassies—Bangkok, Mexico City, New Delhi, and Paris—are among the largest and most complex in the world. Though smaller, the remaining two embassies both have substantial numbers of U.S. and foreign national employees from multiple agencies. The ambassador who led three of the pilot studies told us that a comprehensive review of staff levels would take much longer than the 2 to 5 days the teams spent at the embassies, and that the pilot studies were not designed for that purpose. However, he believed that the length of the visits was sufficient to identify functions that warranted additional study to determine whether staffing levels should be adjusted.

The interagency committee's June 2000 report to the Under Secretary of State summarizing the results of the pilot studies concluded that it was impractical to develop a staffing methodology applicable to all posts, as OPAP had recommended, because no two posts are sufficiently similar. In addition, the report questioned the need for additional rightsizing of overseas posts, stating that agencies had adjusted staff levels during the 1990s in response to budget constraints to ensure that only the most essential overseas functions were performed. As a result, the report concluded that agencies had already performed rightsizing. The report also concluded that the planned rightsizing reviews of additional posts over 5 years should not be conducted, as the benefits of rightsizing might not outweigh the costs of conducting the reviews.

Regarding OPAP's recommendation to establish an interagency board to review staff levels at overseas posts, the committee's report concluded that an interagency advisory board could be helpful as a forum to discuss programmatic issues with major overseas staffing implications and to provide informal and nonbinding advice to agencies and ambassadors. However, some agencies opposed the establishment of an interagency board, even on an advisory basis, because they believed it was unnecessary and would limit agency independence in making staffing decisions.
Although the interagency committee did not recommend major changes in staff levels as a general theme in its June 2000 report, it did recommend that the regional financial service centers in Bangkok and Paris be relocated to the United States and that several other potential opportunities for staff reductions be explored. In addition, the report raised concerns about heavy embassy staff workloads, an issue not specifically addressed by OPAP. According to the committee's report, an expanded American role in promoting and protecting U.S. interests overseas has imposed a dramatic and often overwhelming burden of work and responsibility on embassy staff. The committee found a common perception at each post that "Washington's demands for reports, demarches, and other initiatives are numerous, un-prioritized, unrealistic, and insatiable." The report also noted concerns about ambassadors' ability to manage embassy staff and resources, noting that several ambassadors had indicated reluctance to challenge the staffing levels of non-State agencies.

The summary report also endorsed the initiation of separate interagency law enforcement pilot studies that the Attorney General had recommended in April 2000. These studies were intended to determine a methodology for deciding the appropriate type and number of law enforcement personnel to be assigned overseas, and to review the law enforcement policy role and staffing requirements at U.S. diplomatic missions. As part of this pilot, the law enforcement working group visited Mexico City, Bangkok, and Paris. State officials said they are unclear as to how the results of the working group will eventually affect staffing levels or rightsizing efforts. They noted, however, that law enforcement agencies have significantly increased their presence at a number of overseas posts in recent years. Table 1 summarizes the observations and conclusions for each post contained in the summary report on the pilot studies.

Regarding staffing in Paris, the interagency committee's report noted that the ambassador had testified to the Congress that staff could be significantly reduced but had not recommended which specific positions should be eliminated. The report recommended that the ambassador identify specific positions for elimination by September 2000. In addition, an informal "lessons learned" paper prepared by the study team suggested that staffing in Paris should be the subject of an urgent interagency review with a view toward reducing work demands, privatizing some administrative positions, and moving some functions to the United States. The ambassador who led the pilot study team said that reduction of work demands could be achieved if the White House, through the Office of Management and Budget, established relative policy priorities and questioned, and perhaps overrode, staffing decisions made by individual agencies. The study team also cited examples of work that may not need to be performed in Paris or that could be privatized, including some translation services and reporting on information available in public sources. In addition, the team noted that there may be ways to reduce the amount of embassy staff time spent supporting the large number of official visitors. After the pilot studies were completed, the ambassador at the U.S. Embassy in Paris asked headquarters agencies to review workload requirements, with a view toward reducing workload so that rightsizing could take place.
In October 2000, State provided guidance to the ambassador on work requirements and priorities for the embassy. In November 2000, the ambassador said that this guidance would not permit him to reduce staff, as it would not be fair to cut staff and ask the remaining staff to take on an undiminished workload. Although the ambassador expressed disappointment in this effort to identify potential workload and staff reductions, he reiterated his position that staff reductions were needed in view of security concerns at the post, and in the interest of achieving operational efficiencies. The concern regarding embassy security in Paris was attributable to the absence of “setback” from public streets, making the embassy highly vulnerable to terrorist attack. According to Department of State officials, the departure of the ambassador in late 2000, the November 2000 U.S. elections, and the change in administrations detracted from follow-up on the potential rightsizing actions in Paris, as well as on the rightsizing committee’s observations and conclusions concerning the other pilot posts. However, the current administration has made the embassy rightsizing process a priority by including it as one of the President’s management initiatives, and it may revisit the observations of the pilot studies as a part of this process. State’s August 2001 Final Report on Implementing the Recommendations of the Overseas Presence Advisory Panel agreed with the recommendations of OPAP to rightsize the overseas presence, rather than with the positions taken in the interagency committee’s report on the pilot studies. State’s final report also stated that the administration will analyze and review overall U.S. government presence and will develop a credible and comprehensive overseas staffing allocation process. However, it did not include a timetable for implementation or indicate whether more reviews of staffing issues at specific posts will be conducted. State’s report mentioned only one specific action taken that would directly affect staff levels at the pilot posts—the relocation of the Paris Regional Financial Service Center to Charleston, South Carolina, proposed by Congress prior to the pilot studies. State did not indicate any additional rightsizing actions taken or planned for the embassy in Paris, nor did it comment on any of the other five pilot posts. On August 25, 2001, the President announced that the rightsizing of embassies and consulates would be one of 14 initiatives in the President’s Management Agenda. The Office of Management and Budget is currently formulating a strategy for leading this initiative. In view of the September 11 terrorist attacks, the rightsizing of embassies and consulates has become more important than ever. Regrettably, the pilot studies conducted in 2000 do not provide a strong basis upon which the administration can pursue rightsizing, as they did not result in a methodology or blueprint for rightsizing around the world. Nevertheless, the studies did suggest that there may be opportunities to reduce embassy size, for example by moving some activities to the United States or to regional centers. If these suggestions prove feasible, their implementation could reduce security vulnerabilities at some overseas posts and could potentially free up resources to meet foreign policy needs elsewhere. 
We are currently planning work to further examine the suggestions raised by the pilot studies, as well as other issues to be considered as the administration implements the embassy rightsizing initiative. The Director of the Department of State's Office of Management Policy and Planning, which has overall responsibility for rightsizing initiatives in the department, provided oral comments on a draft of this report. He said that the department agrees with the report's conclusion and, on the whole, agrees with the report's observations regarding the pilot studies. He said that the department is working closely with the Office of Management and Budget on rightsizing activities. We contacted officials in the Departments of State, Defense, the Treasury, Justice, and Commerce, and in USAID, who participated in the interagency rightsizing committee effort, to discuss how the pilot studies were carried out and the studies' observations and results. We also obtained internal reports on the studies from some of these agencies. We interviewed Department of State personnel involved in the rightsizing studies, including the former Under Secretary of State for Management; the Director of the Office of Management Policy and Planning, which had responsibility for the pilot studies; and the former ambassador who led the pilot studies in Mexico City, Paris, and Tbilisi, and who was a co-chair for the overall pilot study exercise. We were unable to interview the other co-chair, who prepared the June 2000 interagency report summarizing results of the pilot studies, as she is retired and unavailable. To explore the relationship between rightsizing and embassy security in OPAP's report, we interviewed the Chairman of OPAP. We conducted our review from April to September 2001, in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and to the Secretary of State. We will make copies available to others upon request. Please contact me at (202) 512-4128 if you or your staff have any questions about this report. Major contributors to this report are John Brummet and Lynn Moore.
The Department of State is leading an interagency assessment of staffing needs in U.S. embassies and consulates to improve mission effectiveness and reduce security vulnerabilities and costs. This process, called "rightsizing," was begun in response to the recommendations of the Overseas Presence Advisory Panel. In the aftermath of the August 1998 bombings of U.S. embassies in Africa, the Panel determined that overseas staffing levels had not been adjusted to reflect changing missions and requirements; thus, some embassies and consulates were overstaffed, and others were understaffed. The Panel recommended a rightsizing strategy to improve security by reducing the number of embassy staff at risk. The Panel also recommended the establishment of a permanent committee to regularly adjust the U.S. presence, and the adoption of explicit criteria to guide decisions on the size and location of posts. A State-led interagency committee conducted pilot studies at six embassies in 2000 to (1) develop a methodology for assessing staffing at embassies and consulates during the next 5 years and (2) recommend adjustments to staffing levels at the embassies studied. The interagency committee formed teams that visited U.S. embassies in Amman, Jordan; Bangkok, Thailand; Mexico City, Mexico; New Delhi, India; Paris, France; and Tbilisi, Georgia. The pilot studies did not result in a staffing methodology applicable to all embassies and consulates, as had been anticipated. The interagency committee said that it was impractical to develop explicit criteria for staffing levels at all posts because each post has unique characteristics and requirements. Contrary to the Panel's recommendations, the committee's report also questioned the need for rightsizing and for establishing a permanent committee to adjust the U.S. presence. The report did recommend the relocation of the regional finance centers in France and Thailand, and it identified instances in which additional study was needed.
Before discussing our preliminary findings in detail, it will be helpful to review the key features of Executive Order 12612 and some recent initiatives related to federalism. The executive order establishes a set of fundamental principles and criteria that executive departments and agencies should use when formulating and implementing policies that have federalism implications. For example, the executive order says that federal agencies should refrain to the maximum extent possible from establishing uniform, national standards for programs with federalism implications and that, when national standards are required, they should consult with appropriate officials and organizations representing the states in developing those standards. The order says that regulations and other policies have federalism implications if they "have substantial direct effects on the States, on the relationship between the national government and the States, or on the distribution of power and responsibilities among the various levels of government." Executive Order 12612 also contains specific requirements for agency implementation and governmentwide coordination and review. For example, the head of each executive department and agency is required to designate an official to be responsible for ensuring the implementation of the order, and for determining which proposed policies have sufficient federalism implications to warrant preparation of a federalism assessment. If an assessment is prepared, it must accompany any proposed or final rule submitted to OMB for review under Executive Order 12866. OMB, in turn, is required to ensure that agencies' rulemaking actions are consistent with the policies, criteria, and requirements in the federalism executive order. Later initiatives have also addressed intergovernmental concerns: Executive Order 12875, issued in 1993, directed agencies to consult with state, local, and tribal governments when developing certain regulations, and the Unfunded Mandates Reform Act of 1995 (UMRA) requires agencies to take certain analytical and procedural steps during the rulemaking process for certain rules that involve a mandate. On May 14, 1998, President Clinton issued Executive Order 13083 on "Federalism," which was intended to replace both Executive Order 12612 and Executive Order 12875. The new executive order was to take effect in mid-August 1998, and would have made a number of changes to the specific requirements in Executive Order 12612. For example, agencies would no longer have been required to designate an official to ensure implementation of federalism requirements, and would not have been required to prepare federalism assessments for regulations and other policies with federalism implications. However, the President suspended Executive Order 13083 before it became effective in response to concerns raised by the National Governors' Association and other interested parties. Many of the commentators objected to the new order because they believed it expanded the federal government's authority to make national policies and standards. There was also criticism that the new order was issued without consulting affected state and local government representatives. With the suspension of Executive Order 13083, Executive Order 12612 remains the primary presidential directive to federal agencies on how they are to develop and implement regulations that have federalism implications. Executive Order 12612 does not require agencies to mention the order in the preamble to their final rules or to note in those preambles whether a federalism assessment was prepared. Therefore, our review of the rule preambles cannot definitively show whether agencies considered the executive order or whether the agencies prepared federalism assessments. 
However, mentioning the executive order in the preamble to a final rule is a clear indication that the agency was aware of and considered its requirements in some way. Also, if an agency prepared a federalism assessment for a final rule, the agency is likely to describe the assessment in the preamble to the rule. To identify major rules, we used our database of major rules issued since the passage of SBREFA. SBREFA defines a rule as "major" if the Administrator of OMB's Office of Information and Regulatory Affairs concludes that the rule is likely to result in (1) an annual effect on the economy of $100 million or more; (2) an increase in costs or prices; or (3) significant adverse effects on (among other things) competition, employment, investment, or productivity. To summarize the 3 years of data depicted in figure 1, nonindependent regulatory agencies published 11,414 final rules in the Federal Register between April 1996 and December 1998. The agencies indicated in the preambles that they had conducted federalism assessments for 5 of these 11,414 rules—2 in 1996 and 3 in 1997. In 3,016 rules (26 percent of the total), the agencies stated that no federalism assessment was conducted because the rules did not have federalism implications. Nearly all of these statements were standard, "boilerplate" certifications with little or no discussion of why the rule did not trigger the executive order's requirements. In the remaining 8,393 rules (74 percent), the agencies did not mention Executive Order 12612 or its requirements. As table 1 shows, the five rules for which federalism assessments were prepared were issued by four agencies (DOC, DOT, HHS, and the Department of Labor) in either 1996 or 1997. Many of the final rules that federal agencies issue are administrative or routine in nature, and are therefore unlikely to have significant federalism implications. As a result, it is not particularly surprising that agencies would not prepare federalism assessments for many of those rules. However, rules that are "major" under SBREFA (e.g., those that have a $100 million impact on the economy) and that involve or affect state and local governments are more likely to have federalism implications that would warrant preparation of an assessment. Of the 11,414 final rules that nonindependent agencies issued between April 1996 and December 1998, 117 of them were identified as "major" rules by the agencies and OMB. The agencies issuing the rules indicated in the Unified Agenda of Federal Regulatory and Deregulatory Actions that 37 of them would affect state and local governments. The agencies indicated in the preambles to 21 of the rules that the rules would take precedence in the event they conflicted with state or local laws or regulations. The agencies mentioned Executive Order 12612 in the preambles to 30 of the 117 major rules they issued between April 1996 and December 1998 (about 25 percent of the total). However, only one of these preambles indicated that a federalism assessment had been prepared for the rule—an HHS rule issued in 1996 restricting the sale and distribution of cigarettes and smokeless tobacco to protect children and adolescents. The other 29 rule preambles that mentioned the executive order stated that the rules did not have sufficient federalism implications to warrant the preparation of a federalism assessment. Most of these statements were "boilerplate" certifications with little or no explanation of why the executive order's requirements were not applicable to the rules. 
We asked representatives from seven major state and local government interest groups (known as the "Big Seven") to review descriptions of the 116 rules without federalism assessments and to indicate whether they believed any of the rules should have had an assessment. Four of these organizations provided us with comments on at least some of the rules. At least one of the four organizations indicated that 79 of the 116 rules should have had a federalism assessment. The agencies with the largest number of rules that the four organizations considered in need of assessments were HHS (26 rules), USDA (18 rules), and EPA (10 rules). Two or more of the organizations indicated that 30 of the rules should have had an assessment. We then contacted officials in each of these three agencies to determine whether federalism assessments had been prepared for these rules (but not mentioned in the preambles to the rules) or why they believed that no assessment was needed. The agencies did not indicate that any other assessments had been prepared, and generally said that their rules did not have sufficient federalism implications to trigger the executive order's requirements. In some cases, the agencies indicated that they had substantively complied with the executive order by taking other actions to address intergovernmental concerns during the rulemaking process. Federal departments and agencies are primarily responsible for implementing Executive Order 12612. Section 6 of the executive order delineates the agencies' responsibilities, requiring them to (1) designate an official to be responsible for ensuring implementation of the order, (2) have the designated official determine which proposed regulations have sufficient federalism implications to warrant the preparation of a federalism assessment, and (3) send each federalism assessment to OMB as a part of the regulatory review package sent pursuant to Executive Order 12866. However, Executive Order 12612 provides the agencies with broad discretion to determine how to meet these requirements. Each of the three agencies we visited—EPA, HHS, and USDA—has some kind of written guidance on how to implement Executive Order 12612. All three of the agencies' guidance documents identify a designated official or office responsible for ensuring compliance with the executive order. EPA issued its "Guidelines for Implementing Executive Order 12612: Federalism" in June 1988. The guidelines identified the Assistant Administrator of the Office of Policy, Planning, and Evaluation as the designated EPA official for federalism. However, in 1992, the EPA Administrator made the agency's General Counsel responsible for carrying out the functions of the designated official. The General Counsel was authorized to delegate the authority to the Deputy General Counsel, who could redelegate it to the Associate General Counsel level. EPA officials said that all agency regulations are to be reviewed by the General Counsel before being submitted to OMB and published in the Federal Register. USDA's guidance on "Regulatory Decisionmaking Requirements" was last updated in March 1997, and the requirements that are related to Executive Order 12612 are part of that overall guidance. The guidance indicates that the department's Office of the General Counsel (OGC) is responsible for carrying out the responsibilities of the designated official. 
For example, it says that OGC will "[r]eview regulations and notices of proposed rulemaking for compliance with Executive Order 12612…and determine whether the preparation of a federalism assessment by an agency is required." All USDA regulations are to be reviewed centrally by the department's OGC before being submitted to OMB and published in the Federal Register. In March 1988, HHS's Assistant Secretary for Planning and Evaluation (ASPE) issued a memo on "Compliance with Executive Orders on The Family and Federalism." The memo indicated that the Secretary had assigned the ASPE lead responsibility for guidance, compliance, and technical assistance related to the executive order. HHS officials said that, with the exception of certain delegated regulations issued by the Food and Drug Administration (FDA), the ASPE is responsible for reviewing and clearing all departmental regulations. One facet of the ASPE's review is to determine whether the rules comply with Executive Order 12612. Many nonmajor FDA regulations (as determined by FDA) are issued directly by the Commissioner without formal departmental review and clearance. For these regulations, HHS officials said that the FDA Commissioner exercises the responsibilities of the designated official under the executive order. The agencies' guidance documents differ in the criteria, if any, they establish for determining whether rules have sufficient federalism implications. At least one of the agencies' criteria seems to establish a high threshold for preparing an assessment. USDA's written guidance on Executive Order 12612 does not establish any specific criteria that the department's OGC should use to determine whether a particular rule or other policy has sufficient federalism implications to warrant the preparation of a federalism assessment. Neither has USDA's OGC established any written criteria to guide these determinations. USDA officials said that OGC attorneys make their own determinations regarding federalism implications in the context of each rulemaking action. The HHS guidance on the executive order lists "threshold criteria" that can be used to determine whether a rule's federalism effects are significant and thus require a federalism assessment. The guidance indicates that a rule should be considered to have significant federalism implications if it (1) has a direct causal effect on the states; (2) primarily relates to the structure and role of states (e.g., not just a reduction in funding of grant programs); (3) has effects within a reasonably foreseeable time frame (e.g., within the next 5 years); and (4) has a significant incremental effect (e.g., requiring states to do something that they are not already doing). The guidance also says that an assessment must be prepared if an action will directly create significant effects on states even if the action is mandated by law or the department otherwise has no discretion. Finally, it says that rules and other policies with either a positive or negative significant effect on the states require a federalism assessment. EPA's guidance also sets out criteria that a rule must meet before a federalism assessment is considered necessary. For example, it indicates that a rule must affect "traditional State responsibilities, or decrease the ability of States to make policy decisions with respect to their own functions" in order to have a "substantial" effect. The rule must affect all or most states, "not simply one state or a small cluster of States." The rule must have a "direct, causal effect" on the states. If a rule creates federalism effects as a side effect, the guidance says the rule would not trigger the requirement for a federalism assessment. 
These criteria seem to establish a high threshold for what constitutes "sufficient" federalism implications to require an assessment. For example, the executive order defines "state" to "refer to the States of the United States of America, individually or collectively." (Emphasis added.) EPA's guidance, on the other hand, indicates that a federalism assessment should be prepared only if a regulation or other policy affects all or most states. However, EPA's actions appear to be allowable because the executive order does not define what is meant by "sufficient" federalism implications, leaving that determination up to the agencies. Section 7 of Executive Order 12612 indicates that, in implementing Executive Order 12866, OMB should, to the extent permitted by law, "take action to ensure that the policies of Executive departments and agencies are consistent with the principles, criteria, and requirements" of the federalism executive order. As noted previously, the order requires agencies to submit federalism assessments (if they were prepared) along with any rules being submitted to OMB for review. OMB officials told us that reviews of agencies' actions in the federalism area have been part of the standard regulatory reviews conducted by OMB staff pursuant to Executive Order 12866. They said that agencies have rarely submitted separate federalism assessments to OMB but have addressed federalism considerations, when appropriate, as a part of the cost-benefit analysis and other analytical requirements. These officials also noted that there were few federalism assessments filed with OMB during the Reagan and Bush administrations. In addition, the White House web site indicates that Executive Order 13083 (the suspended Clinton order), not 12612, is the applicable executive order on federalism. One OMB official told us that Executive Order 12612, Executive Order 12866, Executive Order 12875, and UMRA all substantively address the same idea regarding federalism. They all require that, if a proposed rule is likely to have a significant impact on other levels of government, the impact should be considered in analyzing the costs and benefits of the rule and the agency should consult with appropriate officials at the state and local level. Executive Order 12612 gives agencies substantial discretion to determine which regulations and other policies have "sufficient" federalism implications to warrant preparation of a federalism assessment. Using that discretion, the agencies have prepared federalism assessments for very few rules. One of the agencies we visited had no written criteria to make those determinations. Although the other two agencies had written criteria, they had prepared only one federalism assessment and had mentioned the executive order in only 10 out of nearly 3,000 rules. The two agencies' criteria were also inconsistent regarding whether statutorily mandated regulations required a federalism assessment. Also, other than including federalism as part of its regulatory reviews, OMB has taken no other specific actions to carry out its responsibility to ensure that agencies' regulations and other policies are consistent with the executive order. The fact that agencies have prepared federalism assessments for only 5 of the more than 11,000 final rules issued in recent years suggests that the agencies are not implementing the order as vigorously as they could. We will be exploring the implications of this situation as we complete the work on this issue that you have requested of us. 
Pursuant to a congressional request, GAO discussed the implementation of Executive Order 12612 on federalism, focusing on: (1) how often the preambles to covered agencies' final rules issued between April 1, 1996, and December 31, 1998, mentioned Executive Order 12612 and how often they indicated that the agencies had conducted federalism assessments under the order; (2) what selected agencies have done to implement the requirements of Executive Order 12612; and (3) what the Office of Management and Budget (OMB) has done to oversee federal agencies' implementation of Executive Order 12612 in the rulemaking process. GAO noted that: (1) federal agencies covered by Executive Order 12612 mentioned the order in about 27 percent of the more than 11,000 final rules they issued between April 1996 and December 1998; (2) the agencies indicated, however, that they had prepared federalism assessments for only five of these rules; (3) of the 117 major rules issued by these agencies during this period, the preambles indicated that only 1 had a federalism assessment; (4) state and local representatives that GAO consulted said that certain federal agencies should have done assessments for more of these major rules; however, the agencies said that their rules did not have sufficient federalism implications to trigger the executive order's requirements; (5) all three of the federal agencies GAO visited had some kind of written guidance on the executive order and had designated an official or office responsible for ensuring its implementation; (6) however, the methods the agencies use to determine whether federalism assessments are needed varied among the agencies; and (7) OMB officials told GAO that they have taken no specific actions to implement the executive order, but said the order is considered along with other requirements as part of their regulatory review process under Executive Order 12866.
Compared to the nearly 6,600 children placed in ORR custody in fiscal year 2011, more recent migration represents a historic increase in unaccompanied children entering the United States. In fiscal years 2014 and 2015, more than 57,000 and 33,000 unaccompanied children, respectively, were apprehended by DHS and transferred to ORR custody. The majority of the children were from El Salvador, Guatemala, and Honduras. Compared to prior years, significantly more young and female children were apprehended by DHS. For example, as we previously found, in fiscal year 2011, 414 children under the age of 11 were apprehended by DHS, compared to 7,266 in fiscal year 2014. Also in fiscal year 2011, 2,333 female unaccompanied children were apprehended, compared to 21,881 in fiscal year 2014. We also previously found that children from El Salvador, Guatemala, and Honduras often leave their home country due to crime, violence, and lack of economic opportunity, among other reasons. While in ORR custody, children are placed in facilities with care providers in 12 states (as of December 2015). ORR care providers are generally non-profit organizations operating under cooperative agreements and must be licensed by the state to provide residential, group, or foster care services for dependent children. Care providers are required by ORR policy to provide children with a variety of services, including an individual needs assessment, classroom education, health care, counseling, and recreation. In addition, ORR care providers are to identify and assess relatives or other individuals as sponsors to whom children can be safely released. ORR federal field specialists, referred to as ORR field staff in this report, are local liaisons with ORR care providers and other stakeholders and approve the transfer and release of unaccompanied children in ORR custody. ORR also employs case coordinators, contract staff in the field who work with care providers and provide ORR field staff with transfer and release recommendations. The Trafficking Victims Protection Reauthorization Act directs ORR to place children in the least restrictive setting that is in the best interests of the child. ORR care facilities include:

Shelters—The majority of children going through ORR care are placed in shelters, the least restrictive setting.

Foster care—Transitional (or short-term) foster care is an initial placement option for young children, sibling groups, pregnant and parenting teens, or children with special needs. Long-term foster care is for children expected to be in ORR custody for 4 months or longer who meet other criteria.

Staff-secure facilities—These facilities maintain stricter security measures than a shelter in order to control disruptive behavior and prevent escape. Security measures could include a higher staff-to-child ratio for supervision.

Secure care—These facilities have a physically secure structure and staff able to control violent behavior. ORR uses a secure facility as the most restrictive placement option for children who pose a danger to themselves or others.

Residential treatment centers—These facilities offer therapeutic programs for children diagnosed with a mental health disorder and provide services in a highly structured clinical program.

Group home—A group home specializes in caring for specific populations (e.g., teen mothers). Extended care group homes are for children who may be in ORR custody for an extended period. 
The average number of days children remain in ORR shelters varies from month to month as different children rotate in and out of care. According to ORR data, the length of stay for children in shelters decreased from an average of 72 days in fiscal year 2012 to 32 days in fiscal year 2015. Unaccompanied children are also required to appear in immigration court for removal proceedings, which are adjudicated by immigration judges from the Department of Justice's Executive Office for Immigration Review. These proceedings may begin while the children are in ORR custody, but often continue after their release to a sponsor. ORR provides funds to legal services providers for certain legal services that include an introduction to the U.S. legal system (known as "Know Your Rights" presentations), screening for potential immigration relief, and direct representation in some instances. The Young Center (formerly known as the Immigrant Child Advocacy Center), a nonprofit organization, developed a pilot child advocate program, and in 2004 the Center began serving children housed at ORR care facilities in Chicago. The Trafficking Victims Protection Reauthorization Act authorized HHS to appoint independent child advocates for trafficking victims and other vulnerable unaccompanied children. The Young Center expanded its child advocate program in fiscal year 2012 and opened an office to serve children in Brownsville, Tex., where a large number of unaccompanied children have traditionally been placed until released to sponsors. VAWA 2013 directed HHS to establish child advocate programs at six new locations—three initial locations by March 2015 and three additional locations by March 2016. The expansion required by VAWA 2013 largely coincided with the historic numbers of unaccompanied children entering the United States since fiscal year 2012. While certain services such as education and health care are provided to all children in ORR care, the child advocate program provides services to a small number of vulnerable unaccompanied children who meet ORR's criteria. ORR defines vulnerable children eligible for advocate services as those who are victims of abuse or trafficking, children age 12 and under, pregnant and parenting children, those expected to be in ORR custody for 4 months or longer, and children who speak a different language than their care provider, among other criteria. Any stakeholder involved in a vulnerable child's case may refer the child to the child advocate program. Stakeholders who commonly refer children to the advocate program include the child's ORR care provider, ORR field staff, or legal services provider. The Young Center recruits, screens, and trains volunteers, such as law and social work students, teachers, retired attorneys, and community members, to serve as child advocates. Volunteers are matched with vulnerable unaccompanied children referred to the program, and are expected to meet with them regularly to develop relationships, gather information regarding their individual circumstances, and accompany them to immigration court and other important meetings. Information that advocates learn during these sessions is shared with Young Center staff attorneys and social workers. Young Center staff then advocate on behalf of the child based on information learned from the volunteer advocates, as well as information obtained from other sources such as ORR case files. 
To perform their advocacy role, Young Center staff assess the child's circumstances and promote what the Center views as the child's best interests—safety, well-being, and the child's expressed wishes—to various stakeholders. This includes developing best interest recommendations that are provided to ORR, immigration courts, asylum officers, legal services providers, and other decision makers (see fig. 1 below). According to the Young Center, the role of the child advocate is distinct from that of a child's attorney. The child advocate is to represent what he or she views as the child's best interests, and in rare cases, best interests differ from the child's expressed wishes. For example, a child may want to return to his or her home country despite previously expressing a credible fear of returning due to unsafe conditions. In such cases, the Young Center can urge the child and relevant decision makers to consider other options, given concerns about the child's safety in his or her country of origin. The Young Center served 904 children from fiscal year 2012 through fiscal year 2015, which accounts for a small percentage of the unaccompanied children who entered the United States during this time. While the increases in the overall population of unaccompanied children since fiscal year 2012 have been accompanied by changes in the demographic characteristics of those children, the demographic characteristics of children served by the Young Center have remained relatively stable over a similar period of time (see table 1). In addition to these demographic differences, children assigned an advocate experience a longer stay in ORR custody than the overall population of unaccompanied children. For example, in fiscal year 2015 the overall population of unaccompanied children was released to sponsors in an average of 32 days, but many of the children who receive advocate services are expected to stay in ORR care for 4 months or longer. The child advocate program serves children from all over the world. Over 70 percent of children served by the Young Center from fiscal year 2012 through fiscal year 2015 are from one of four countries: El Salvador, Guatemala, Honduras, or Mexico. About 10 percent of children are from China or India. The remaining children are from a diverse collection of countries, such as Bangladesh, Ghana, Romania, and Somalia (see fig. 2). ORR expanded the child advocate program to three locations in fiscal year 2015 and selected an additional three locations for expansion in fiscal year 2016; as required by VAWA 2013, each location held more than 50 children in ORR custody. Additionally, VAWA 2013 required that ORR give priority to locations with the largest numbers of unaccompanied children and the most vulnerable populations of unaccompanied children. ORR officials reported using two factors to determine the locations with the largest numbers of unaccompanied children: (1) ORR's bed capacity (i.e., the space ORR care providers have to house and care for children) and (2) locations where large numbers of children are released to sponsors. However, ORR officials noted that they could not base expansion decisions on the most vulnerable populations of unaccompanied children because children are not assessed until they arrive at a care provider's facility. As a consequence, ORR officials said they do not know which ORR care provider locations have more vulnerable children than others until after children arrive. 
Using its two selection factors, ORR first expanded the program to Houston, Tex.; New York City, N.Y.; and Washington, D.C. The Young Center opened offices in these areas in December 2014 and provided advocates for children in ORR custody. Since September 30, 2015, the child advocate program has been funded under a contract between ORR and the Young Center. On March 14, 2016, the contract was modified to include reference to expanding the program to Los Angeles, Calif.; Phoenix, Ariz.; and San Antonio, Tex., beginning in March 2016. See figure 3 for a comparison of the number of children in ORR custody, ORR bed capacity, and the number of children released to sponsors in all child advocate program locations. These eight program locations are in cities that account for 79 percent of ORR's total bed capacity, as of October 2015. To implement the program's expansion, ORR allocated an increased amount of funds. Specifically, in fiscal year 2015 ORR provided approximately $1.8 million to the Young Center, up from $700,000 in fiscal year 2014 (see fig. 4). According to Young Center staff, this increase in funding in fiscal year 2015 allowed the Young Center to rent additional office space in the three expansion cities; meet the administrative costs of setting up new offices; and recruit, screen, and train new staff. During the first 10 months of operation in the new program locations, the Young Center provided child advocate services to 97 unaccompanied children, even though programs in many locations were not yet fully staffed, in addition to serving 230 children in pre-existing program locations. The new contract provides $2 million for fiscal year 2016, with 2 extension years at the government's option. The Young Center anticipates using future funding to provide services to an increased number of unaccompanied children. Though ORR expanded the child advocate program to the required number of new locations and allocated increased funds for the expansion, the Young Center's ability to advocate for children outside of its program areas is limited by its geographical reach. ORR care providers are generally limited to referring vulnerable children in the eight locations where the Young Center has offices, although ORR has care facilities in additional locations (see fig. 5). For instance, potentially vulnerable children in ORR custody in northern California, Oregon, and Washington, where there are ORR care providers, typically do not receive child advocate services because the Young Center is not located there. In a small number of cases, the Young Center has appointed advocates for especially vulnerable children in locations where there is no program; it refers to these as "national" cases. According to ORR officials, its care providers and field staff in all locations are given information about how to make referrals to the Young Center. However, the Young Center serves a very small number of national cases. In fiscal years 2012 through 2015, the Young Center provided advocates for 27 national cases, or 3 percent of total cases served. Further, Young Center staff explained that they have limited capacity to handle these cases under the new contract. However, Young Center staff said they expect less demand for national cases in the future because, under the new contract, the Young Center is expanding to locations where it historically received referrals for national cases, such as San Antonio. 
In addition to a limited geographical reach, the child advocate program has not yet aligned the numbers of children served with potential program demand in certain locations. For example, as of October 2015, ORR's capacity was 470 beds in Chicago, 930 beds in New York, and 2,028 beds in Brownsville. However, in fiscal year 2015, the Young Center served 172 children in Chicago, more than the number of children it served in New York and Brownsville combined. Young Center staff said the advocate program began in Chicago and explained that the higher number of children served in this location is due to a much larger and more established base of volunteer advocates in the area. Additionally, since the program began providing advocate services in Chicago in 2004, the Chicago office has more developed relationships with stakeholders who regularly refer children to the Young Center. ORR officials and Young Center staff expect that the child advocate program will serve an increasing number of children in locations with larger bed capacity over time. ORR officials described the child advocate program as a "capacity building project"—meaning that as the program gets up and running in new locations, the Young Center will develop the infrastructure needed to serve additional children. They said they hope to improve the distribution of children served across program locations as the program continues to expand. Young Center staff reported specific caseload targets intended to accomplish this increase. For example, in fiscal year 2015 the Young Center served 58 children in Brownsville and anticipates an increase to 75 children in fiscal year 2016. Still, the Young Center set targets to serve more children in Chicago than in other program locations in fiscal year 2016 (see fig. 6). ORR officials said they will review monthly reports from the Young Center that include details on numbers of referrals to the program and cases served to monitor the Young Center's progress toward meeting its caseload targets. Officials also said they conduct monthly calls with the Young Center to discuss the distribution of cases across program sites and any challenges the Young Center has in meeting its caseload targets. Finally, ORR plans to examine the Young Center's caseload distribution at the end of each contract year to determine target caseloads for the following year. As the program continues to expand, ORR's efforts to monitor the number of children receiving child advocate services in each program site are important to ensure that child advocate services are distributed to areas of need. To help vulnerable children receive advocate services, ORR provided guidance in September 2011 to care providers and other relevant stakeholders on referring vulnerable children to the child advocate program. ORR officials said that ORR care providers are the most common source of referrals, though ORR field staff and other stakeholders can also make referrals. Our analysis of Young Center data found that among cases served by the Young Center from fiscal years 2013 through 2015, nearly 70 percent were referred by care providers (see table 2). ORR requires care providers to conduct an assessment of all unaccompanied children entering ORR custody that covers biographic, family, legal, medical, and mental health history, among other topics. 
Care providers are to use ORR's child advocate program guidance, which lists 17 criteria developed by the Young Center and ORR that qualify children as vulnerable and eligible for advocate services, to help determine if children should be referred. Referrals are to be submitted directly to the Young Center, which determines whether an advocate is available to work with the child (see fig. 7). According to Young Center staff, they have to make decisions about which cases can be staffed based on advocate availability, language needs of the child, urgency of the case, and other factors. After the local Young Center office decides an advocate is available to work with a child, it sends a recommendation to ORR headquarters for ORR to officially appoint an advocate for the child. ORR officials said that because the Young Center has already determined that advocates are available to work with a child when it recommends an advocate appointment, the agency generally approves the Young Center's requests. One exception noted by ORR officials is that requests for Young Center advocates for children who have been released from ORR care are typically not approved, as advocate appointment is generally limited to children in ORR custody. Children served by the Young Center from fiscal years 2012 through 2015 met a range of ORR's criteria, and many children met multiple criteria, according to our analysis of Young Center data (see fig. 8). For example, the largest number of unaccompanied children who were appointed advocates during this time period were referred to the program because their potential sponsors were undergoing home studies, a possible indicator that the child's safety or well-being may be in question (23 percent). Additionally, many children who were appointed advocates were referred because they were expected to be in ORR custody for 4 months or longer (21 percent), or because they were from a country known to traffic children or were identified as trafficking victims (17 percent). Lower percentages of children were referred to the Young Center and appointed advocates because they lacked appropriate legal representation or because they were in a residential treatment center. According to Young Center staff, changes in the population of unaccompanied children in ORR custody over time made it impractical to rely solely on ORR's 2011 guidance because many children currently in custody meet the broad criteria outlined in that guidance. For example, ORR's criteria call for all children under 13 years old to be referred to the Young Center, and the Young Center used to automatically recommend advocates for these children. However, Young Center staff told us that due to the large increase in unaccompanied children entering the United States, so many young children are in ORR custody and moving quickly through ORR care that the Center no longer considers age alone in determining whether a child is vulnerable and should receive an advocate. For example, according to the Young Center, a young child going to live with biological parents, with no trauma history or other factors endangering the child's safety and well-being, would likely not need a child advocate since the parent can speak to the child's best interest. In May 2014, the Young Center, in close consultation with ORR and other stakeholders, proposed modified referral criteria categories to supplement ORR's criteria and, with ORR's approval, distributed the modified criteria to care providers and other stakeholders. 
The Young Center’s modified criteria categories are intended to help cope with increases in referrals and address the challenge of screening a changing and increasing population of children. Specifically, the Young Center prioritized cases for children who met more than one of ORR’s 17 referral criteria and added additional vulnerability criteria (see table 3 for examples). Our analysis of Young Center data on cases served from fiscal years 2012 through 2015 found that 489 of 904 children served during this period were referred for multiple reasons. Further, cases served that met multiple criteria increased from 44 percent in fiscal year 2014 to 66 percent in fiscal year 2015. These data suggest that the Young Center’s efforts to supplement ORR’s criteria resulted in more child advocates appointed to children identified as having multiple vulnerabilities. Under ORR’s new child advocate program contract for fiscal year 2016, the Young Center is required to review and analyze the existing referral criteria. In addition, it is required to submit monthly reports to ORR assessing the strengths and weaknesses of the current referral process and explaining any recommended changes or refinements to the criteria. ORR officials said they plan to take time to evaluate the Young Center’s findings and then decide on any needed changes. These efforts to refine the child advocate program referral criteria are critical to ensure that ORR makes changes to the referral criteria that help stakeholders and the Young Center effectively identify the highest priority cases among the changing population of vulnerable children. Child advocate program stakeholders we interviewed highlighted challenges with the referral process, including care provider discretion, inconsistent referral practices, and unserved referrals. Referrer discretion. The child advocate program relies on other stakeholders, primarily care providers, to initiate the referral, according to ORR officials, Young Center staff, and our analysis of Young Center data. According to ORR policy, care providers shall refer any unaccompanied child in ORR care to the local child advocate program within 3 days after the care provider staff discovers that the child meets any of ORR’s referral criteria. However, ORR headquarters officials said they prefer to allow care providers to use their training, knowledge, and judgement to determine which children need advocates, with the assistance of ORR field staff and the Young Center. As a result, care providers exercise significant discretion when deciding which children to refer to the Young Center, according to Young Center staff. Referrals sometimes depend on the stakeholders’ working relationship with the Young Center. For example, when the Young Center advocates on behalf of vulnerable children, there may be disagreements with care providers on the best course of action and, at times, care providers have stopped referring children due to those disagreements, according to Young Center staff. Young Center staff suggested that to avoid this problem, children should be automatically referred for advocates if certain vulnerability criteria listed in ORR’s policy are met, such as trafficking concerns. Without a better understanding of how care providers make referral decisions, the Young Center and ORR lack assurance that eligible, vulnerable children are being referred. Inconsistent referral practices. 
In one program location, ORR field staff submitted most of the referrals and served a "gatekeeper" role, causing other stakeholders, including care providers, to make referrals less often, according to Young Center staff. Our analysis of Young Center data confirmed that in certain locations, care providers and other stakeholders submit referrals less often. Specifically, in fiscal year 2015 ORR field staff in one program location submitted referrals for 65 percent of cases served by the local Young Center office. Field staff in other locations submitted referrals for less than 4 percent of cases served by their local Young Center offices. ORR officials confirmed that referral practices vary by region, stating that in some locations, ORR field staff ask care providers to submit referrals, while in other locations, the ORR field staff make the referral. In locations where care providers and other stakeholders are not encouraged to make referrals themselves, it is possible that some vulnerable children may not be referred. Unserved referrals. Even with the Young Center's efforts to modify referral criteria and prioritize cases, the Young Center continues to receive referrals for more children than it has the resources to serve. Five of six care providers we interviewed reported identifying more vulnerable children in need of child advocate services than the Young Center can serve. Further, our analysis of program data found that the Young Center was unable to serve an increasing number of referred cases. For example, from August 2013 to July 2014, the Young Center received 279 referrals and was unable to serve 60 of those cases. From August 2014 to July 2015, the Young Center received 433 referrals and was unable to serve 116 cases. According to the Young Center, children who were referred but not appointed advocates either had no advocate available or were released or transferred before an advocate could begin working with them. For example, one care provider said that their local Young Center office was short on volunteer advocates and that they experienced an average wait time of 4 weeks or more for a referral decision from the Young Center. Care providers in Chicago told us that in one shelter, 30 to 40 percent of children referred to the Young Center left ORR custody before an advocate could be appointed. ORR has not taken steps to monitor initial referrals to the Young Center to determine the extent to which eligible vulnerable children are referred, nor has ORR taken steps to monitor which children the Young Center has determined it is unable to serve. Federal standards for internal control state that ongoing monitoring should be performed continually, be ingrained in agency operations, and include regular management and supervisory activities, comparisons, and reconciliations, among other actions. ORR officials said they had not monitored referrals to the Young Center in the past because the child advocate program was a subcontract under the Vera Institute of Justice (Vera) until September 30, 2015. Under the new program contract, the Young Center is required to submit certain information to ORR, including the number of children referred and the number of children appointed advocates, as well as the reasons those children were referred and appointed advocates. 
However, while collecting this information is useful, it does not include a review of initial referrals to the Young Center from care providers and others to determine whether stakeholders decide to refer eligible, vulnerable children. Also, the information collected does not include a review of the Young Center's decisions regarding which children it is unable to serve. Without these reviews, ORR may not have sufficient data to (1) make informed decisions about the kinds of vulnerable children care providers should refer to the Young Center and how consistently referrals should be made, and (2) ensure the program contractor effectively prioritizes children recommended for advocate services given limited advocate availability. The primary benefit of the child advocate program is its best interest recommendations, according to ORR field staff, immigration judges, and children's attorneys we interviewed. The Young Center develops these recommendations to help ensure a child's safety and well-being at different points in the child's case. Best interest recommendations vary depending on each child's circumstances, but generally incorporate information on the child's history, background, and home country conditions, as well as the rationale for the recommendations. For example, recommendations could request that a child receive services while in ORR custody, express an opinion on the appropriateness of release to a sponsor, or provide information about whether a child can be safely returned to his or her home country. These recommendations give children—especially those who are unable to make an independent decision due to young age or trauma—a voice during the immigration process, according to our interviews with various stakeholders. For example, when the Young Center was appointed as the advocate for a 2-year-old child, no one had been able to locate the child's biological mother. The Young Center gathered information, located the child's mother, and learned the mother's wishes for her child. Based on that information, the Young Center recommended to ORR that the child be placed in a long-term foster care home. According to the Young Center, recommendations are developed using information gathered during one-on-one meetings with the child, from the child's ORR case file, in discussions with the ORR care provider, and sometimes with the child's potential sponsor or family in their home country. We analyzed Young Center program data and found that from fiscal years 2012 through 2015 the child advocate program submitted 493 recommendations to ORR, immigration courts, children's attorneys, and others (see table 4). Over 70 percent of these recommendations were adopted by the entity receiving them. For example, the Young Center provided 70 recommendations to ORR that advocated for a particular placement for a child, such as a less restrictive setting or a facility closer to family members, and 67 percent were adopted. ORR field staff, immigration judges, children's attorneys, and others can use the information in best interest recommendations to make decisions about the child's case. For example, ORR field staff said they rely on these recommendations to make placement and release decisions, particularly in complicated cases, because the Young Center learns information about the child that ORR may not be aware of. 
Child Advocate Case Example—Advocacy Related to Placement While in ORR Custody
The Young Center was appointed as the advocate for an infant who was apprehended while in the care of an adult woman. Due to concerns about the relationship between the woman and the child, the two were separated and the infant was placed in an ORR short-term foster home. The woman tried to sponsor the child out of ORR custody, initially claiming to be the child's biological mother and producing a birth certificate that was later determined to be counterfeit. Concerned that the child might be a victim of trafficking, the Young Center initiated an international home study. Through the home study, the Young Center discovered that the child's mother did not want a relationship with the child. The mother had given the child to the woman, but everything had been done outside of proper legal channels. The mother did not have any opinions about where the child should be placed. Faced with this information, the Young Center convened a best interest determination panel to assess the child's options. The panel was concerned about the number of caregivers and separations the child had experienced. The child, who had been in ORR custody for nearly a year, had formed a strong bond with her foster family. The panel concluded that it was in the child's best interests to remain with the foster family. The Young Center recommended that ORR convert the child's short-term foster care placement into a permanent home. ORR agreed with the recommendation and allowed the foster family to begin the process of making the placement permanent.
These recommendations also inform attorneys and judges during immigration court proceedings. Attorneys told us that even though the recommendations sometimes differ from the child's expressed wishes, which the attorney represents, they provide valuable information for the court. The three immigration judges we interviewed said they welcomed Young Center best interest recommendations. One judge said the Young Center provides information she would otherwise be unaware of and gives her greater assurance that she has all the information needed to move forward with a case. The child advocate program also provides support for children during court proceedings and after release from ORR custody. Child advocates accompany children to court. An immigration judge and volunteer advocates told us that court proceedings are intimidating for children. Volunteer advocates can help prepare children for court proceedings and explain the legal process in an effort to ease their anxiety. Additionally, an immigration judge told us he relies on advocates to explain to children what occurred in court. ORR care providers noted that while they are required to check in with children once after their release, the Young Center can have more frequent contact with unaccompanied children after their release from ORR custody. For example, after a child is placed with a sponsor, the Young Center may provide resources to the child and his or her sponsor, such as help finding an attorney, enrolling in school, and finding housing for children aging out of care. In addition to obtaining stakeholder views on the benefits of the program, we also asked stakeholders to report any challenges regarding the role of child advocates. Most commonly, interviewees identified limited capacity and a need for additional advocates as shortcomings of the program.
ORR officials mentioned that the Young Center's recommendations are sometimes not within ORR's purview. For example, an ORR field staff official told us that a Young Center recommendation requested that she introduce a child to the new medical provider and counselor at the child's placement location, a task outside the scope of ORR's responsibility. While stakeholders told us they found best interest recommendations helpful, Young Center staff said their ability to advocate for children is hampered by ORR's information sharing policies. Young Center staff we interviewed at the three program sites reported that obtaining complete ORR case file information was a challenge that affected their work with children and the resulting recommendations. The Trafficking Victims Protection Reauthorization Act states that "[a] child advocate shall be provided access to materials necessary to effectively advocate for the best interest of the child." In addition, federal standards for internal control state that relevant, reliable, and timely information should be communicated to those who need it in a form and time frame that enables them to carry out their responsibilities. However, ORR's policies restrict access to certain information in children's case files that describes children's past circumstances and current conditions while in ORR care. Specifically, child advocates do not have access to significant incident reports, which Young Center staff described as critical information that should be factored into best interest recommendations. Significant incident reports are prepared by care providers and include information about abuse or neglect in ORR care; behavioral incidents that threaten safety; incidents of running away or law enforcement involvement; pregnancy or pregnancy-related issues; safety measures; past abuse and neglect; criminal history; and contact or threats to the child while in ORR care from smuggling syndicates, organized crime, or other criminal actors. ORR officials told us that significant incident reports may include information about other children or ORR care provider staff and, as a result, ORR does not provide copies to the Young Center in order to protect the confidentiality of the other individuals. ORR's information sharing policies allow its care providers to verbally describe information contained in significant incident reports when requested by the Young Center. However, Young Center staff told us that crucial information can be lost when communicated verbally, and they rely on care providers to inform them when a significant incident report has been placed in an assigned child's case file. If ORR care providers do not tell the child advocate, the Young Center will not have that information to help develop its best interest recommendation. ORR does not deny the Young Center access to home study reports, but the Young Center is required to take several steps to obtain them, which can affect the timeliness of the advocates' recommendations. ORR conducts home studies to evaluate a potential sponsor's readiness to support an unaccompanied child upon his or her release from ORR custody. The Young Center uses home study reports to develop recommendations related to children's reunification with potential sponsors, sometimes at the request of ORR field staff. Although the Young Center is an ORR contractor, ORR's policies require that the Young Center obtain the sponsor's consent to receive a copy of the home study (see fig. 9).
ORR officials said they require the Young Center to obtain consent from the child's sponsor because the agency is concerned about confidentiality, privacy, and the inadvertent release of sponsor information. According to Young Center staff, the process for obtaining home studies is cumbersome and lengthy due to challenges reaching sponsors to obtain consent. For example, sponsors sometimes need translation services when they receive a consent form, they may be wary of an unfamiliar organization requesting information about their family, or they may lack access to a fax machine to return completed forms. In addition, Young Center staff said that there are cases in which sponsors do not consent to the Young Center receiving a copy of the home study. For example, in complex cases where there are concerns about the potential sponsor, the sponsor may not give consent, yet those are the types of cases in which best interest recommendations are most valuable. When a sponsor does not consent to allowing the Young Center to review a home study report, the Young Center submits a recommendation to ORR without viewing the report. According to Young Center staff, developing recommendations without critical information from significant incident reports, as well as delays in obtaining home study reports under ORR's policies, affects the completeness of the recommendations. The Young Center provided an example in which it had been appointed as advocate for a child who was moved to a more restrictive ORR care facility as a result of behavioral incidents while in ORR care. These behavioral incidents were documented in significant incident reports, and after the move, additional significant incident reports were filed for the child. However, the child's care provider could not provide the Young Center with copies of the reports. According to the Young Center, without more detailed information about the child's behavioral issues, it was unable to provide recommendations on services that might best meet the child's needs. In another example, the Young Center was appointed as the child advocate for a 16-year-old girl who had an extensive history of trauma and abuse. The child was placed in ORR custody, and the child's uncle sought to sponsor her out of custody. Because of the child's extensive trauma history and the lack of a relationship between her and her uncle, ORR conducted a home study. The Young Center reached out to the child's uncle twice to obtain authorization to view the home study report; however, he declined to provide consent. As a result, the Young Center was unable to provide ORR a recommendation regarding whether it was in the child's best interests to reunify with her uncle. Without relevant and timely information, the Young Center is unable to fully carry out its responsibilities. As part of the new child advocate program contract that began on September 30, 2015, ORR plans to work with the Young Center to draft joint information sharing policies. The contract states that within 180 days of the contract award, ORR and the Young Center will finalize information sharing policies. In February 2016, ORR officials told us that the agency was in the process of updating its information sharing policies, which could include providing the child advocate program with copies of significant incident reports and home study reports. Officials said they are working with the child advocate program to develop procedures for accessing this information. Officials plan to release the new policies by late April 2016.
The child advocate program provides vulnerable, unaccompanied children an advocate who is committed to learning their history and representing their best interests while in custody and during removal proceedings. As the program expands to the six new locations, additional children each year will receive help navigating the complex immigration system. However, because the number of unaccompanied children arriving in this country has increased significantly in recent years, the program will continue to serve a very small percentage of the total number of children in custody. Further, the program will likely continue to receive more referrals than it can serve; however, ORR has no assurance that all of the children who need services are being referred. Given this, it will be important for ORR to monitor program performance through this expansion and move forward in two key areas. First, by monitoring initial referrals to the child advocate program, ORR will be better positioned to know whether referrals are consistently being made by care providers and field staff. Second, by monitoring the cases to which the contractor assigns advocates, ORR will be better able to assess whether limited resources are being used for the most vulnerable children. Without taking these steps to ensure thorough and consistent monitoring, ORR cannot be assured that the child advocate program is operating as effectively as intended. Since its inception, the child advocate program has provided recommendations to a variety of stakeholders to help ensure that the best interests of hundreds of children are met while in ORR custody and after their release. Multiple stakeholders agree that these recommendations are valuable assets that help them determine how best to support the children. Although the program's recommendations can contribute to better outcomes for the children, ORR's information sharing policies may limit the program's ability to effectively advocate on behalf of the children it serves. To help ensure that vulnerable unaccompanied children receive child advocate services, we recommend that the Secretary of Health and Human Services direct ORR to develop a monitoring process that includes (1) regularly reviewing referrals to the program contractor, including identifying which care providers in locations with a child advocate program do not make referrals, and (2) reviewing information on the children the program contractor determines it is unable to serve. To help the program's contractor improve its recommendations on behalf of vulnerable unaccompanied children, the Secretary of Health and Human Services should direct ORR to work with the program's contractor to ensure that access to key information is provided in a timely manner. For example, this could include providing the program contractor with direct access to significant incident reports or exploring ways to streamline access to home studies without compromising the privacy of potential sponsors or other individuals. We provided a draft of this report to HHS for review and comment. HHS provided formal comments that are reproduced in appendix IV. HHS also provided technical comments, which we incorporated as appropriate. HHS concurred with both of our recommendations and outlined steps it is taking to implement them.
In response to our recommendation to develop a monitoring process that includes regularly reviewing referrals to the child advocate program and reviewing information on children who cannot be served, HHS stated that ORR will directly monitor child advocate activities as required under the contract with the Young Center and under the law, including applying federal standards for internal control. In addition, HHS commented that ORR policy allows any stakeholder to make a referral for a child advocate, a policy that has been standardized for over 5 years. However, we observed inconsistent referral practices and continue to encourage ORR to review initial referrals to the Young Center from care providers and others to determine whether stakeholders refer eligible, vulnerable children. HHS also said that ORR is continually evaluating its service model to ensure appropriate accountability for program staff and to provide improved services for children in the agency's care and custody. In response to our recommendation to ensure the child advocate program's timely access to key information, HHS stated that ORR is evaluating its information sharing policies and will consider how to meet all legal obligations regarding the provision of information to child advocates while protecting the privacy and confidentiality rights of everyone involved in a child's case. The program contract that began on September 30, 2015, states that within 180 days, ORR and the Young Center will finalize information sharing policies. Given this time frame, we encourage ORR to work expeditiously with the Young Center to determine how child advocate program staff can obtain timely access to the key information needed to best serve children in need of advocate services. We are sending copies of this report to the appropriate congressional committees and the Secretary of Health and Human Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report examines (1) the extent to which the Office of Refugee Resettlement (ORR) implemented requirements to increase the number of child advocate program locations and the impact of the expansion on program costs and the number of children served, (2) the extent to which ORR ensured vulnerable unaccompanied children were receiving services, and (3) the benefits of the child advocate program and what challenges, if any, it faces. To address all three objectives, we conducted interviews with stakeholders that either had experience referring children to the child advocate program or had received recommendations from the program. To conduct these interviews, we visited Young Center child advocate programs in Chicago, Ill., and Washington, D.C., and spoke by phone with staff at the Brownsville, Tex., program site. The Young Center is contracted to provide child advocate services for children in ORR custody. These sites were selected based on variation in the number of children served, the amount of time the advocate program had been operational, and the types of ORR care facilities in operation.
In Chicago, we interviewed staff from three ORR care facilities (two shelters and one staff-secure facility), one immigration judge, two groups of volunteer child advocates, and the Young Center's staff attorneys, social worker, and management team. In the Washington, D.C., area, we interviewed staff from one ORR facility with secure and staff-secure beds, one immigration judge, and Young Center staff attorneys. For the Brownsville, Tex., area, we interviewed care providers from two ORR facilities (one foster program and one shelter), one immigration judge, one group of volunteer child advocates, and Young Center staff attorneys. In addition, we interviewed three legal services providers—located in Chicago, Ill., Corpus Christi, Tex., and Washington, D.C.—that had experience representing children with an advocate. We also interviewed staff from the U.S. Conference of Catholic Bishops and the U.S. Committee for Refugees and Immigrants—the two organizations that operated temporary child advocate programs for children released from ORR custody in fiscal year 2015. The information obtained from these interviews is not generalizable. To address all three objectives, we also interviewed ORR headquarters and field staff. In addition, we interviewed headquarters staff for the Department of Justice's Executive Office for Immigration Review, including the Assistant Chief Immigration Judge for Vulnerable Populations. We reviewed ORR documents, such as child advocate program policies and contracts that provided funding to the Young Center to operate the child advocate program. We also reviewed relevant federal laws and regulations, including the Violence Against Women Reauthorization Act of 2013 and the William Wilberforce Trafficking Victims Protection Reauthorization Act of 2008. To address our first and second objectives, we analyzed data on cases served by the Young Center's child advocate program from fiscal year 2012 through fiscal year 2015, the four most recent years for which data were available. These case data included information on children's age, gender, country of origin, reasons they were referred to the program, and the source of those referrals. To prepare the data on referral reasons for analysis, one analyst reviewed the data and assigned a code for each reason a child was referred. Codes for each referral reason were based on categories of vulnerable, unaccompanied children established in ORR and Young Center referral guidance. In many cases, children were referred to the program for multiple reasons, and multiple codes were assigned to their cases. A second analyst verified the coding to ensure that reasons for referral were coded consistently. A third analyst conducted a final review and resolved any disputes over the appropriate codes. We also analyzed data on the number of cases the Young Center was unable to serve from August 2013 through September 2015. The Young Center did not track this information consistently for all program locations prior to August 2013. The data on unserved cases included information on children's age, gender, the Young Center location that was unable to serve the case, and the reason the Young Center was unable to serve the case. Additionally, we analyzed data from ORR on care providers that included information on the locations and types of care provider facilities, the numbers of children each facility could house, and the numbers of children in custody in each facility as of October 2015.
We compiled information from ORR's data on facilities in current and proposed Young Center sites to determine (1) how many children could be cared for in each location, and (2) how many children were in custody in each location. We also used publicly available ORR data on the number of children released to sponsors by county in fiscal year 2015, as well as data on counties included in metropolitan areas from the United States Census Bureau, to determine how many children were released in the same metropolitan areas as current and proposed Young Center sites. To address our third objective, we analyzed information collected on formal best interest recommendations the Young Center provided to a variety of stakeholders from fiscal year 2012 through fiscal year 2015. These recommendations are provided to stakeholders in writing and verbally. The recommendations information we analyzed included all written recommendations from fiscal years 2012 through 2015 and some verbal recommendations tracked by the Young Center and provided to stakeholders in fiscal years 2014 and 2015. Therefore, the information we analyzed likely does not capture all of the Young Center's formal best interest recommendations from fiscal years 2012 through 2015. The recommendation information included the number of recommendations made; the types of decision makers that received recommendations (such as immigration judges, attorneys, and ORR); the key issues on which recommendations were provided (such as release, placement, legal relief, and repatriation); and the outcome of those recommendations (such as adopted, adopted in part, or declined). In addition to formal best interest recommendations, the Young Center makes informal recommendations to decision makers; however, we did not analyze information related to informal recommendations. Additionally, in some instances one recommendation was submitted to multiple stakeholders; however, the data identified the primary recipient of the recommendation and did not include information on additional recipients. The case and recommendation data were provided by the Young Center. We assessed the reliability of the case and recommendation data by (1) performing electronic testing of required data elements, (2) reviewing information about the data and the system that produced them, and (3) interviewing knowledgeable Young Center staff about the data. We assessed the reliability of ORR's data by (1) reviewing ORR business rules to ensure data reliability and (2) interviewing ORR officials and contractors knowledgeable about the data. We determined that the data were sufficiently reliable for our purposes. In addition to the contact named above, Sara Schibanoff Kelly (Assistant Director), Andrea Dawson (Analyst-in-Charge), Paulissa Earl, and Aimee Elivert made key contributions to this report. Also contributing to this report were James Bennett, Kate van Gelder, Jean McSween, James Rebbe, Jerry Sandau, Almeta Spencer, Ashanta Williams, and Paul Wright. Unaccompanied Children: HHS Can Improve Monitoring of Their Care. GAO-16-429T. Washington, D.C.: February 23, 2016. Unaccompanied Children: HHS Can Take Further Actions to Monitor Their Care. GAO-16-180. Washington, D.C.: February 5, 2016. Unaccompanied Alien Children: Improved Evaluation Efforts Could Enhance Agency Programs to Reduce Migration from Central America. GAO-16-163T. Washington, D.C.: October 21, 2015. Central America: Improved Evaluation Efforts Could Enhance Agency Programs to Reduce Unaccompanied Child Migration. GAO-15-707.
Washington, D.C.: July 29, 2015. Unaccompanied Alien Children: Actions Needed to Ensure Children Receive Required Care in DHS Custody. GAO-15-521. Washington, D.C.: July 14, 2015. Central America: Information on Migration of Unaccompanied Children from El Salvador, Guatemala, and Honduras. GAO-15-362. Washington, D.C.: February 27, 2015.
Thousands of unaccompanied children arrive in the United States each year. For a small number of especially vulnerable children—about 1 percent in fiscal year 2015—ORR provides an independent child advocate to develop safety and well-being recommendations for stakeholders, such as immigration judges. The Violence Against Women Reauthorization Act of 2013 directed HHS to expand the program and included a provision for GAO to review the child advocate program. This report examines (1) the extent to which ORR increased the number of program locations, (2) the extent to which ORR ensured vulnerable children received advocate services, and (3) the program's benefits and challenges. GAO reviewed relevant federal laws and regulations; analyzed data from fiscal years 2012-2015 on the number and characteristics of child advocate cases served and recommendations made to stakeholders; and interviewed officials at ORR and the Department of Justice, immigration judges, and child advocate service providers in Chicago, Ill.; Brownsville, Tex.; and Washington, D.C.—locations selected to obtain variation in the number of children served and the amount of time the program had been operational, among other factors. In fiscal year 2015, the Department of Health and Human Services' (HHS) Office of Refugee Resettlement (ORR) expanded the child advocate program from two locations to five and added three more locations in fiscal year 2016. The child advocate program—operated by a contractor—was developed in 2004 to promote the best interests of especially vulnerable unaccompanied children in ORR custody. Advocates meet with children regularly and develop recommendations regarding their care and custody. Approximately 336 children were assigned an advocate in fiscal year 2015—97 of them in the three new locations. ORR expects the contractor to provide advocates to an increasing number of children in locations with larger numbers of children in ORR custody, and plans to monitor progress through monthly reports from the contractor. Children are referred to the program primarily by shelter staff (care providers), who are expected to use a set of criteria established by ORR to determine eligibility. Once the program contractor receives a referral, it decides whether an advocate is available to work with the child and then sends a recommendation to ORR to officially appoint an advocate. However, ORR does not receive a copy of referrals that the contractor is unable to serve. Further, GAO's data analysis showed, and the program's contractor reported, inconsistent referral practices. Contrary to federal internal control standards, ORR does not monitor referrals by care providers or the contractor's decisions about which children it serves. As a result, ORR cannot know whether eligible vulnerable children are overlooked. Stakeholders GAO interviewed said the advocate program gives children a voice during the immigration process and aids decision making regarding their care and custody. However, the contractor said its efforts are hampered by ORR's information sharing policies. GAO found that from fiscal years 2012-2015, more than 70 percent of the 493 recommendations made by advocates were adopted by ORR, immigration courts, and others. However, the contractor said ORR does not provide it with some key information on children. For example, it does not receive significant incident reports, which describe behavioral incidents while in ORR care, past abuse or neglect, or other concerns.
The William Wilberforce Trafficking Victims Protection Reauthorization Act of 2008 states that child advocates "shall be provided access to materials necessary to effectively advocate." The contractor said that creating recommendations without complete information limits its effectiveness. ORR officials told GAO that they are considering providing the contractor with copies of all significant incident reports and other key information, but as of April 6, 2016, the policy had not changed. GAO recommends that ORR improve its efforts to monitor care provider referrals and contractor decisions, and ensure that the contractor has timely access to key information on the children. HHS agreed with GAO's recommendations.